
Jensen Huang's latest podcast transcript: The future of NVIDIA, the development of embodied intelligence and agents, the explosion of inference demand, and the public relations crisis of artificial intelligence

Mar 21, 2026 20:55:40


Video Title: Jensen Huang: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis
Video Author: All-In Podcast
Translation: Peggy, BlockBeats

Editor's Note: As the AI narrative continues to heat up, the focus of market discussions is shifting from "how powerful the models are" to "how the systems are implemented." Over the past two years, the industry has experienced breakthroughs in large model capabilities, a race for training computing power, and the expansion of generative applications. However, as these stages gradually become consensus, new questions arise: when AI is no longer just answering questions but starts executing tasks, embedding into enterprise processes, and entering the physical world, what are the underlying conditions that support its continued advancement?

This article features excerpts from a conversation on the well-known tech podcast All-In Podcast. As one of the most influential investor podcasts in Silicon Valley, the show is co-hosted by four investors who have been active on the front lines, known for their in-depth discussions on technology, business, and macro trends.

The four hosts of the show are:

·Jason Calacanis, an early internet entrepreneur and angel investor, widely known for investing in companies like Uber and Robinhood;

·Chamath Palihapitiya, founder of Social Capital and former Facebook executive, who has invested in several tech companies including Slack and Box;

·David Sacks, partner at Craft Ventures, a member of the "PayPal Mafia," founder of Yammer, which was sold to Microsoft for about $1.2 billion, and an early investor in Airbnb and Uber;

·David Friedberg, founder of The Production Board, focusing on investments in agriculture, climate, and life sciences, and founder of The Climate Corporation (later acquired by Monsanto).

This episode's guest is Jensen Huang, co-founder and CEO of NVIDIA, regarded as one of the key drivers in the current wave of AI infrastructure.

From left to right: David Friedberg, Chamath Palihapitiya, David Sacks, Jensen Huang, Jason Calacanis

The entire interview can be summarized in three layers.

First, AI infrastructure is changing. In the past, the market's understanding of AI was largely based on stronger GPUs and more data centers. However, Huang wants to emphasize that future competition will no longer be just about individual chips, but about entire systems. As inference demand rises, model varieties increase, and agents begin to handle more complex tasks, AI computing is transitioning from a relatively singular model to a more complex and specialized system collaboration. NVIDIA is thus attempting to shift its role from a chip company to a builder of "AI factories."

Second, AI is moving from "generating content" to "completing tasks." This is the most critical thread in this interview. ChatGPT has allowed the public to intuitively feel the capabilities of AI for the first time, but in Huang's view, the real change is that AI is beginning to enter workflows in the form of agents: it is not just answering questions but can invoke tools, break down tasks, and collaborate to truly get things done. Consequently, users' willingness to pay for AI will shift from "getting an answer" to "getting a result." This implies greater inference demand, higher system complexity, and potentially rewriting the ways software development, organizational management, and knowledge work are conducted.

Finally, AI is extending from the digital world to the real world. In the interview, whether discussing autonomous driving, robotics, healthcare, digital biology, or Huang's mention of Physical AI, they essentially point to the same trend: the value of AI is not only reflected on screens but will increasingly manifest in factories, hospitals, cars, terminal devices, and daily life. However, this also means that the challenges AI will face next will not only be technical but also include supply chains, policies, regulations, manufacturing capabilities, and geopolitical complexities. In other words, the next round of AI expansion will be a truly industrialization process.

From this perspective, what is most worth noting in this conversation is not a specific product or an optimistic number, but a judgment that Huang repeatedly conveys: AI is transitioning from the "model era" to the "system era." Future competition will not just be about whose model is larger or whose computing power is stronger, but about who understands the industry better, who can embed AI deeper into real processes, and who can organize these capabilities into a runnable and scalable system.

This also expands the scope of discussion beyond NVIDIA itself. The real question it seeks to answer is: as AI gradually becomes infrastructure, how will the next round of industrial restructuring unfold, and where will new value be created?

TL;DR

·AI infrastructure is transitioning from the "single GPU" model to a decoupled architecture. Different computing tasks will be collaboratively completed by GPUs, CPUs, network chips, and inference chips such as Groq's LPUs.

·NVIDIA is transforming from a GPU company to a complete system provider, an "AI factory company." It sells the entire infrastructure rather than a single chip.

·The key to measuring AI costs is not the cost of data centers but the cost of tokens and throughput efficiency. More expensive systems may actually produce cheaper tokens.

·AI is moving from generative models to the Agent era. Users are willing to pay for "getting things done" rather than just getting answers.

·Computing demand is exploding. From generation to inference to agents, it may have expanded over 10,000 times in a short period and is still accelerating.

·Future software development will change. Engineers will no longer just write code but will define problems, design architectures, and collaborate with agents.

·In the long run, the biggest opportunities lie in deep specialization in vertical fields rather than in general models themselves. Who understands the industry better will have a competitive moat.

Interview Transcript

Jason Calacanis (notable angel investor | All-In Podcast host | early investor in Uber):
This week is a special episode. We let our regular weekly show "make way" for this, and we usually only do this for three types of people: President Trump, Jesus, and Jensen Huang (founder and CEO of NVIDIA). As for how to rank these three, you can decide for yourselves. Your momentum has been incredible lately, and this GTC was very successful.

Jensen Huang (CEO of NVIDIA):
The whole industry came. Almost all tech companies and AI companies were there.

Jason Calacanis:
It's unbelievable, truly extraordinary. One of the most significant releases in the past year is Groq. When you acquired Groq, did you realize how much this would make Chamath "unbearable"?

Note: Groq is not Grok. The former is a company that makes AI inference chips and inference clouds, while the latter is a chatbot from xAI. At the end of 2025, Groq reached a non-exclusive inference technology licensing agreement with NVIDIA, with the official transaction amount undisclosed; however, there were reports and speculations ranging from $17 billion to $20 billion. By GTC 2026, Huang further showcased the inference system integrated into the NVIDIA platform based on Groq technology.
The Chamath mentioned here is Chamath Palihapitiya (founder of Social Capital | former Facebook executive | All-In host). He is one of the four hosts of All-In and was an early investor in and board member of Groq, so when the major deal between NVIDIA and Groq surfaced, it was seen as another big win for him.

Jensen Huang:
I had a vague premonition.

Jason Calacanis:
We have to deal with him every week.

Jensen Huang:
I know. You all have to accompany him through a full six-week closing period.

Jason Calacanis:
That's right.

From GPU Company to "AI Factory" Company

Jensen Huang:

In fact, many of our strategies are announced years in advance at GTC. Two and a half years ago, I introduced the operating system for AI factories, called Dynamo.

You know, a dynamo is a device invented by Siemens that converts mechanical energy, such as water power, into electrical energy; it drove factory systems during the last industrial revolution. So I think this name is very suitable as the name for the "factory operating system" in the next industrial revolution. One of the core technologies in Dynamo is decoupled inference.

Jason Calacanis:

Jensen, I know you understand technology very well. Come on, define it. I don't want to steal your thunder.

Jensen Huang:

Thank you. Decoupled inference starts from the fact that the entire inference processing pipeline is extremely complex, possibly the most complex type of computing problem today.

Its scale is astonishing, containing a large number of different forms and scales of mathematical computations. Our idea is to break the entire processing flow apart, allowing one part to run on one type of GPU and another part to run on another type of GPU. Furthermore, this also made us realize that perhaps decoupled computing itself is a reasonable direction: we can fully enable different types and natures of computing resources to work together.

The same thinking later guided us to Mellanox. You see today, NVIDIA's computing is already distributed across GPUs, CPUs, switches, vertical scaling switches, horizontal scaling switches, and network processors. Now, we also want to add Groq.

Our goal is to place the right workloads on the right chips. In other words, we have evolved from a GPU company to an AI factory company.

David Sacks (partner at Craft Ventures | former PayPal COO | All-In host):

To me, this is probably the most important insight. What you are seeing is a fundamental "decoupling." In the past, there was only the option of GPUs, but now more and more different computing forms are emerging, and these choices will coexist in the future.

You mentioned one point on stage that I think everyone doing high-value inference should listen to carefully: you said that about 25% of the space in data centers should be allocated for Groq's LPU.

Note: LPU stands for Language Processing Unit. This is a category of chip proposed by Groq, primarily focused on inference rather than training.

Jensen Huang:

Yes, in data centers, Groq could account for about 25% of the Vera Rubin system.

Note: Vera Rubin is NVIDIA's next-generation AI platform architecture. It is not a single chip but a system-level infrastructure platform aimed at AI factories.

David Sacks:

Can you talk about how the industry currently views this direction? Essentially, you are building the next generation of decoupled architecture: prefill, decode separation, and the inference process being split. How do you think people will react?

Jensen Huang:

Let's take a step back. The reason we added this capability to the system is that the entire industry has shifted from processing large language models to Agentic Processing.

When you run an agent, it will access working memory, long-term memory, and invoke tools, which puts a lot of pressure on storage. You will also see agents collaborating with agents. Some agents use large models, while others use small models; some are diffusion models, and some are autoregressive models. In other words, within this data center, there will be various completely different types of models coexisting. We built Vera Rubin to handle this extreme diversity of workloads.

So, in the past, we were a company with "one rack," and now we have added four types of racks. In other words, NVIDIA's TAM, or total addressable market, has suddenly expanded by about 33% to 50%.

A large portion of this new 33% to 50% will be storage processors, namely BlueField; a part, which I personally hope will be a significant portion, will be Groq processors; and another part will be CPUs; of course, there will also be many network processors. All of these combined will ultimately run the "new type of computer" in the AI revolution, which is agents. It is the operating system of modern industry.

Chamath Palihapitiya (founder of Social Capital | former Facebook executive | All-In host):

What about embedded applications? For example, if my daughter's teddy bear wants to talk to her, what would be inside? A custom ASIC? Or will there be a broader TAM in edge and embedded scenarios, with different tools for different scenarios?

Note: ASIC stands for Application-Specific Integrated Circuit, and TAM stands for Total Addressable Market.

Jensen Huang:

We believe there are actually three computers in this question.

The first, at the largest scale, is used to train AI models, develop AI, and create AI.

The second is the computer used to evaluate AI. For example, look around; there are robots and cars everywhere. You must first place them in a virtual environment that can represent the physical world for evaluation. In other words, this software itself must comply with the laws of physics. We call this system Omniverse.

The third is the computer deployed on the edge, which is the robot computer. It can be an autonomous vehicle, a robot, or even a small teddy bear.

For devices like teddy bears, one very important direction we are working on is turning telecom base stations into part of AI infrastructure. This way, the entire $2 trillion telecom industry will gradually become an extension of AI infrastructure. So, radio equipment will become edge devices, factories will become edge devices, and warehouses will too.

In summary, all three types of foundational computers are essential.

David Friedberg (founder of The Production Board | All-In Podcast host):

Jensen, I felt last year that you saw this coming before anyone else. You said the growth in inference demand wouldn't just be 1,000 times.

Jensen Huang:

Did I dig my own grave?

David Friedberg:

But it could grow 1 million times? 1 billion times? Right?

I think many people thought that was too exaggerated at the time, because the whole world was still focused on scaling training. But now you see, inference has truly exploded, and workloads are starting to become inference-constrained. You have now released an "inference factory" whose throughput is ten times that of the next-best alternative.

But if you look at external discussions, many people will say: your inference factory will cost $40 billion to $50 billion, while alternatives like custom ASICs, AMD, etc., only cost $25 billion to $30 billion, so you will lose market share.

So why don't you just tell us: what exactly do you see? How do you view market share? Is it worth it for these customers to pay nearly double the premium?

Why More Expensive Systems Can Produce Cheaper Tokens

Jensen Huang:

The most important point, the core point is: do not equate the price of the factory with the price of tokens, nor should you equate it with the cost of tokens.

It is very likely, and I can prove, that a $50 billion factory can produce the lowest-cost tokens. The reason is that our efficiency in generating these tokens is astonishingly high, up to 10 times higher.

You see, the difference between $50 billion and $20 billion is largely just land, power, and the shell of the factory. Besides that, you would still need to buy storage, networking, CPUs, servers, cooling systems. So whether the GPU itself is at full price or half price will not drop the total cost from $50 billion to $30 billion. Pick any number you like; realistically, it might only drop from $50 billion to $40 billion.

And if a $50 billion data center has ten times higher throughput, then that price difference is actually not significant.
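Huang's argument reduces to simple per-token arithmetic: what matters is amortized capital cost divided by throughput, not the capital cost alone. A minimal sketch of that arithmetic, using only the hypothetical figures from the conversation (the function name, the 5-year amortization window, and the baseline throughput are illustrative assumptions, not real pricing):

```python
def cost_per_token(capex_usd: float, tokens_per_second: float,
                   lifetime_years: float = 5.0) -> float:
    """Amortized capital cost per token over the system's assumed lifetime."""
    seconds = lifetime_years * 365 * 24 * 3600
    return capex_usd / (tokens_per_second * seconds)

# Illustrative figures echoing the conversation: a $50B factory with
# 10x the throughput of a $30B alternative. Baseline rate is arbitrary.
BASE_THROUGHPUT = 1e9  # tokens per second

expensive = cost_per_token(50e9, 10 * BASE_THROUGHPUT)
cheaper = cost_per_token(30e9, BASE_THROUGHPUT)

# The pricier factory still produces cheaper tokens: 50/10 vs 30/1.
assert expensive < cheaper
print(f"ratio: {cheaper / expensive:.1f}x")  # prints "ratio: 6.0x"
```

Under these assumed numbers the lower-capex factory's tokens cost six times more, which is the sense in which "even free chips would not be cheap enough."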

Jason Calacanis: Got it.

Jensen Huang:

That's why I always say: even for many chips, if you can't keep up with the technological frontier and the speed at which we are advancing, then even if the chips are given away for free, they still won't be cheap enough.

David Sacks:

I want to ask a more macro strategic question. You are now running the most valuable company in the world. Next year's revenue could exceed $350 billion, with free cash flow of $200 billion, and it is still compounding at a crazy rate.

How do you make decisions? How do you gather information? Everyone knows about your famous email system, but how do you actually form intuition, shape the market, decide where to double down, where to scale back, and where to enter new fields? How does that information get to you? How do you make the final judgment?

Jensen Huang:

That is the job of a CEO.

David Sacks:

Right.

Jensen Huang:

Our responsibility is to define the vision and define the strategy. Of course, we draw inspiration and information from the outstanding computer scientists, technical experts, and countless excellent employees in the company, but ultimately, shaping the future is our responsibility.

One of the criteria is: is this thing ridiculously difficult? If it is not difficult enough, we should stay away from it. The reason is simple: if something is easy to do, there will definitely be a lot of competitors.

Is it something that has never been done before and is ridiculously difficult? Does it happen to mobilize our company's unique "superpowers"? So I have to look for that intersection: it must meet these criteria simultaneously.

And you also have to know that doing such things will definitely come with a lot of pain and torment. No great invention has ever happened because it was too simple and succeeded easily on the first try.

If something is super difficult and has never been done before, it basically means you will go through a lot of pain and suffering. So you better enjoy the process.

David Sacks:

Can you pick three or four more "long-tail" businesses to talk about? For example, you mentioned data centers in space, ADAS and cars, and the biological direction. Give us a sense: when will these curves start to turn upward? How do you view these long-term businesses?

Note: ADAS stands for Advanced Driver Assistance Systems.

Jensen Huang:

Of course. Physical AI is a huge category. As I mentioned earlier, we have three computing systems and all the software platforms built on them. Physical AI is the first real opportunity for the tech industry to serve a $50 trillion industry that has seen almost no deep technological transformation in the past. To do this, we must reinvent all the necessary technologies.

I have always believed this is a 10-year journey. We started ten years ago, and now we are finally seeing it begin to turn upward. For us, this has already become a multi-billion dollar business, and the current scale is approaching $10 billion annually. So it is already a significant business and is growing exponentially. That is the first point.

The second direction is that I believe we are very close to the ChatGPT moment in digital biology.

We are gradually learning how to represent and understand genes, proteins, and cells. We already know how to handle chemicals. Therefore, being able to represent and understand the basic components of biology and their dynamic behaviors is something I believe will happen within two to five years. Within five years, I am very confident that digital biology will have a huge impact on the entire healthcare industry.

These are all very important directions. Agriculture is also one of them.

Chamath Palihapitiya:

It is already happening.

Jensen Huang:

Without a doubt.

Jason Calacanis:

I want to shift the topic from data centers back to the desktop. The company was largely built on enthusiasts, gamers, and graphics card users. Today, you mentioned Claude Code, OpenClaw, and the revolution brought by agents in front of about 10,000 viewers.

Especially among the enthusiast community, we see a lot of energy and innovation actually exploding there, with many breakthroughs happening on the desktop. You also released a desktop device this time; I remember it was the Dell 60800? This is a very powerful workstation that can run local models and has 750GB of memory. Now Mac Studio is sold out everywhere. Our company is now fully transitioning to OpenClaw. Friedberg is using it, Chamath is using it, and everyone is obsessed.

What does this open-source agent movement that started with enthusiasts and the desktop open-source ecosystem mean to you? Where is it headed?

The Age of Agents Has Arrived: Why Computing Demand Will Expand by Another 10,000 Times

Jensen Huang:

First, let's take a step back. Over the past two years, we have actually seen three turning points.

The first is generative AI. ChatGPT brought AI into the public eye, making everyone aware of its importance. In fact, this technology was already clearly there months before ChatGPT appeared. It was only when ChatGPT provided a user-friendly interface that generative AI truly exploded.

Generative AI, as you know, generates tokens for both internal and external consumption. Internal consumption is essentially "thinking," which further drives the development of inference.

Next, more grounded capabilities based on real information began to emerge, allowing AI not only to answer questions but to provide more reliable and useful answers. You also began to see OpenAI's revenue and business model rise sharply.

Then, the third turning point was initially only visible within the industry, which is Claude Code. This is the first truly useful agentic system, highly revolutionary.

But before Claude Code, this capability was mainly aimed at enterprises, and many outsiders had never seen it. Until OpenClaw brought "what AI agents can actually do" into the public eye.

Thus, the cultural significance of OpenClaw lies in the fact that it truly made the public aware of the capabilities of agents for the first time.

The second reason it is important is that OpenClaw is open.

More critically, it constructs a completely new computing model, almost reinventing computing itself. It has a memory system: a scratchpad serves as short-term memory, and the file system as long-term storage. It has scheduling: it can run cron jobs, spawn new agents, break down tasks, perform causal reasoning, and solve problems. It has an I/O subsystem that can take input, produce output, and connect to channels like WhatsApp. And it has an API for running different types of applications, known as skills.

These four elements essentially define a computer. So, we now actually have for the first time: a personal AI computer.

And it is open-source, truly open-source, and can run almost anywhere. This is the blueprint for modern computing. In a sense, it has already become the operating system of modern computing and will be ubiquitous in the future.
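The four elements Huang lists (memory, scheduling, skills as an API, and I/O) map onto the parts of a conventional computer. A purely illustrative sketch of that "agent as computer" framing; every name here is hypothetical and none of it reflects OpenClaw's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Toy model of the 'agent as computer' idea from the conversation."""
    scratch: dict = field(default_factory=dict)    # short-term (working) memory
    long_term: dict = field(default_factory=dict)  # stands in for the file system
    skills: dict = field(default_factory=dict)     # the API: callable tools

    def register_skill(self, name, fn):
        """Install a callable tool, analogous to adding a 'skill'."""
        self.skills[name] = fn

    def run(self, task, payload):
        """Dispatch a task to a skill and record the result in both memories."""
        result = self.skills[task](payload)
        self.scratch[task] = result
        self.long_term.setdefault("log", []).append((task, result))
        return result


agent = Agent()
agent.register_skill("summarize", lambda text: text[:10] + "...")
out = agent.run("summarize", "a very long document body")
assert out == "a very lon..."
```

The point of the sketch is structural: once a system has addressable memory, a scheduler, an I/O path, and an extensible instruction set, it behaves like a general-purpose computer, which is why Huang calls it "a personal AI computer."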

Of course, we also have to help it solve one thing: as long as you have agentic software, it may access sensitive information, execute code, and communicate externally. So we must ensure that: all of this must be governed, must be secure enough, and must have strategic constraints, allowing these agents to possess two of the three capabilities but not all three simultaneously.

In governance, we have also made contributions. Peter Steinberger is here today. We have many great engineers working with him to help make this system safer and more robust, ensuring it can protect privacy and security.

Chamath Palihapitiya:

Jensen, has this paradigm shift made many AI regulatory bills passed across the U.S. seem outdated?

Many proposals were originally based on old models. Can you talk about how quickly this paradigm shift has rendered a large number of existing regulatory ideas ineffective? AI regulation has now become a very hot topic in U.S. politics.

Jensen Huang:

In this regard, we must always stay ahead of policymakers, and you have done very well in this area. We must proactively approach them and tell them what stage technology has reached, what it is, and what it is not. It is not a living entity, not an alien, and it has no consciousness. It is computer software.

Also, we often hear the statement "we completely do not understand this technology." But that is not true; we actually understand a lot. So first, we must continuously provide policymakers with real information; we should not let doomsday theories and extremism shape their understanding of this technology.

At the same time, we must also acknowledge that technology is developing rapidly and not let policy run too far ahead of technology. From a national perspective, my biggest concern is: the greatest national security risk for the U.S. in AI is not AI itself, but that other countries are adopting AI while we, due to anger, fear, or paranoia, are unwilling to let our industries and society embrace AI.

So, what I am truly most worried about is that AI is not spreading fast enough in the U.S.

David Sacks:

Let me ask a follow-up. If you were sitting in the boardroom of Anthropic, watching their turmoil with the "Department of War," what would you think? This actually continues the point you just made: people do not know how to understand AI, leading to another layer of resentment, fear, and distrust. If it were you, what would you advise Dario and his team to do differently to change today's outcomes and public perception?

Jensen Huang:

First, I want to say that Anthropic's technology is remarkable. We ourselves are significant users of Anthropic's technology. I greatly admire their emphasis on safety, their commitment to a safety culture, and their technical excellence in advancing this work; it is truly impressive.

Moreover, they want to remind the public of the limits of this technology's capabilities, which I think is a good thing. We just have to realize that the world has a spectrum: reminders are good, but scaring people is not so good.

Jason Calacanis: Right.

Jensen Huang: Because this technology is too important for us. I think predicting the future is certainly possible, but we need to be more cautious and humble. Because in fact, we cannot fully predict the future.

If we throw out some very extreme, catastrophic judgments without evidence that these things will actually happen, the harm it causes may be greater than people imagine.

And now, we are already the leaders in the tech industry. In the past, no one listened to us, but now it is different. Technology has deeply embedded itself in the social structure, is an extremely important industry, and is highly related to national security. Every word we say is important.

So I think we must be more cautious, restrained, balanced, and thoughtful.

David Friedberg:

I would nominate you to do this. The public support for AI in the U.S. is only 17%. We have already seen what happened in the nuclear energy sector: we basically shut down the entire nuclear industry, and now China is building 100 fission reactors while the U.S. has none. Now we are also starting to hear voices about pausing data centers and the like. So I think we must be more proactive.

But I want to return to what you said about the agent explosion happening within the company: efficiency improvements, productivity increases. Now many people are debating ROI, right? You and I entered this year with the biggest question: will revenue appear? Will revenue expand like intelligence itself? Then we saw something akin to an "Oppenheimer moment": Anthropic's revenue reached $5 billion to $6 billion in February alone.

Note: The "Oppenheimer moment" refers to J. Robert Oppenheimer, the head of the Manhattan Project (the secret research project that developed the atomic bomb during World War II). The first detonation of an atomic bomb in 1945 symbolizes a critical point where technological breakthroughs coexist with risks, and it is now often used to refer to key technological moments with irreversible impacts.

How do you see the trend moving forward? You mentioned today that Blackwell and Vera Rubin have already shown visibility for trillion-dollar demand in the coming years. Coupled with the momentum shown by Anthropic and OpenAI, do you think we have already reached that curve, and will we see revenue accelerate like intelligence?

Jensen Huang:

I will answer from a few angles. Look at this audience; Anthropic and OpenAI are indeed here. But in reality, 99% of the AI represented here is neither Anthropic nor OpenAI. The reason behind this is that AI itself is extremely diverse.

I would say the most popular, as a category, is of course OpenAI. The second is open models, open-weight models and the entire broad open ecosystem, and there is a significant gap between it and the third, which is Anthropic.

This shows how large the scale of all these AI companies combined is, so we must first recognize this.

Returning to computing demand. When we move from generative AI to inference, the required computing demand increases by about 100 times; when we move from inference to agentic, the computing demand may increase another 100 times. In other words, in just two years, computing demand has likely increased by about 10,000 times. At the same time, people will pay for information, but what they are truly willing to pay for is the work results.

David Friedberg: Right.

Jensen Huang:

Having a conversation with a chatbot and getting an answer is certainly good. Helping me do research is also great. But what truly makes me willing to spend money is when it helps me get things done. And that is exactly where we are now; agentic systems are actually completing work. They are helping our software engineers finish their tasks.

So think about it: on one side, there is 10,000 times more computing, and on the other side, there is possibly 100 times more consumption demand. Moreover, we have not even truly started large-scale expansion. We are definitely on the path to 1 million times growth.

Jason Calacanis:

I think this leads perfectly to a question: how many people does your company have?

Jensen Huang:

We have 43,000 employees, about 38,000 of whom are engineers.

Jason Calacanis:

We often discuss a topic on the podcast: oh my, the token usage in our company is skyrocketing. Some even ask, "How many tokens can I get when I join a company?" because they want to become efficient employees. I remember you mentioned in that two-and-a-half-hour keynote, which was really long but great.

Jensen Huang:

Thank you. It could have been shorter.

Jason Calacanis:

You mentioned that the token usage limit for each engineer could be around $75,000. Does that mean NVIDIA's engineering team spends $1 billion or $2 billion on tokens each year?

Jensen Huang:

That's how we think about it. Let me give you a thought experiment: suppose you hired a software engineer or AI researcher with an annual salary of $500,000, which is quite common for us.

At the end of the year, I ask him, "How much did you spend on tokens this year?" If he says "5,000 dollars," I would be blown away, really. If an engineer with a $500,000 annual salary consumes tokens worth less than $250,000 in a year, I would be very concerned. This is no different from a chip designer saying, "I decided to only use paper and pencil; I don't need CAD tools."

Jason Calacanis:

This is truly a paradigm shift. Your understanding of these top employees almost reminds me of what is taught in MBA classes about LeBron James: he spends $1 million a year maintaining his body, so he can still play at 41. Why shouldn't these top knowledge workers have "superhuman abilities"?

Jensen Huang:

Exactly.

Jason Calacanis:

If we push this trend forward two or three years, what will the efficiency of these top employees in NVIDIA look like? What can they accomplish?

Jensen Huang:

First, the old notion of "this is too difficult" will disappear. The thought that "this will take too long" will also disappear. The idea that "we need a lot of people" will vanish.

It's like during the last industrial revolution, no one would say, "This building looks too heavy." Nor would anyone say, "That mountain is too big." All thoughts about "too big, too heavy, too time-consuming" will be dissolved.

David Sacks:

What remains is only creativity. What can you come up with?

Jensen Huang:

Absolutely correct. In other words, the future question will become: how do you collaborate with these agents?

Essentially, this is a completely new way of programming. In the past, we wrote code; in the future, we will write ideas, architectures, and specifications; we will organize teams; we will define evaluation criteria, telling the system what constitutes good, bad, and excellent results; we will iterate and brainstorm with it.

That is what you will truly be doing. I believe every engineer in the future will have 100 agents.

Jason Calacanis:

Returning to the PR issue. Entrepreneurs like David Friedberg, using your technology and AI at Ohalo, are really doing very tangible things: increasing food production, improving the supply of high-quality calories. Friedberg, how much do you think this can reduce costs? What impact will this vision have on what you are doing?

David Friedberg:

We just did zero-shot genomic modeling, and it worked. In a moment like that, you are genuinely amazed. And this happened against the backdrop of people replacing their entire enterprise software stack overnight.

I did something myself: in 90 minutes, I replaced the entire software stack and a bunch of workflows. I started at 10 PM on Sunday and finished and deployed everything before 11:30 PM.

After I completed it as CEO, I asked all my management team members to do the same exercise over the weekend. By Monday, the result we saw was: it was done.

To put it in more technical and scientific terms, we used auto research and a batch of data to accomplish something in 30 minutes that, done the traditional way, would have been a PhD-level achievement, possibly taking seven years, and could have become one of the most respected doctoral works in the field, worthy of publication in Science.

Instead, we simply downloaded auto research from GitHub onto a desktop computer, fed in the newly acquired batch of data, and it finished in 30 minutes. Everyone's expression changed at that moment. The potential it unlocked is truly incredible.

So I believe this acceleration is expanding everyone's possibilities in unprecedented ways.

But back to the auto research point: what do you think? Achieving such results in a weekend with 600 lines of code, and being able to run locally and handle so many different types of datasets.

Does this indicate that we are still in an extremely early stage in both algorithm optimization and hardware optimization?

Jensen Huang:

The reason OpenClaw is so amazing is that it perfectly coincides with the breakthrough of large language models; it appeared at just the right time.

To a large extent, if it weren't for Claude, GPT, and ChatGPT reaching today's level, Peter probably wouldn't have made this thing. Because the models have indeed reached a very high level.

Secondly, it brings a new capability: letting these models invoke the tools we have created over the years. For example, browsers and Excel; in chip design, Synopsys and Cadence; and Omniverse, Blender, Autodesk, and so on. These tools will continue to be used in the future.

Now some people say that the enterprise IT software industry will be destroyed. But let me give you another perspective: the scale of the enterprise software industry has always been limited by "how many butts are in how many seats," that is, the number of seats. In the future, it will welcome 100 times more agents. These agents will query SQL databases, query vector databases, and drive Blender and Photoshop.

The reason is simple: first, these tools already work very well; second, they are essentially "intermediary interfaces" between us and machines. Ultimately, when the work is done, the results must come back to me in a form I can control, and I know how to operate these tools.

So I hope everything will ultimately return to Synopsys, back to Cadence, because that is where I can control and do "deterministic standard" verification.

Note: Synopsys and Cadence are two important EDA (Electronic Design Automation) software companies that all chip companies (NVIDIA, Apple, AMD) basically rely on.
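Note: the pattern Huang describes, where an agent reasons freely but the final answer is produced through a deterministic tool a human already knows how to verify, can be sketched as follows. The database schema and the proposed query are invented for illustration:

```python
# Hedged sketch: the agent proposes a query, but the auditable answer comes
# from a deterministic tool (here SQLite), not from the model itself.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 32.5)])

def run_tool(sql: str) -> list:
    """Deterministic 'intermediary interface': execute the agent's proposal."""
    return db.execute(sql).fetchall()

# An agent would generate this query; a human reviews both query and result.
proposed_query = "SELECT SUM(amount) FROM orders"
print(run_tool(proposed_query))  # [(42.5,)]
```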

The Next Battlefield for AI: Open Source, Verticalization, and Global Diffusion

David Sacks:

I want to ask a question about open source. Now we have closed-source models that are excellent; there are also open-weight models, many of which are astonishingly strong, especially from China.

Two days ago, you were probably busy going on stage and missed it, but on Subnet 3 of a crypto project called BitTensor, someone completed a training task: they trained a 4-billion-parameter Llama model in a completely distributed manner. A group of random people contributed computing power, yet they managed to statefully coordinate the entire training run. I think this is technically crazy, because the participants were completely randomly dispersed.

Jensen Huang:

It's like Folding@home of our time.

Note: Folding@home is a distributed computing project that allows global volunteers to contribute computer power for protein simulations and medical research.

David Sacks:

Exactly. So how do you see the endgame of open source? Do you see architectures decentralizing and computing power decentralizing, thereby supporting open weights and a completely open-source path, making AI truly widely accessible?

Jensen Huang:

I believe we fundamentally need both: first, models as first-class commercial products, proprietary products; second, models existing in open-source forms.

This is not an either/or choice; both must exist. There is no doubt about it. The reason is that a model is primarily a technology, not an end product or a service.

For the vast majority of users, at that horizontal level of general intelligence, I actually do not want to fine-tune a model myself. I would rather keep using ChatGPT, Claude, Gemini, or X. They each have their own personality, and I choose depending on my mood and the problem I want to solve. So this part of the industry will develop very well; it will be very prosperous.

However, all the domain knowledge and expertise in these industries must be captured in a way those industries can control, and that can only come from open models. The open-model ecosystem is already very close to the frontier, and we are investing heavily in it.

To be honest, even if open models fully catch up to the frontier, I still believe that models as a service, world-class commercial models, will continue to thrive.

Jason Calacanis:

Almost every startup we invest in now starts with open source and then moves to proprietary models.

Jensen Huang:

Right. And the beauty of it is: as long as you have an excellent router, on day one, every day, you can access the best models in the world. At the same time, this gives you time to reduce costs, fine-tune, and specialize. So you start with world-class capabilities and then gradually build your own moat.
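Note: a minimal sketch of the "router" idea Huang describes here: every request defaults to the best available general model, and task types you have learned to serve cheaply get peeled off to a specialized model over time. All model names and the routing table are invented for illustration:

```python
# Hypothetical routing table, mapping well-understood task types to a
# cheaper fine-tuned model; everything else goes to a frontier model.
ROUTES = {
    "summarize": "in-house-finetune-v1",
    "translate": "in-house-finetune-v1",
}
DEFAULT_MODEL = "frontier-model-api"  # best general model, day one

def route(task_type: str) -> str:
    """Return the model that should serve this task type."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

print(route("summarize"))     # in-house-finetune-v1
print(route("legal-review"))  # frontier-model-api
```

The design choice is that the router starts with world-class capability on day one, and specialization is added incrementally without changing the calling code.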

David Friedberg:

Jensen, I want to ask a geopolitical question. Of course, no one wants the U.S. to win the global AI race more than you. But a year ago, during Biden's administration, the diffusion rule was actually preventing U.S. AI technology from spreading globally.

Now the new administration has been in power for a year. How would you rate it? Regarding the global diffusion of AI, are we now an A, B, or C? What is being done well, and what is not?

Jensen Huang:

First of all, President Trump wanted American industries to lead, wanted the U.S. tech industry to lead, wanted the U.S. tech industry to win, and wanted American technology to spread globally, making the U.S. the richest country in the world. He wanted to achieve all of that.

But at this moment, NVIDIA's share of the world's second-largest market has gone from 95% to 0%. President Trump wants us to win that share back.

The first step is to obtain licenses for those companies we can sell to. Many companies have already submitted applications, and we have also applied for licenses on their behalf, and Commerce Secretary Lutnick has already approved some. Next, we have notified Chinese companies, many of which have already placed purchase orders with us. So we are now restarting the supply chain and sending out goods.

From a higher level, I think we should acknowledge one thing: when we cannot obtain micro motors, rare earth minerals, our national security is weakened; when we cannot control our communication networks, national security is weakened; when we cannot provide sustainable energy for the country, national security is also weakened. Each of these industries is a story I do not want the AI industry to repeat.

As we look to the future and ask what it looks like for the U.S. tech industry and the U.S. AI industry to truly lead globally, we must honestly say: AI models will not be monopolized by a single American company; that outcome would not be meaningful.

But we can fully envision: the U.S. tech stack, from chips to computing systems to platforms, being widely adopted globally. People around the world can build their own AI, public AI, private AI on top of this U.S. tech stack, serving their societies. I hope the U.S. tech stack can cover 90% of the world. I truly hope so.

Otherwise, if the final situation becomes like solar energy, rare earths, magnets, motors, communication devices, I would consider that a very bad outcome for U.S. national security.

Chamath Palihapitiya:

How closely are you monitoring global conflict situations now? How worried are you? For example, the Middle East could affect helium supply, which poses a potential supply chain risk for semiconductor manufacturing. How concerned are you about these issues? How much energy are you investing in this?

Note: Helium is crucial for semiconductor manufacturing; it is irreplaceable in key processes like lithography and inspection. As a non-renewable resource, its supply is highly concentrated, mainly relying on a few sources in the U.S., Qatar (Middle East), and Algeria (North Africa). If these upstream supplies are disrupted, it could directly impact the stable operation of chip production lines.

Jensen Huang:

First of all, regarding the Middle East, we have 6,000 families there. Many employees in the company are from Iran, and their families are still in Iran. So we have many families there.

The first thing is: they are very anxious, very worried, and very scared right now. We have been thinking about them and monitoring the situation closely. They will receive our full support. Some have asked me whether we will continue to stay in Israel given the current situation in the Middle East. My answer is: we will 100% stay in Israel. We will 100% support the families there. We will 100% continue to be in the Middle East.

Some have also asked, given the situation in the Middle East, do we still think it is worth expanding AI there? My view is: wars happen because everyone wants a more stable outcome. And I believe that after the war, the Middle East will be more stable than before. So if we were willing to consider it before the war, we should be even more serious about it after the war. So on this issue, I am also 100% invested.

We have three things we must do. First, we must quickly re-industrialize America, whether it is chip manufacturing plants, computer manufacturing plants, or AI factories.

Jason Calacanis:

How is the progress in this regard?

Jensen Huang:

The progress is very good. The reason we can advance at an astonishing speed in Arizona, Texas, and California is that we have received strategic support, friendship, and help from the Taiwanese supply chain. They are truly our strategic partners. They deserve our support, our friendship, and our generosity. They are also doing their utmost to help us accelerate the manufacturing process.

Second, we must diversify the manufacturing supply chain. Whether it is South Korea, Japan, or Europe, we need to spread the supply chain to make it more resilient. Third, while we enhance diversity and resilience, we must also maintain restraint and not apply unnecessary pressure.

Jason Calacanis:

You mean to be patient.

Chamath Palihapitiya:

What about helium? Many reports have mentioned this issue.

Jensen Huang:

I think helium could be a problem. But on the other hand, there are usually quite a few buffer stocks in the supply chain, and such systems generally leave some margin.

Jason Calacanis:

You have made significant progress in autonomous driving and released major news. You have added many partners, including Uber. Recently, I saw you in a video driving a Mercedes autonomously. You and Uber also announced that you would deploy more cars on the road with many manufacturers.

I understand your bet is that there will be an open platform similar to Android in the future, and you will play a key role in serving dozens of car manufacturers; on the other hand, there may be a closed system like iOS, such as Tesla or Waymo.

What is your strategic thinking? How will this chess game unfold? Because it feels like you are collaborating in some areas while competing in others, and your stack is very deep.

Jensen Huang:

First, we believe that everything that will move in the future will eventually achieve full or partial autonomy. Second, we do not want to build autonomous vehicles ourselves, but we want to empower every car company in the world to build autonomous vehicles.

So we have built three computers: a training computer, a simulation and evaluation computer, and an in-vehicle computer. We have also developed the safest driving operating system in the world.

At the same time, we have created the world's first autonomous driving system with reasoning capabilities. It can break down complex scenarios into simpler ones and navigate through them one by one, just like a reasoning model. This reasoning system is called Alpamayo, and it has achieved very impressive results.

We will do vertical optimization and horizontal innovation; then let each manufacturer decide for themselves. Do you just want to buy one of our computers? Like Elon and Tesla, they would buy our training system; or do you want to buy both the training system and the simulation system? Or do you want to work with us to integrate all three, even putting the vehicle-side computer into your car?

Our attitude has always been that we want to solve problems, but we do not insist that we must provide the only answer. No matter how you choose to cooperate with us, we are very happy.

David Sacks:

Following up on this question, I find it particularly interesting. You are essentially building a platform that allows a thousand flowers to bloom. But indeed, some flowers now want to go down, go to the bottom of the stack, and try to compete with you. Google has TPU, Amazon has Inferentia and Trainium, and almost everyone is working on their own "I can surpass NVIDIA" version, even though they are also your big customers.

How do you handle this relationship? What do you think will happen in the long run? What role will these products ultimately play in the entire ecosystem?

Jensen Huang:

This is a very good question.

First, we are the only true AI company. We build foundational models ourselves and are at the forefront in many areas. We construct every layer of the stack from top to bottom. We are also the only AI company in the world that collaborates with all AI companies.

They never show me what they are doing, but I always clearly tell them what I am doing. So our confidence comes from one point: we are very willing to compete on "whose technology is better." As long as we can continue to run fast, I believe that continuing to procure from NVIDIA will still be one of their most economical choices. I am very confident about this.

Second, we are the only architecture that can be deployed on all cloud platforms. This brings fundamental advantages. We are also the only architecture that can be taken down from the cloud and placed in local data centers, cars, any region, or even in space.

So there is actually a large portion of our market, about 40% of our business, where, if you do not have the CUDA stack and cannot deliver a complete AI factory, customers simply do not know how to work with you. They do not want to buy chips; they are building AI infrastructure. What they need is for you to come in with a complete stack, and we happen to have one.

So, surprisingly, if you look at it now, NVIDIA's market share is actually still increasing.

David Sacks:

What you mean is that these companies tried it out, and in the end, they found, "Oh my, this is too complicated," and then they came back? So your share continues to grow?

Jensen Huang:

There are several reasons for the growth in share.

First, our pace of advancement is very fast. Second, we have made everyone realize that the problem is not making chips but building systems, and such a system is extremely difficult to create. So their cooperation with us is still increasing.

Take AWS as an example; I remember they just announced yesterday that they plan to buy 1 million chips in the coming years. This is a very large procurement volume, and this does not even include the large number they have already purchased. We are certainly very happy about that.

Additionally, our share growth over the past few years has also been due to Anthropic coming in, Meta also coming in, and the growth of open models being astonishing, all of which are happening on NVIDIA.

So our share is increasing; on one hand, the number of models is increasing; on the other hand, these companies are increasingly moving out of the cloud and growing in regional deployments, enterprise scenarios, and industry edge scenarios.

And that whole market is very difficult to penetrate if you are just making an ASIC.

David Friedberg:

Relatedly, I want to ask, not delving into numerical details, but analysts seem not to believe you.

You say computing power could grow a million times, but the market consensus expects you to grow 30% next year, 20% the year after, and by 2029, what should be a year of explosive growth, only 7%. If you plug your TAM into those growth numbers, the implication is that your share will decline significantly.

So from what you see in the future order book, are there any signs that support this judgment?

Jensen Huang:

First of all, they fundamentally do not understand the scale and breadth of AI.

David Sacks:

Right, I feel that way too.

Jensen Huang:

Most people think AI is just a matter for those five super-large cloud vendors.

Jason Calacanis:

Right.

David Sacks:

There is also an orthodox investing logic that "the larger the scale, the harder it is to sustain growth." They have to go back and explain the model to their bank's risk committee; they cannot easily believe that five trillion can become fifteen trillion. They are willing to underwrite at most seven trillion; beyond that, they cannot accept it.

Jason Calacanis:

They cannot imagine a company with a $10 trillion market cap.

David Sacks:

Essentially, it is a self-preserving modeling; they do not dare to write in things that have never happened in history.

Jensen Huang:

Moreover, you must redefine what you are actually doing.

Recently, someone asked: Jensen, how could NVIDIA possibly exceed Intel's scale in the server market? The reason is simple: the entire data-center CPU market is about $25 billion a year. And we, as you know, can book roughly $25 billion in revenue in the time we have been sitting here chatting.

Jason Calacanis:

Nice.

Jensen Huang:

Of course, that is a joke.

Chamath Palihapitiya:

What is said on the podcast does not count as formal performance guidance.

Jensen Huang:

That's right, it does not count as performance guidance. But the key point is: how big you can grow depends on what you are actually building.

NVIDIA is not building chips; that is the first point. Second, just building chips is no longer sufficient to solve the problem of AI infrastructure; it is too complex. Third, most people's understanding of AI is too narrow, limited to the parts they see, hear, and discuss.

OpenAI is very powerful; it will be very large; Anthropic is also very powerful; it will also be very large. But AI itself will be larger than both of them combined. And what we serve is that entire larger part.

David Sacks:

Then explain the "space data center" business to ordinary people. How should it be understood compared to those large data centers on the ground?

Jensen Huang:

We are already in space.

David Sacks:

How should ordinary people understand this business?

Jensen Huang:

First, we should certainly do well with things on the ground; after all, that is where we are today. Second, we should also prepare for space. There is certainly plenty of energy in space; the problem is heat rejection. You cannot rely on conduction and convection as you do on the ground; you can only radiate heat away, which requires a very large surface area. That is not an unsolvable problem, and there is plenty of room in space, but the cost is still very high. We will explore it.
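Note: the radiative-cooling constraint Huang mentions follows from the Stefan-Boltzmann law, where the required radiator area scales as A = P / (εσT⁴). The power level, radiator temperature, and emissivity below are assumed values for illustration, not figures from the conversation:

```python
# Illustrative radiator sizing under the Stefan-Boltzmann law.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """One-sided radiator area needed to reject `power_w` of waste heat
    purely by radiation at temperature `temp_k`."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 1 MW of waste heat at an assumed 300 K radiator temperature:
area = radiator_area_m2(1e6, 300.0)
print(f"{area:,.0f} m^2")  # about 2,400 m^2
```

This is why even modest orbital compute implies very large radiator surfaces, as Huang notes.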

Moreover, we are already there. Our hardware has been radiation-hardened, and many satellites around the world are already running CUDA. They are doing imaging, image processing, and AI image analysis. This kind of work should be done in space, rather than sending all the data back to the ground for image analysis. So indeed, there is a lot of work that should be done in space.

At the same time, we will also continue to study what a data center in space should look like. This will take many years. That's okay; I have plenty of time.

Robotics, Healthcare, and the Future of Work: How AI Will Ultimately Enter the Real World

Jason Calacanis:

I want to follow up on healthcare.

As we reach a certain age, we start to think about lifespan and healthy lifespan. We all look pretty good; some may look even better. Jensen, I really don't know what your secret is. Is it anti-aging? What things should we not eat? You have to tell me privately about these.

From the perspective of building a healthcare system, where will this direction lead? What progress have we made?

I was just using Claude to analyze what these medical billing codes in the U.S. are all about. The U.S. spends twice as much as others, yet health outcomes seem to be only half.

From what I see, about 15% to 25% of the money is actually spent on the first visit to a general practitioner. To be honest, we all know that today, a large language model can already perform better and more consistently on the first visit.

So what is still lacking to break through regulations and allow AI to truly have a substantial impact on the entire healthcare system?

Jensen Huang:

We are mainly involved in several directions in healthcare.

The first is AI physics, which serves AI biology, using AI to understand and represent biology and its behaviors. This is very important in drug discovery.

The second is AI agents, used in scenarios like assisting diagnosis. OpenEvidence is a great example, and Hippocratic is also a great example. I really enjoy collaborating with these companies. I genuinely believe that agentic technology will fundamentally change the way we interact with doctors and the healthcare system.

The third part is physical AI.

Physical AI builds on the first part: AI physics uses AI to predict the physical world, and physical AI applies that understanding of physical laws in areas like robotic surgery. This area is already very active. In the future, every instrument you encounter in a hospital, whether ultrasound, CT, or any other device, will become agentic.

You can think of it as a security-hardened version of OpenClaw, which will be embedded in every instrument. So in many ways, these devices will directly interact with patients, nurses, and doctors in the future.

Jason Calacanis:

With so much investment in AI weaponry, I really hope we invest a bit more in AI paramedics and AI EMTs, to save lives rather than just take them.

This also leads us to the topic of robotics. You now have dozens of partners. The robotics field has gone through a strange period over the past ten or even twenty years—Boston Dynamics, Google acquiring a bunch of companies, and then selling or dismantling them. Everyone once thought that robotics was far from being truly usable.

But now, you and top entrepreneurs like Elon Musk are betting on it. Optimus looks incredible, and many companies in China are making rapid progress. How far are we from truly bringing robots into our lives, such as robot chefs, robot nurses, robot babysitters, and humanoid robots that can work in the real world?

Especially in China, they seem to be doing just as well as the U.S., if not faster. Based on the progress of your partners and the maturity of the technology, how much longer do you think it will take?

Jensen Huang:

To a large extent, the robotics industry was invented by us, or you could say it was invented in the U.S. You could also say we entered the market too early. We were about five years ahead of the truly critical "brain" enabling technology, so we got tired and lost patience first.

But now, it has truly arrived. The next question is just: how long will it take to go from "high-functioning proof of concept" to "acceptable commercial product"?

A technology transition never takes more than two or three product cycles, and two or three cycles is about three to five years. That's all. In three to five years, there will be robots everywhere.

I think China is very strong, and it is a strength that cannot be underestimated. The reason is that their microelectronics, motors, rare earths, and magnets are all top-notch in the world, which are the foundations of the robotics industry. Therefore, in many aspects, our robotics industry will deeply rely on their ecosystem and supply chain. The global robotics industry will depend on it.

Thus, I believe you will see some very rapid changes.

Jason Calacanis:

Will it ultimately be a one-to-one ratio? Elon seems to think that in the future, there will be one person for each robot—7 billion people for 7 billion robots, 8 billion people for 8 billion robots.

Jensen Huang:

I hope for even more than that. First, there will be a large number of robots working 24/7 in factories; there will also be many factory robots that are not very mobile, or only slightly so. Almost everything will ultimately be robotized.

Chamath Palihapitiya:

To me, the most important point about robots is that they will unlock economic mobility for everyone.

In the past, when everyone had a car, they could do many different jobs; in the future, when everyone has a robot, their robot can do many jobs for them. They can open an Etsy store, a Shopify store, and use robots to create anything they want, doing many things they could not do alone. I believe robots will ultimately become the technology that brings prosperity to more people on Earth.

Jensen Huang:

Without a doubt. The simplest reality now is: today we are already short millions of workers. So we are actually in urgent need of robots. If there were more labor, all these companies could grow even faster.

And some of the things you mentioned are really interesting. With robots, we will have "virtual presence." For example, when I am on a business trip, I can enter the body of the robot at home, remotely control it, walk around the house, walk the dog, and check how things are going.

Jason Calacanis:

We need to get the venue staff to clear out soon.

Jensen Huang:

That's right. But think about it; you can really let it roam around the house, see what is happening, talk to the dog, and chat with the kids.

David Friedberg:

This is somewhat like time travel.

Jensen Huang:

At the same time, we will also travel at the speed of light. Obviously, we will send the robot first. I certainly won't send myself first; I will send a robot first to check the situation. Then I will upload my AI.

Chamath Palihapitiya:

This is almost inevitable. It will unlock the Moon and Mars, making them colonizable targets. And this means almost unlimited resources. Bringing materials back from the Moon to Earth can be done with nearly zero energy consumption because you can use solar energy to accelerate. So in the future, you can completely build factories on the Moon to produce everything needed for Earth, and robots are the key to making all of this possible.

Jensen Huang:

In that era, distance will no longer be an issue.

David Friedberg:

Moreover, the more revenue earned from models and agents, the more we can invest in infrastructure; the more robust the infrastructure, the more it will unlock stronger models and agents.

Dario recently mentioned on Dwarkesh's podcast that by 2027 or 2028, model companies and agent companies will earn hundreds of billions in revenue; by 2030, he expects it to reach $1 trillion. Note that this does not even include AI revenue at the infrastructure level.

Jensen Huang:

I think he is being very conservative. I believe that Dario and Anthropic's performance will far exceed that number, far exceed it.

Jason Calacanis:

So from $30 billion to $1 trillion?

Jensen Huang:

Right. And the reason is that he has not considered what I believe: every enterprise software company will ultimately become a value-added reseller of Anthropic code, Anthropic tokens, and OpenAI tokens. That will significantly expand their GTM scale.

David Sacks:

So in such a world, what is the real remaining "moat"?

Some moats will become almost insurmountable, to be honest. For example, the moat that no one discusses much but is probably the strongest is CUDA; it is an amazing strategic advantage.

But in the future, if models themselves can create great things, the next generation of models may also disrupt it. In your view, what is the most important differentiation for companies building application layers?

Jensen Huang:

Deep specialization.

I believe that in the future, there will be general models integrated into the agent systems of software companies. Many of these models will be commercial models like Claude, proprietary models; but many will also be specialized sub-agents trained by these companies for specific sub-tasks.

David Sacks:

So your call to entrepreneurs is: truly understand your vertical field.

Jensen Huang:

Exactly.

David Sacks:

Understand it deeper and better than anyone else. Then wait for these tools to catch up to you; once the tools catch up, you can inject your knowledge into them.

Jensen Huang:

Right. You have your own knowledge, and you can connect customers to your agents. The earlier you truly connect agents to customers, the sooner this flywheel will start turning, and it will turn very fast.

David Sacks:

This is almost the complete opposite of today's software logic. Today, we first create a piece of software, then think about "what can be generalized," and then sell it to as many people as possible, and finally sell customization as an additional service.

David Friedberg:

And then lock in the customers.

Jensen Huang:

In reality, as you said, we first create a horizontal platform. But look at all those global systems integrators (GSIs) and consulting firms: they are essentially experts who customize your horizontal platform into vertical solutions.

Jason Calacanis:

Exactly. And in a sense, the scale of the customization market may be five to six times larger than the platform itself.

Jensen Huang:

Absolutely correct. So I believe that these platform companies themselves have the opportunity to become that expert, to become that player in the vertical field, to become the true master of a specific domain.

Jason Calacanis:

I want to give you the praise you deserve.

I remember three years ago you said: "The ones who will take your job away will not be AI, but those who use AI." Looking back now, our entire discussion has almost revolved around this point: agents are turning humans into "superhumans," expanding business opportunities, and expanding entrepreneurial opportunities. You actually saw this very clearly early on.

Jensen Huang:

You are too kind.

Jason Calacanis:

Of course, we also have to hold two ideas at the same time: first, there will indeed be good developments; second, some jobs will indeed be replaced. The question then becomes: do those people have enough resilience and determination to embrace these new technologies?

For example, if 100% of driving jobs are automated in the future, that will certainly save many lives, which is a good thing; but we must also acknowledge that 10 to 15 million people in the U.S. rely on driving for their livelihood. This change will definitely happen.

Jensen Huang:

I believe jobs will change. For example, today there are many drivers. I believe that in the future, many drivers will still be in the car, but they will no longer be responsible for driving; instead, they will sit in the back or in the passenger seat, becoming a kind of "mobility assistant."

Because don't forget, what drivers ultimately do is not just drive. They help you with luggage, handle many things, essentially playing an assistant role.

So I would not be surprised if future drivers become your mobility assistants, helping you handle many other things while the car drives itself.

Jason Calacanis:

Just like in a hotel.

Jensen Huang:

Right. The car is driving itself, but they are still helping you coordinate various things.

David Friedberg:

Autopilot in aviation has brought more pilots, not pushed them out of the cockpit, even though automation already handles 90% of the flying.

Chamath Palihapitiya:

And to be honest, when the car is driving itself, the driver can still do a bunch of other work on their phone, arranging various things for you.

Jensen Huang:

For example, coordinating, communicating, booking, and handling a bunch of tasks.

Chamath Palihapitiya:

The whole pie is getting bigger.

Jensen Huang:

Right. So one thing is clear: every job will be changed; some jobs will disappear; but at the same time, many new jobs will be created. And I want to say to those young people who just graduated and feel anxious about AI: go become the person who uses AI the best.

Today, we all hope our employees can become truly proficient in AI, and that is not an easy task. You need to know how to specify requirements without making the instructions too rigid; you need to leave enough room for the AI to innovate and create under your guidance; and you need to steer it toward the results you actually want. All of this requires a kind of "art."

David Sacks:

When you were at Stanford, your famous advice to young people was: "I wish you pain and suffering." Do you remember that?

Jason Calacanis:

That was classic.

David Sacks:

So what about today? If a person is about to graduate from high school, standing at the crossroads of life, whether to go to college, what major to study, or even whether to go to college at all, what would you advise them?

Jensen Huang:

I still believe that deep science, deep mathematics, and language skills are very important. And you all know that language itself is actually the programming language of AI, the ultimate programming language. So perhaps people majoring in English will be the most successful in the future.

In summary, my advice is: no matter what kind of education you receive, make sure you are sufficiently professional in using AI.

Speaking of work, I want to add one thing that I hope everyone hears. In the early days of the deep learning revolution, one of the top computer scientists in the world, someone I greatly respect, firmly predicted that computer vision would completely eliminate radiologists. He even advised everyone not to enter the field of radiology.

Ten years later, this prediction is 100% correct on one level: computer vision has indeed been integrated into all radiology devices and platforms worldwide. But the surprising result is that the number of radiologists has not only not decreased but has actually increased, and demand is soaring. The reason is that every job contains two levels: tasks and purposes.

The task of a radiologist is to look at images, but their true purpose is to help doctors treat patients and diagnose diseases. And because imaging examinations can now be done faster, hospitals can perform more scans, which improves medical efficiency and allows patients to enter the diagnosis and treatment process faster. The result is that hospitals have increased revenue by performing more scans and serving more patients.

Jason Calacanis:

Exactly.

Jensen Huang:

So the result is actually positive.

David Friedberg:

And a faster-growing, more productive, wealthier country can absolutely put more teachers in classrooms, not fewer.

You will enable each teacher to have the ability to tailor courses for every student in the classroom. This way, they will be stronger, like "bionic people," and the results will be better.

Jensen Huang:

Every student will have AI assistance, but every student still needs excellent teachers.

Jason Calacanis:

This has been fantastic. Jensen, congratulations on your success. This has truly been a particularly positive and uplifting discussion. Thank you very much for taking the time to participate.

David Sacks:

You are the captain this industry needs.

Jason Calacanis:

Indeed. I think you should express the positive side of AI more loudly. There is too much doomsday rhetoric out there.

David Sacks:

And I also think that being able to maintain this humility after achieving such great success, telling everyone "what we are doing is essentially still software," is really healthy. People need to hear this. We have invented new categories and new industries before. We do not need to slide into that kind of panic; that is not helpful.

Jason Calacanis:

And we can choose for ourselves, right? We have autonomy and the ability to act. We can choose how to use it. Well, everyone, see you next time. Thank you all for watching this episode of All-In.

Jensen Huang:

Thank you.
