Nvidia founder on leadership, virtual reality, AI and simulating the laws of physics
- 29 March, 2018 11:30
Nvidia's Jensen Huang (right) with Adobe CEO, Shantanu Narayen
There is no alternative for leaders looking to transform their businesses but to roll up their sleeves and try to understand the implications of new dynamics and technologies for their industries.
That’s the view of Nvidia founder, president and CEO, Jensen Huang, who took to the stage at the Adobe Summit in Las Vegas following news of a strategic partnership between the two technology companies.
The pair’s new strategic partnership focuses on enhancing their respective artificial intelligence (AI) and deep learning technologies and encompasses plans to optimise Adobe Sensei, the vendor’s AI engine, for Nvidia’s graphics processing units (GPUs). They’re also focused on improving the performance of Sensei services across Adobe’s cloud offerings, including Experience Cloud and Creative Cloud.
Speaking about the business changes Nvidia has made over its 25-year history, Huang said the key to harnessing change has been rolling up his sleeves and, as a leader, trying to understand the implications of the industry’s new dynamics himself.
“We have to understand the implications of technology, and the dynamics of the industry, intuitively,” he told attendees. “Then you have to invite people to work with you and be teammates to learn how to apply this new tech or repivot the company into a new dynamic.
“You have to do it step-by-step. There’s no alternative to leaders being in the kitchen.”
During his presentation, Huang also highlighted a number of significant innovations and areas of focus for the Nvidia business as the graphics business continues to strive to recreate reality in the computing realm.
“Computer graphics has been one of the greatest challenges of computer sciences – to recreate reality,” he said. “25 years ago, when we founded Nvidia, Windows 3.1 had just come out, and 3D graphics were not available on PCs. We said then, one day everyone is going to be a gamer, just like one day, everyone is going to be a creator.
“If we could figure out a way to make the personal computer into a 3D graphics workstation for consumers, and create these virtual reality environments for people, two things will happen: One, we’ll enable a massive market for computer games… secondly, computer graphics, because it’s so computationally difficult, could be the driving force for the future of computing.”
Since that time, Nvidia has made two large changes to its business focus as it set out to compete against the larger tech “giants” and keep up with the rapid change occurring across the industry, Huang said.
“Firstly, more than graphics, we expanded the aperture of our computing to simulate the laws of physics,” he explained. “In order to create reality, you have to do this. That expansion of our aperture allowed us to go into all kinds of new fields like scientific computing, which led to artificial intelligence.”
The second pivot came from observing that a new model of software, deep learning/AI, was going to change the way software was developed, Huang said, producing software that no human could write manually.
Computing is on the cusp of another significant paradigm shift thanks to ‘ray tracing’, Huang said. Ray tracing is a rendering technique that simulates the paths light rays take as they bounce off objects in the world.
“The graphics model we dreamt of was about simulating light rays as they bounce through the world. This form of graphics was only possible in the film industry because they use supercomputers to create every frame,” Huang explained. “One frame takes 10 hours to render. But we want to do this in real time.”
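At its core, the technique Huang describes reduces to repeatedly testing whether a ray of light intersects the objects in a scene, then following the ray as it bounces. A minimal illustrative sketch of the basic ray–sphere intersection test (not Nvidia’s RTX implementation, just the textbook geometry):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along a ray to its nearest hit on a sphere,
    or None if the ray misses. Vectors are (x, y, z) tuples; the ray
    direction is assumed to be normalised."""
    # Vector from sphere centre to ray origin
    oc = tuple(o - c for o, c in zip(origin, center))
    # Coefficients of the quadratic |origin + t*direction - center|^2 = r^2
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # a == 1 because direction is normalised
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None

# A ray from the origin straight down the z-axis toward a unit sphere at z=5
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A real renderer runs billions of such tests per frame, which is why film studios needed supercomputers and why doing it in real time required new hardware.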
Nvidia is now at the point of changing both the architecture and the algorithms, with last week’s launch of its RTX-based graphics product for real-time ray tracing.
“It’s just a huge breakthrough,” Huang said.
The visionary leader also touched on the subject of virtual reality and augmented reality, noting VR provides a wormhole for humans to travel into virtual worlds, while AR is a way “for intelligent agents in virtual worlds to wormhole into us”.
Nvidia is exploring this through Project Wakanda, which saw the company create a holodeck. Inside it, users can create a virtual car and, from there, teleport into an autonomous vehicle anywhere on the planet, Huang said.
“Then my mind, and this autonomous vehicle, becomes one,” he said. “I can drive this car, wherever it is, from my home. All of a sudden, VR is our way of communicating with the future of AI.
“This communications system – today we call it virtual reality and we think about headset displays, or AR as graphics sitting on our table. But in the future, you’ll create experiences with the tools you have at Adobe that allow people to create this enormous number of virtual realities.”
- Nadia Cameron travelled to Adobe Summit as a guest of Adobe.