CES 2019: Nvidia explains how AI changes everything you know about graphics rendering

Published: 01/11/2019 10:55:01
Categories: News APY Europe


Full article at zdnet.fr by Tiernan Ray

Nvidia CEO Jensen Huang took to the stage in Las Vegas on Sunday night to say that AI, especially deep learning, is fundamentally changing the way his company lets users create images more lifelike than reality.

His point? The traditional graphics pipeline is giving way to neural-network approaches, accelerated by new integrated circuits, so that physical simulation and sampling of real-world detail take precedence over the traditional practice of painting polygons onto the screen to simulate objects and their environment.

Jensen Huang pointed out how much of computer graphics is still quite basic, saying that "over the past 15 years, technology has evolved enormously, but it still looks like a cartoon".

At the heart of computer graphics today is rasterization, whereby objects are rendered as collections of triangles. But it is hard to make rasterization produce convincing, complex plays of light and shadow, Jensen Huang pointed out.
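To make the triangle-painting idea concrete, here is a minimal sketch of rasterization using edge functions. This is purely an illustration of the general technique, not Nvidia's pipeline: every function name and the tiny 8x8 grid are assumptions for the example.

```python
# Minimal sketch (illustrative only, not Nvidia's pipeline): rasterizing
# one triangle by testing each pixel center against the triangle's edges.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive if (px, py) lies left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixel coords covered by the triangle."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if the center is on the same side of all three edges.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered

pixels = rasterize_triangle((0, 0), (7, 0), (0, 7), 8, 8)
print(len(pixels))  # 28
```

Real GPUs do this test massively in parallel with bounding-box and tiling optimizations; the per-pixel inside/outside test above is the essence of what Huang calls painting polygons on the screen.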

Three things, he said, are missing: "The reflections are not correct, the shadows are not right and the refractions are really hard to do," said Jensen Huang. To remedy this, one of the technologies the company is putting forward is ray tracing, in which the computer models the physics of photons interacting with the world.

"It's hard to simulate the effects of light from geometry," that is, trying to "paint" the light on all raster triangles, says Jensen Huang. Trying to "cook" the light in these triangles did not work very well despite ingenious attempts. Instead, "you must start from the light, tracing the light of your eyes to the world." This ray tracing technology has been around for decades, but has not progressed fast enough to create real-time lighting effects. "It took ten years to find out how to do ray tracing fast enough," said Jensen Huang, "and that would not have been possible without deep learning.

For ray tracing to produce stunning effects such as soft shadows, and reflections on glass and water, the workload is split between the physical model and a neural-network approach the company calls "deep learning super sampling," or DLSS. Nvidia said the approach uses a kind of autoencoder neural network trained on a set of images rendered at sixty-four samples per pixel; from that training, the network learns to apply anti-aliasing to images.
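The sixty-four-samples-per-pixel ground truth mentioned here is classic supersampling: averaging many sub-pixel samples of the scene. The sketch below shows what that averaging buys on a hard edge; it illustrates the supersampling target only, not the DLSS network itself, and the scene function is a made-up example.

```python
# Minimal sketch (illustrative only): supersampling with an 8x8 grid of
# sub-pixel samples, i.e. 64 samples per pixel, the kind of ground truth
# the article says DLSS is trained against.

def coverage(px, py, scene, samples_per_side=8):
    """Average a samples_per_side^2 grid of sub-pixel samples (64 here)."""
    total = 0.0
    for sy in range(samples_per_side):
        for sx in range(samples_per_side):
            # Stratified sample positions inside the pixel [px, px+1).
            x = px + (sx + 0.5) / samples_per_side
            y = py + (sy + 0.5) / samples_per_side
            total += scene(x, y)
    return total / samples_per_side**2

# A "scene" with a hard diagonal edge: 1 below the line y = x, else 0.
edge_scene = lambda x, y: 1.0 if y < x else 0.0

one_sample = edge_scene(2.5, 2.5)          # aliased: all or nothing
many_samples = coverage(2, 2, edge_scene)  # smooth partial coverage
print(one_sample, many_samples)            # 0.0 0.4375
```

A single sample per pixel snaps the edge to 0 or 1 (jaggies); 64 samples recover the fractional coverage. The point of DLSS is that a trained network approximates this expensive average from far fewer samples.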

"The DLSS predicts the perfect pixel, it takes an image in low resolution and outputs a high resolution."

This reliance on AI to support rendering is something ZDNet highlighted in an interview last month with Nvidia's Bryan Catanzaro, who heads applied deep-learning research at the company. Increasingly, "model-based" programming is taking precedence over traditional graphics programming, that is, using a neural-network model to infer the appearance of scenes rather than a person programming the rules by which polygons are assembled into the scene.

The combination of ray tracing and DLSS is a form of hybrid computing, which the company has named "RTX". While DLSS is trained on a supercomputer built from the company's "DGX-2" systems, the real-time rendering is performed on the client device by plug-in accelerator cards. An example of this architecture is the "GeForce RTX 2060", a new plug-in card that Jensen Huang unveiled at the show and that will go on sale this month for $349.

The 2060 divides the ray-tracing and DLSS work between two distinct types of processing elements in the GPU: "RT" cores for the ray tracing, and "tensor" cores that perform the DLSS inference that fills in the images. The company argues that RTX technology can strike an optimal balance between the two forms of computation, alongside standard rasterization work, to obtain better images without slowing the frame rate sent to the screen.
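The division of labor described here, an expensive low-resolution render followed by a cheap learned upscale, can be sketched as below. To keep the example self-contained, a plain bilinear filter stands in for the tensor cores' learned model; that substitution is the author of this sketch's assumption and is emphatically not what DLSS actually computes.

```python
# Minimal sketch of the hybrid split (illustrative only): render small,
# then upscale. A bilinear filter is used as a stand-in for the learned
# DLSS upscaler -- it is NOT the real algorithm.

def bilinear_upscale(image, factor):
    """Upscale a 2D list of floats by `factor` via bilinear interpolation."""
    h, w = len(image), len(image[0])
    out = []
    for oy in range(h * factor):
        row = []
        for ox in range(w * factor):
            # Map the output pixel center back into source coordinates.
            sx = min(max((ox + 0.5) / factor - 0.5, 0), w - 1)
            sy = min(max((oy + 0.5) / factor - 0.5, 0), h - 1)
            x0, y0 = int(sx), int(sy)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = sx - x0, sy - y0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

low_res = [[0.0, 1.0], [1.0, 0.0]]       # a tiny "ray-traced" frame
high_res = bilinear_upscale(low_res, 4)  # the upscaling half of the work
print(len(high_res), len(high_res[0]))   # 8 8
```

The economics are the point: ray tracing every pixel of the final frame is the expensive half, so rendering fewer pixels and inferring the rest lets the two core types work in parallel without dropping the frame rate.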

Jensen Huang presented several games shipping this year that will benefit from this split-computing approach, including EA's Battlefield V, BioWare's Anthem, and Atomic Heart, a first-person shooter from the Moscow-based studio Mundfish.

He drew the audience's attention to the details of each game, punctuating his talk with frequent reminders that "it's not a movie", meaning the images are rendered in real time by the RT and tensor circuits.

