I think it's important to talk about what DLSS 5 is and is not doing. It's not using image generators like the ones you know from ChatGPT, Google, or Adobe. It also isn't trained on images scraped from across the web, so if you consider that kind of scraping to be "stealing," it isn't doing that either. It isn't replacing any human artwork in the game or imagining things that aren't there. There's no hallucination.
What it's "generating" is a new simulation of light on top of the existing human-made artwork. All of the same models, texture information, core lighting setup, and color grading are maintained. The difference you see is similar to comparing the same game with no RTX features enabled and with full path tracing: you'd see a massive difference, not because the artwork itself is different, but because of how light interacts with that artwork. That's what's happening here.
The main thing I think people are mistaking for image generation / hallucination is what I would call micro-detail self-shadowing: things like wrinkles or small bumps in the texture of a surface. That stuff can't normally cast self-shadows in modern game engines; it's typically just rendered as normal maps, which change the direction of the light that gets reflected. The AI can see those normal maps and "understand" that they represent micro geometry. Because it has that understanding, it can cast tiny shadows on the surface that make this detail stand out more, like the creases in her lips, or the way eye sockets gain more depth. This isn't new detail. It's already there in the model. The AI just brings it out in a way you normally only get with offline Hollywood-level CGI renders.
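To make that concrete, here's a toy sketch (in Python, with invented values) of what a normal map does in a standard diffuse shading model: the map tilts the per-pixel normal, which changes brightness, but each pixel is shaded independently, so bumps can never shadow each other.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir):
    """Classic diffuse shading: brightness depends only on the angle
    between the surface normal and the light direction."""
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light = (0.3, 0.0, 1.0)           # light coming mostly from the front

flat_normal = (0.0, 0.0, 1.0)     # geometrically flat surface
wrinkle_normal = (0.5, 0.0, 1.0)  # normal map tilts the normal to fake a crease

print(lambert(flat_normal, light))     # brightness of the flat pixel
print(lambert(wrinkle_normal, light))  # slightly different brightness: the "wrinkle"
# Both pixels are lit independently; neither can block light from reaching
# the other, which is why normal-mapped wrinkles never cast self-shadows.
```

The shadows the poster describes require a visibility test between surface points, which plain normal mapping has no way to express.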
The other, of course, is better subsurface scattering. SSS is very complex, and games have largely faked it in simple ways. The AI understands how light penetration works better than modern game engines can really simulate. This can cause skin to "glow" more than you'd normally see, or, in combination with the micro self-shadowing, give much more natural-looking depth to a character's hair. It's also simulating the effects of path-traced light bounces, bringing all of this into a much more "physically based" lighting appearance.
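For reference, one classic cheap SSS fake that games have long used is "wrap lighting", which lets diffuse light bleed past the shadow terminator. This Python sketch (values invented) shows the idea; it illustrates the kind of approximation being replaced, not what DLSS 5 does internally.

```python
def wrap_diffuse(n_dot_l, wrap=0.5):
    """'Wrap lighting': a cheap subsurface-scattering approximation.
    wrap=0.0 is plain Lambert; larger values let light 'wrap' past the
    shadow terminator, mimicking light diffusing through skin."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# At the terminator (n_dot_l = 0) plain Lambert is fully dark,
# but wrapped lighting still glows a little, like lit skin does:
print(wrap_diffuse(0.0, wrap=0.0))  # hard Lambert falloff
print(wrap_diffuse(0.0, wrap=0.5))  # soft, skin-like falloff
```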
The training data will be generated by Nvidia themselves or provided by other game developers. Basically, the model has two sets of data:
The first set of data will look like game engine renders, perhaps with lower settings. It will have color information, a depth map, normal maps, motion vectors, all of the basic lighting information and color grading, etc. It will look like a game running on lower settings with no fancy effects on top.
The second set of data will be the EXACT SAME scenes, same models, same artistic direction. The difference will be that these scenes are not rendered by a real time game engine, but by an offline renderer with full polygonal detail and high quality path tracing effects.
The AI learns to take the first set of data, and "translate" it to the second set of data. Because both sets of data contain the same models and same artwork, just rendered with different levels of technology, the underlying artistic direction is preserved. The only thing that has changed is the way light interacts with materials, bringing the "game engine" version of the image much closer to the "hollywood cgi" version of the image, without needing as much horsepower to do so.
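In very simplified form, that paired setup is just supervised regression: predict the offline-rendered pixel from the real-time render's features. Here's a toy Python sketch; the features, target function, and model are all invented for illustration, and a real DLSS-style model would be a large neural network, not a three-weight linear fit.

```python
import random

random.seed(0)

# Toy stand-in for one training pair: features from the cheap real-time
# render of a pixel, and the target value from an offline path-traced
# render of the exact same scene. The target function here is invented.
def fake_pair():
    albedo, depth, n_dot_l = random.random(), random.random(), random.random()
    target = 0.6 * albedo + 0.3 * n_dot_l - 0.2 * depth + 0.1
    return (albedo, depth, n_dot_l), target

data = [fake_pair() for _ in range(500)]

# Tiny linear "model" trained by SGD to translate real-time features
# into the offline-render target.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

for _ in range(5000):
    x, y = random.choice(data)
    err = predict(x) - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b -= lr * err

# The trained model now approximates the "offline" result directly from
# the "real-time" inputs, without running the expensive renderer.
mean_err = sum(abs(predict(x) - y) for x, y in data) / len(data)
print(mean_err)
```

Because both renders come from the same scene, the model only has to learn the lighting transform, which is the point the poster is making about preserved artistic direction.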
I really hope Nvidia and Digital Foundry do a good job explaining this process in a way that makes it very clear that this is not the kind of "generative AI" people are used to. It's not an image generator or an AI filter, and it's not replacing human artwork at all. I think the technology is fascinating, but it has been buried under widespread misinterpretation of what it's actually doing, and that needs clearing up if they don't want it to fail.
@Dort Certainly, people being toxic towards Rich and Oliver over their opinions is not acceptable. But as you said, there is no shortage of folks pointing out the similarities between what DLSS 5 is doing and the output of AI-slop tools and filters, and it causes real confusion when something this obvious and jarring doesn't get called out in the slightest. Plus, Nvidia themselves seem to have said that the changes introduced by DLSS 5 go beyond lighting, and that the intended effect is indeed to fundamentally change models and textures. I'd hope DF puts out a more in-depth conversation and analysis of what DLSS 5 is actually doing, as these pieces have so far been very thin on details and questionable with regard to the results the tool presents.
I mean, not that this needed any clarification; the results speak for themselves. It's not just changing lighting to bring existing detail out, it's generating said detail based on whatever image-generating model they have it plugged into.
@Hustler_One that describes exactly what I was talking about, looking at the detail already present in normal maps, and interpreting that detail's effect on lighting as if that detail were micro geometry. It doesn't mean inventing something that isn't there. It doesn't mean imagining or hallucinating. And the term "generative AI" doesn't mean image generation either. That seems to be a common misunderstanding. Nothing of the original model or texture is removed or replaced. What they're doing is taking a low poly model with normal maps and "undoing" the process that was used to create that low poly model in the first place, interpreting it as an extremely high poly model instead. Specifically for the purposes of the detail that already exists in the normal maps, not in the sense of creating geometry out of nothing.
It's not creating new detail, and there isn't an image generation model being used here.
It's really the homogenization of character rendering that annoys me the most. There are a whole bunch of ethical problems here, as the training data sets will basically set standards for what a human should look like. And it has a very narrow definition: across all the faces showcased, you can definitely recognize the DLSS 5 signature look if you approach this in good faith. We basically get small variations of the same eyes, noses, and mouths.
It might be an indicator that the whole look, lighting, and environments of games will be homogenized by this technology as well. Also, it is stable from frame to frame but not consistent from scene to scene, probably depending on how much the face is zoomed in and on the environment around it. We have two different depictions of Grace with DLSS 5 on, for example, that look like two different people. I also take issue with the hero lighting.
On the plus side, it is indeed more photorealistic. It reminds me of those GTA V and Cyberpunk 2077 mods with cooler colors. I was never a fan, because I always thought the devs' original version had more character and personality. But it might be interesting in some genres, like simulations, where you'd want to push for photorealism.
It most definitely is generating detail which didn't exist before in real time; the results show this clearly. And it's doing so by way of how it interprets what lips, skin, and other materials should look like, akin to image generation. Anyway, I'm more than happy to agree to disagree with you, but when the tech is described as "DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI", it's very disingenuous to argue that it's not generating detail that didn't exist before, or that the tweaks it results in are just in service of more realistic lighting.
I just hope DF shows integrity and stands their ground. Seeing tech/games YouTubers like Daniel Owen post an optimistic video right after DF and then, within a matter of hours, release a new video of him doing mental gymnastics to align himself with the frenzied masses was embarrassing to watch. His latest video even has a thumbnail mocking Jensen. What a spineless coward that dude is.
No. I explained what you're misinterpreting as "generated detail". I understand why you think it is, but if you actually zoom in and compare side by side, there's no detail in the DLSS 5 image that wasn't there in the original. It's just shaded differently.
It's very specifically the simulated micro-geometry self-shadowing that exaggerates detail that previously existed only in normal-map form: things like wrinkles, surface bumps, and creases. These were previously on a flat surface, with the surface normals only slightly varying the brightness based on the direction of the lighting. The AI model can understand these normal maps as actually representing micro geometry, and generate micro self-shadowing that improves the definition of that detail. Normally you would need a much higher-poly model with extremely fine-grained ray tracing to achieve that level of precision in self-shadowing, but with this AI you don't.
So it can look like new details, but those details were already there, just faint and not shaded correctly.
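As a concrete illustration of the difference between creating detail and shading it, here's a toy 1-D heightfield self-shadow test in Python (heights and light angle invented). The bumps are inputs; the shadow pass only decides which existing bumps end up darkened.

```python
def in_self_shadow(heights, i, light_slope):
    """Toy 1-D heightfield shadow test: point i is shadowed if any bump
    between it and the light rises above the light ray passing over it.
    The heights (the 'detail') are inputs; shadowing only re-shades them."""
    h0 = heights[i]
    for step, h in enumerate(heights[i + 1:], start=1):
        if h > h0 + light_slope * step:   # a bump blocks the light ray
            return True
    return False

# A flat surface with one ridge, already present in the data
# (light arrives from the high-index side at a shallow angle):
surface = [0, 0, 0, 2, 0, 0, 0]

shading = ["shadow" if in_self_shadow(surface, i, 0.3) else "lit"
           for i in range(len(surface))]
print(shading)
# The ridge at index 3 casts shadow over indices 0-2; no new geometry
# was created, the existing bump just gained a shadow.
```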
Correct. If you watch this video from 0:37 up to around the 2:00 mark, your point is illustrated perfectly:
@AndyGilleand I mean, at this point you're arguing with what Nvidia is saying, not me. Also, I have no idea how a tool that only "exaggerates detail" can make characters look like they put on make-up that wasn't there before, or remove most of the contrast from a landscape view, unless said tool is actually doing what Nvidia said it does, which is generating new detail.
All AI is built on the same core technology (which, in my humble opinion, is not intelligent, just good at recognizing patterns). Per Nvidia itself and its CEO, this is generative AI, only anchored by the structure of the 3D content. What it generates is lighting and the way materials interact with that lighting. But it does not need to generate geometry to dramatically alter the perception of the image, in ways that are not always fortunate, to say the least.
Nvidia talks a lot about how developers have direct control over this, but I'd be curious to know exactly what kind of control. It seems to be just a bunch of sliders and maybe the option to select where it applies on the image. But once it's on, it's on and it does its thing; they're far from having total creative control over the final image. Plus, if it ends up running only on 5070+ GPUs, developers aren't going to bother too much.
Nvidia has not said any of the models or textures are changed; they've said the opposite, that all of the original artwork is preserved. I've explained why, at first glance, it can seem like geometry has changed, when in reality it's simply a more physically accurate way of shading the same geometry that gives that appearance. Everything I've said about added self-shadowing explains what you are interpreting as make-up, and the added simulation of path-traced lighting explains what you have described as a reduction in contrast in the environment. This isn't true everywhere; it depends on the environment and how the light bounces. In some scenes it will become more contrasty.
The "new detail" you see isn't actually new detail, nor has Nvidia said that it's doing that. It's simply a better understanding and shading of the existing texture detail. See the video @NetshadeX posted above for a great example showing that.
Well, I know what my eyes see, and as I said before, agree to disagree. Weird how the right lighting conditions can add lipstick to people, but I guess that's just the brave new world that generative AI has in store.
Improved subsurface scattering, better specularity, and the increase in micro self-shadowing made the lips more prominent and the detail in them more visible, but the color and shape of the lips are exactly the same as before; the detail is an enhancement of the texture that was already there.
It seems a lot of developers are also convinced that DLSS 5 is generating details beyond the actual games' assets, or maybe their eyes are deceiving them as well? In any case, it again goes to show that the reaction to what Nvidia is showing is divisive and should be covered with a lot of scrutiny:
Strangely, Alex himself references how DLSS 5 is affecting details which come not from the models/textures but most likely from some external dataset (and he directly references the make-up issue). But some posters here would have him check his own eyes first 🤡
Edit: at the very least, it's nice to see Rich and Oliver being more measured and thoughtful about DLSS 5, as the initial coverage was really shallow/gushing.
Anyone else suspect that this tech was originally meant to be unveiled with the 60xx cards, as the killer feature to get 50xx users upgrading?
...but the delay in that launch, now almost certainly to late 2027, made them decide to launch it early, as a way of boosting sales in an extended, three-year GPU cycle.
I can't wait for DLSS 5; I think its potential is insane. There was an article on another site where an artist showed the huge jump it makes lighting a character with path tracing compared to raster. Just like how Alex showed Grace's face in his video about the latest RE game, where her face, to me, looked so weird, but so much more natural when lit properly.
Artists get control over this, and a lot of ways to tweak both the look of the shading and the lighting. That's huge.
Don't we all realize by now that we can't get photorealistic graphics in games with brute force? Anything that gives an artist a better tool to realize their creative intent is, in my book, a win. DLSS is a win. Frame gen is a win.
I am not a fanboy of any company, but I support any company that pushes the boundary and provides tools it doesn't force on anyone. I approve of pushing new ways to expand the limits of what you can experience graphically in games. For me personally, this felt exactly like when I saw how big an impact 3dfx had, a long, long time ago, the moment I turned the OpenGL wrapper on.
Topic: DF's coverage of DLSS 5