It's been a wild ride over the last few days since we first posted about Nvidia's DLSS 5 and it's become clear that the issues raised by machine learning in this guise should have given us pause before we went live with our coverage. We posted too quickly when we needed time to process everything we'd seen.
The community has concerns, and developers have raised various issues with us privately. While it has been clear for a while that the future of next-generation graphics technology will have a strong ML component, the question is whether DLSS 5 represents the next big evolution in games technology or whether it crosses a line in terms of artistic integrity - especially since the demos we saw were running on what is effectively existing "art", sometimes with radical differences.
We were excited by what we saw in our private demo, and the scale and ambition of Nvidia's technology are astonishing. In effect, it has created a video-to-video generative AI solution unlike any other. It doesn't have access to original game assets, geometry, depth or per-material metadata, yet it's still able to produce images with remarkable precision and temporal coherence.
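To make that distinction concrete, here's a minimal, purely illustrative Python sketch of the difference in inputs between a conventional reconstruction pass and a final-frame-only pass - the function names, signatures and placeholder bodies are our own assumptions, not Nvidia's actual API:

```python
import numpy as np

def conventional_reconstruction(color_lr: np.ndarray, depth: np.ndarray,
                                motion_vectors: np.ndarray) -> np.ndarray:
    """Sketch of a DLSS Super Resolution-style pass: the engine supplies depth
    and motion vectors alongside the low-res colour buffer, so the output is
    anchored to real scene data (placeholder body)."""
    return color_lr  # a real implementation would reproject and accumulate samples

def neural_finishing_pass(prev_output: np.ndarray, final_frame: np.ndarray) -> np.ndarray:
    """Sketch of a final-frame-only, video-to-video pass: the only inputs are the
    previous output (for temporal coherence) and the current tonemapped frame -
    no geometry, depth or per-material metadata, so anything the model adds has
    to be inferred from the image itself (placeholder body)."""
    return final_frame  # a real implementation would run the learned model here
```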
Putting the facial rendering aside for a moment, scenes with under-exposed characters, flat materials or weak contact shadows gain believable depth and dimensionality. Plausible reflections, shadows and material response do take us one step closer to the offline rendered look. Hair rendering is also impressive: processing path-traced strand lighting is massively expensive for the GPU, but in DLSS 5 you can see hair that looks more natural and comparable to real-world photography.
At its absolute best, DLSS 5 hints at a future where a neural network with an understanding of how light behaves and how cameras work could complete some kind of neurally rendered "finishing pass" on game visuals. So, in that sense, there's definite potential here to see technology like this as a powerful tool for both new and existing games.
But this is more than a finishing pass - for good or bad, it's transformative. The question is whether DLSS 5 is more intrusive than it should be. That's where the criticism of the technology's facial processing comes into focus. On close-ups, there's plenty of geometric and shading information to guide it, leading to plausible changes - enhanced skin textures and improved depth. However, if the camera pulls back, there are fewer hard cues for the model to work with, leading to what could be called a more "speculative" interpretation - perhaps the cause of the massively controversial Grace image that fronted the DLSS 5 reveal. This is not good when dealing with familiar game characters, who can end up looking like they've had a face transplant. Are we looking at an output defined more by the model than the game data? If so, that shouldn't be happening.
It may even raise questions of consent, alongside wider questions of artistic integrity. On site, watching the demos in motion, these concerns seemed less pressing: the games we saw had been signed off by the studios that made them, and so had the contentious assets we've seen. Everything from the DLSS 5 reveal released by Nvidia has been approved by the studios that own those games. But perhaps the issue isn't just about specific approvals by specific developers on agreed DLSS 5 integrations, but rather the whole concept of a GPU reinterpreting game visuals according to a neural model that has its own ideas about what photo-realism should look like.
While we've seen endorsements from Bethesda's Todd Howard and Capcom's Jun Takeuchi, to what extent does that consent extend to the entire development team and the other artists associated with the production? And by extension, there is the question of whether now is the right time to launch DLSS 5, when the games industry is under enormous pressure, jobs are on the line and cost-cutting is a major focus in the triple-A space. The technology itself cannot function without the work of game creators - it needs final game imagery to work at all - but the extent to which it could be viewed as a worrying sign of "things to come" cannot be overstated, bearing in mind the reactions elsewhere to generative AI.
Right now at least, the concerns surrounding DLSS 5 can be tempered by practical realities. By its very nature, the technology can't work on every target device - only Nvidia GPUs, and likely only high-end Nvidia GPUs, based on the current dual RTX 5090 set-up. A standard rendered version of the game will be required for every other hardware scenario - and a standard rendered version is required for DLSS 5 to function too. So in that sense, it presents more like an advanced post-processing mod than a mandatory component of the game engine. Developers can choose not to support it, and gamers may not wish to use it even if they do have the hardware, as there will inevitably be a performance penalty.
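As a rough illustration of that "optional post-process" framing - with hypothetical names and stubbed-out stages, not any real engine or driver API - the per-frame flow might look something like this:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GraphicsSettings:
    neural_pass_enabled: bool = False  # player-facing toggle, off by default

def render_standard(width: int = 8, height: int = 8) -> np.ndarray:
    """Stand-in for the conventional renderer - this runs on every platform regardless."""
    return np.zeros((height, width, 3), dtype=np.float32)

def optional_neural_pass(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the optional ML pass; a real one would add measurable frame time."""
    return frame

def present_frame(settings: GraphicsSettings, has_capable_gpu: bool) -> np.ndarray:
    frame = render_standard()  # the standard image is always produced first
    # The neural pass only runs if the developer ships it, the player enables it
    # and the hardware supports it - otherwise the standard frame is shown as-is.
    if settings.neural_pass_enabled and has_capable_gpu:
        frame = optional_neural_pass(frame)
    return frame
```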
But we're fairly sure that the die is cast and the direction of travel is clear. It's been years since we spoke to Intel's Tom Petersen in Berlin when XeSS was revealed, and he was talking about something that sounds a lot like DLSS 5 in that interview. And looking at the hardware building blocks in RDNA 5, it's all about increased fidelity via ray tracing and a move away from simply boosting standard shaders towards a more balanced GPU set-up involving ML hardware. Neural rendering - in one way or another - is coming.
DLSS 5 as it stands is an astonishing piece of technology - but also the start of the big debate about the importance of machine learning in the next generation of games, where the conversation must include some kind of consensus on training models, the sources of the data for those models, control of the outputs and an answer to the authorship question. I suspect the ultimate answer is a game engine with heavy machine learning assistance, but with significant control from the development team. DLSS 5? Perhaps more of a first-generation ML image processor than the ultimate solution - but certainly the catalyst for discussion on the future of generative AI in gaming.





Comments
One question Rich. When you say "I think it would have been prudent to wait to see the reaction from audience and developers to the keynote", I am not sure this is the right way. Surely you have to say it as you see it and not tell people only what they want to hear, right?
More than anything I hope you are all doing OK. Sadly there's too many unhinged people out there and while I don't like what DLSS 5 is doing much it's no reason to react the way some have reacted. Wishing you all well.
Imagine being AMD, years behind sure, but learning from NVidia's mistakes at zero cost. Nice work if you can get it.
Sorry, second question. I agree that if they had run with environments only and not faces, it likely would have been better perceived. However, I notice in almost all the environmental shots the shadows are often toned down or almost disappear, as if the time of day or light intensity has changed. How do you feel about that aspect? While SOME of the environmental stuff looked great, I couldn't get past it looking more like some Lightroom preset.
@themightyant I had the same complaint about the environment. I can only describe it as the pseudo HDR photos that phones take - multiple exposure shots put together in a way that looks nice on a screen but mostly not accurate.
From what I understand, the DLSS training data to this point has consisted of ultra high res imagery of games so that it can accurately upres low res to high res - it's not creating anything that isn't there, which is why you can do comparisons between native and DLSS. But any generative AI is leaving the model to assume what stuff should look like, and I assume the less photorealistic the source, the worse the results.
I'm happy you guys acknowledged the feedback. I can totally get being excited for new tech and not really thinking it through, but IMO even at first glance it was problematic.
@themightyant You're quite right, of course. However, we were dimly aware that there would be controversies about this. Oliver mentioned the "Grace's face" issue in the video and I think I mentioned something about how we were still processing the demo. Waiting would have allowed us to understand the concerns around the tech and to crystallise our thinking. This wouldn't have changed much about what we said about what we saw, but it would have allowed us time to make a better video for the audience and to factor in the developer concerns that arose afterwards.
This is a big moment for the future of games and being more aware of that would have made for a better video.
@themightyant Could be a tone-mapping issue. There are many scenes where occlusion is much more pronounced than you'd think it should be.
@TemsOrrough I don't think Nvidia feel they've made a mistake and they have plenty of time for minor or even major pivots.
Many thanks for your reaction, for this new video and new article.
All the best to you and the DF team, and keep up your excellent work!
@Rich_Leadbetter My point was that it is (hopefully) also a teaching moment to other companies.
When I saw DLSS 5 for the first time, I immediately had concerns, so I didn't share the enthusiasm in your first reaction video. But at the end of the day, we can disagree and everyone is entitled to his opinion. That doesn't undermine your content for me and I'll keep enjoying DF. People get overly aggressive on this kind of topic and I don't support those who questioned your expertise simply because they did not agree with what was said.
Nice piece Richard, hope the whole team is well and getting on. I'm excited by this technology and I shared your initial enthusiasm and feel educated about the other concerns raised by your second take. As I've said in other comments, internet trolls are always looking for drama and this will all blow over in a few days. The way I see it is that there's a community of (mainly) blokes all talking about video games - I think the core audience can have a level take on this. The people with time to be whiny trolling bitches about it are probably 17 and have no more depth to their personalities than what they're told to think by the algorithm. Take a cortisol break this weekend, and as I've said before if you fancy a pint I'm local!
@Rich_Leadbetter Thanks. And re: second question - I commented before seeing the end of the video (slaps wrist) and you do cover this a little, albeit with faces, where you discuss tweaked tonemaps and Alex talked about quasi-HDR photography etc. Either way it looked all wrong to my eyes! Wishing you all well, appreciate all you do.
@Granadico Yes, exactly this. I remember when phone photo editing apps first popped up (e.g. Snapseed - pre-Google buyout) and I would massively over-saturate or bring out certain details, overusing HDR in particular. The image was certainly punchy and to the untrained eye might look like an upgrade - it's a stylistic choice at least. But now that I'm more aware of how to use and balance lighting, tone mapping etc., my 2015 edits look bold but amateurish. This heavily reminds me of that.
They do actually cover some of this around 33:50 in the video, where Alex talks about HDR photography etc., and once someone has fixed the tone-mapping, the images don't look as bad, or as starkly different.
While I have my issues with the tech as it is now, I have no doubt neural rendering will play a part in the future of games and I look forward to seeing where this tech ends up.
It feels like a large group of people sincerely do not notice a genAI aesthetic when it's present. I can't help but feel it's the immediate thing that would jump out, regardless of what is said about the tech. But, obviously, it does not present that way to everyone. Some folks have to take a longer look to start noticing things like added makeup, buccal fat removal, and other "beautification filter" aesthetics being overlaid on faces. As Alex said in the video, everything looks to be in a "studio-lighting" setting, even beyond the faces.
In the same way some folks are less sensitive to things like poor frame-pacing, input lag, poorly implemented motion-blur, some folks will not be able to notice when genAI has hijacked the visual language. There is also the case where some do enjoy that genAI look and are not opposed to it.
This just doesn't feel like DLSS - it feels far too easy for it to trample on the art department's intentions, and it very likely will disrupt the visual language of many games, at least in this current form. This feels more like an optional mod.
@Rich_Leadbetter This new article is spot on! I hope the team didn't get disheartened by the extremely intense reactions to the first video/article. I think you missed the forest (what Grace's forced aesthetic surgery meant) for the tree (the cohesive light on the lamppost behind her), because I suspect the tech demo hands-on/in motion feels a lot more impressive than the on/off stills that make it look like an Insta filter tech - which is what angered the crowd. I really love DF's work because it balances thoughtful, respectful, technically ultra-competent analysis with a very charming, nerdy enthusiasm about gaming tech and games.
In that first video the latter somewhat exceeded the former. I still liked it as it was (even if Grace's face really bothered me) because it was true DF work for me.
Keep up the AMAZING work Rich and team, in my view your voice is an essential part of the gaming community.
Easier said than done, but I wouldn't let the overall discourse in the gaming "journalism" community color your commentary. Much of what's been written in other publications has been piling on and knee-jerk opposition to anything AI. This is one of the few venues with true integrity and expertise.
@Rich_Leadbetter sorry for the social media hate but as a (lower-level) politician in Canada, I can at least empathize.
Aside from the near cult-like anti-AI contingent, the other factor here is the spillover from folks angry at Nvidia's shift away from PC graphics to AI, and the implications associated (high GPU/DDR costs.) I come here more for the technical expertise and less the ethical implications... but I appreciate the reflection. I think Alex's observation on how the character distance from the camera plays a role in the ML-output is spot-on and will hopefully get resolved pre DLSS 5 launch.
This close up (linked) comparison from Nvidia looks dramatically less drastic than the (further-camera) street view IMO.
https://www.nvidia.com/en-us/geforce/news/dlss5-breakthrough-in-visual-fidelity-for-games/nvidia-dlss-5-resident-evil-requiem-geforce-rtx-comparison-screenshot-002/
You guys have nothing to apologize for. A mob of crybullies should never be appeased. I appreciate that your thoughts can settle with time and reflection, but I’m not hearing many substantive arguments against the technology aside from “AI Slop” and memes. Nvidia will eventually understand the consensus of their customers and make adjustments - but let’s not pretend that this isn’t impressive - potentially seminal technology. Let’s try and think past Grace’s face.
@themightyant I obviously can't read Rich's thoughts or put words in his mouth, but I did not interpret his comment as "we should have waited to see the reaction so we could present an inauthentic opinion that aligns more with audience expectations." Maybe it is just my interpretation - who knows - but I took it more as "it would have been prudent to wait a bit before posting so we could have a chance to reflect on what we saw and review the materials without the buzz of having walked straight out of the demo, get input from other team members like Alex has in this video, take into consideration feedback we received in private from devs and their impressions, etc." Basically just recognising that given the charged emotional environment around anything AI-related atm, it would've been more beneficial to "lead" with the more nuanced piece vs DF's traditional approach of "these are literally our initial impressions having just seen the tech demo, we will bring you a much more detailed analysis in the coming days."
Personally I didn't really have any issues with the initial impressions video as a long time DF watcher because I'm so used to exactly this cadence - initial impressions of things recorded on-site at the trade show/hotel room, then a proper in-depth analysis later on when the team is back home. But for whatever reason - whether it's because of the emotions around AI, viewers not being familiar with the different types of DF content, just the depressing degeneration of our ability to interact and engage with different opinions in good faith online - it seems that a lot of viewers interpreted that initial impressions video as "here's a comprehensive Digital Foundry review of this new technology and we wholeheartedly endorse it."
I can't really guess as to why that is - because that certainly wasn't my impression of the video - but my sense is that Rich's comment re: prudent to wait is more that they were able to include context and nuance in this follow-up video that would have been useful to have in the initial video - not that they want to adjust their editorial direction or honest opinions based on "mob rule."
@Kashmir74 +1 to your comment mate. Unfortunately this isn't restricted to video games, but it's depressing the degree to which differing opinions seem to justify immediate hostility these days, as opposed to being a normal fact of life :/
@Rich_Leadbetter What is your reasoning behind the claims that DLSS5 improves lighting or that it specifically is a leap forward in realistic rendering for environments?
For context, I know a modern game with a principled approach to physically based rendering takes the direction and authored material properties of a surface, alongside the colour, angle of approach, and strength of the light interacting with it, and uses that information in calculations designed to create outputs that match measured data of how real-world materials interact with light within as small a margin of error as plausible.
It seems DLSS5 can only take an image and alter it to visually match what is seen in a data-set of imagery that may not reflect anything seen within the game's world, let alone the specific lighting conditions. It can be seen outright removing shadows cast by the sun from a character's face in Starfield, in favour of lighting that resembles photography which uses very specific lighting to highlight a subject. While you've discussed issues with faces specifically, that seems to be a bit more than just an issue with the details of faces alone. Similar issues seem to be noticeable in environments too, and it certainly isn't just tone-mapping. These are massive problems for the plausibility of lighting alone, and that's before considering the fact that we can't ensure objective components of accurate lighting like energy conservation/preservation are present in DLSS 5's output, and that it can't access information about material properties that could help it account for that.
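For what it's worth, here's a rough sketch of the kind of calculation I mean - just an illustrative, energy-conserving Lambertian diffuse term in Python with names of my own choosing, nothing to do with DLSS itself:

```python
import numpy as np

def lambertian_brdf(albedo: np.ndarray) -> np.ndarray:
    """Diffuse BRDF: albedo / pi. The division by pi is what stops the surface
    reflecting more energy than it receives (energy conservation)."""
    return albedo / np.pi

def shade(albedo, normal, light_dir, light_colour, light_intensity):
    """Outgoing radiance for one light: BRDF * incoming radiance * cos(angle).
    Every term comes from authored material data, the surface orientation or
    the light itself - none of it is guessed from a finished image."""
    cos_theta = max(0.0, float(np.dot(normal, light_dir)))  # angle of approach
    return (lambertian_brdf(np.asarray(albedo)) * np.asarray(light_colour)
            * light_intensity * cos_theta)

# Example: a mid-grey surface lit head-on by a unit-intensity white light.
print(shade([0.5, 0.5, 0.5], np.array([0.0, 0.0, 1.0]),
            np.array([0.0, 0.0, 1.0]), [1.0, 1.0, 1.0], 1.0))
```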
Was any of the above considered during your reporting?
The possibility of machine learning having applications that improve the accuracy of modern lighting and rendering techniques is promising but the actual implementation of machine learning within DLSS5 seems rather useless in the pursuit of rendering that is accurate to the real-world.
@alpha54 I don’t really want to get into the weeds, but broadly I agree with most of what you say. I presented Rich’s own words back to him to give him the opportunity to clarify what he meant, as it could easily be misconstrued… and he has. (See above)
I agree I only ever saw their initial video as a first take - I can't comment on what anyone else thought. But I did think it was a little over enthusiastic and glossed over issues that many of us, including seemingly Alex and John, saw instantly. For example they only mentioned AI once in the whole video, about 10 minutes in. That all came across as a little too giddy for me. Whereas it was literally my first thought.
But that's fine, we don't all have to agree, as Alex said we don’t have to have consensus. It's a real shame far too many can't make their thoughts and complaints rationally and calmly without shooting the messengers. Sadly, as I said in my comments, far too many unhinged people out there, who need to do more than just touch grass.
@CantThinkOfAUsername I believe a lot of that is discussed in the Q+A video. I guess the concept of "better" is a subjective one, but if you're seeing plausible reflections and contact shadows that didn't exist in the prior image, that might be considered "better". If you fed Minecraft to DLSS 5 you would get something approximating Minecraft RTX - ie adding RT and PT-like features you would not otherwise see. Better? Up to you I guess?
I do take on board your comments and think they are represented in the Q+A video?
@Rich_Leadbetter I don't recall hearing anything specific about DLSS 5's ability to maintain objective physical accuracy that may be present in the underlying rendering, nor discussions on whether a post-processing approach to enhancing the accuracy of lighting is a valid one, within the Q&A. If they were present and I misinterpreted or otherwise missed it, I apologise, and if I didn't clearly convey those as the point of my comment then I apologise for that too.
Regardless of that, it's good to see you and your colleagues respond to the criticism well and openly learn from it, despite all the harassment it was mixed in with. It must not have been easy to keep a level head in that situation.
"it doesn't have access to original game assets, geometry, depth or per-material metadata,". Surely they should work on integrating this and it would solve a lot of the potential issues.
@themightyant They reacted right (or soon) after a hands-on event where they spent a few hours with the technology, sitting behind the controls and talking with, amongst others, the people that created it. That gives a whole different vibe and atmosphere than me sitting here behind my desk watching a one-minute media trailer from Nvidia on YouTube.
I can understand their initial reaction, and I also understand that in hindsight it would have been better to have some more time to reflect on the experience before reacting.
@Rich_Leadbetter If a similar situation happens in the future (i.e. NVIDIA previews DLSS 5.5) how would you balance the need to release timely content vs the difficulties of assembling the team on short notice for reactions?
I thought the Q&A addressed many of my concerns from the initial video. I don't know if waiting for audience & developer reactions is the right takeaway here, because I value DF's editorial takes and I wouldn't want, as a hypothetical example, Oliver's enthusiasm dampened to avoid risk of internet blowback. But I think balancing that enthusiasm with reactions from John & Alex would result in a better discussion & a better video. More than anything, I hope the team is doing okay and that everyone is still gelling after this episode.
Please don't let the negative feedback change the way you guys at DF work.
You follow the tech industry and give your feedback on what you feel about new, developing technologies. That is what made me follow your work ever since I stumbled upon Alex's review of the ray tracing technology being used in Metro Exodus. The passion, the attention to detail and the new technologies being used to improve the immersion in games. That is what I love about DF. I remember so many, what I would call dumb, posts from people who complained about the lighting looking wrong in their games all of a sudden and that they would never use ray tracing. And you guys kept showing up to show people why it mattered and why it was such a big technological leap in graphical fidelity and immersion.
That personality and quality would be lost if you always have to take a few steps back and consider what the "public opinion" of the items you discuss and report on is.
Keep up the good work! And you have my deeply felt sympathies for the unfair and inhumane feedback you've had to deal with.
@Synchrotone 100% , I tried to share a similar thought in my comment above, but your post is much better articulated than mine. I love how DF always strive for intellectual honesty vs some more inflammatory outlets like Gamer Nexus.
"How do you know ai is a witch?"
"Well it turned me into a newt! ...I got better."
"A person is smart. People are dumb, panicky, dangerous animals."
Big love to the team.
call a spade a spade
@themightyant No need to get into the weeds at all - I do tend to have a very verbose writing style haha
And I really gotta learn to be more patient and perhaps read the entire comment thread before replying to the first comment I see, bc as you very rightly pointed out, Rich did use the opportunity of your question to respond himself - which I saw and read after replying to you when I continued to read the rest of the comment thread haha 😅
@alpha54 no worries dude. We’re all guilty of that sometimes. You see a comment and you want to reply to it specifically.
Also keep in mind that DLSS 5 is enabled in YouTube comments as well. People see or hear what they want to and build their own "ground truth" from almost no data.
We all see pictures in clouds, right?
I agree, sending death threats is completely unacceptable and should face severe consequences.