At its GTC 2026 event, Nvidia revealed the next generation of DLSS. DLSS 5 isn't a frame-rate-boosting, frame generation or performance-enhancing technology. Instead, it's Nvidia's attempt to use machine learning to leapfrog generations of GPU hardware evolution and deliver photo-realistic lighting on the hardware of today - and we saw it demonstrated in titles including Resident Evil Requiem, Hogwarts Legacy, Assassin's Creed Shadows, Oblivion Remastered and Starfield. The transformational lighting delivered by DLSS 5 is frankly astonishing and will be coming to RTX 50-series GPUs by "Fall 2026".
Plumbed into game engines in a similar way to DLSS super resolution and frame generation, the DLSS 5 lighting model uses just colour information and motion vectors to deliver photo-realistic imagery. Nvidia says the core goal is to let developers achieve the artistic vision that the technological limits of today's hardware would otherwise put out of reach. While lighting is radically revamped, all geometry, texture assets and materials remain as they were in the original game - but the effect can be astonishing.
The AI network powering DLSS 5 is aware of the semantics of the scenes it processes. It "recognises" elements like skin, hair, water and metal and processes them differently to apply photo-realistic lighting effects. While there could be some comparisons to generative AI, DLSS 5 is consistent and coherent in its rendering of the game world, the environments and the characters within it. It's capable of working with standard rasterised games, RT-supported titles and path-traced experiences - the higher the fidelity you feed the model, the better the end results in terms of material response, lighting and shading.
Character rendering is transformed. You'll see realistic subsurface scattering in skin and more convincing hair rendering. Games like Resident Evil Requiem, Hogwarts Legacy and especially Starfield show generational leaps in fidelity. Meanwhile, environments such as those seen in Assassin's Creed Shadows and Oblivion Remastered receive a dramatic boost in subtle, realistic shadowing and ambient occlusion, grounding objects in a scene in a way the original renderer can't.
Handling of materials can be astonishing, with everything from metals and cloth to the skin of fruit looking remarkably realistic. Particularly impressive is how DLSS 5 handles light and shadow around foliage - something that's very difficult for standard renderers to achieve, even with RT or path tracing. Right now, DLSS 5 is still a work in progress - we did spot some screen-space errors, but Nvidia describes what we're seeing today as a "snapshot" of the technology as it stands, with further improvements and optimisations to come. It's set to launch later in 2026 after three years of development at Nvidia.
From our perspective, it's important that Nvidia is showcasing DLSS 5 on actual shipping games (alongside its forward-looking Zorah demo), if only to demonstrate that the technology isn't limited to cherry-picked scenarios. This is no Matrix Awakens demo - a tantalising glimpse of a future that still hasn't come to pass. DLSS 5 is the real deal.
There's much we don't know, though. That starts with the computational cost of the ML algorithm. Nvidia actually used two RTX 5090s for its demos: one plays the game, the other exclusively runs the DLSS 5 technology. Two GPUs are required right now because DLSS 5 still has a long way to go in terms of optimisation - both for performance and VRAM footprint. However, DLSS 5 is designed for use on a single GPU and that's how it will ship later this year. Quite how scalable it is remains to be seen, but in common with other DLSS technologies, Nvidia tells us that the computational cost scales with resolution.
Expect to see DLSS 5 as a further option within the graphics menus of PC games, alongside super resolution and frame generation - though the demos we saw were running with 2x frame-gen. In fact, DLSS 5 is integrated into frame-gen, which makes sense: after all, using this lighting technique, every frame is now generated. And yet, the quality is there, with few if any of the inconsistencies and mistakes typically seen in photo-realistic generative AI imagery.
I'm fully anticipating a robust discussion around DLSS 5 and Nvidia's interpretation of photo-realism. In effect, the firm is using its advanced machine learning expertise to "bypass" the years of hardware evolution and software development otherwise required to deliver photo-realism, substituting its own vision for the future of graphics - but is Nvidia's interpretation of that future what gamers and developers actually want?
While DLSS 5 has its own model based on relatively limited game engine inputs, Nvidia says there will be means by which developers can interact with the technology to get the results they want. On top of that, DLSS 5 isn't a full replacement for current lighting - the algorithm requires game inputs to work, and the higher the quality of those inputs (for example, path tracing over ray tracing), the better the DLSS 5 output.
And of course, if developers or indeed gamers don't like it, there's no compulsion for them to use it. Needless to say though, Nvidia says that feedback from developers has been positive and it already has a significant library of games with pledged support.
There's a lot to process here - and there's the sense that we're still not fully aware of the full implications of this technology - but the bottom line is that this is big. Bigger than the last big jump we saw in gaming graphics: the arrival of path tracing in triple-A games, kicking off with Cyberpunk 2077. And it highlights in the most dramatic way possible that innovation in graphics will come more and more from software rather than from diminishing, generational leaps in pure hardware performance.
We'll have more on DLSS 5 soon, including a longer DF Direct on how we came to see the new technology, our immediate reaction to that Resident Evil Requiem comparison, and more detailed thoughts on each of the demos we saw. We're also looking forward to the reaction from our colleagues (in particular Alex!) and we'll be fielding questions about what we saw in the next Digital Foundry Q+A show. But in the meantime, I'd expect to see a range of DLSS 5 coverage from other outlets as Nvidia's GTC event progresses, and I'm very curious about the nature of the reaction the technology will receive from press, developers and gamers alike.

Comments 136
You guys have really been killing it lately with your new content. Great work!
Wow, that looks really, really bad. Girl looks like an entirely different person, like a botched plastic surgery. The others go into uncanny valley territory as well.
I'm sorry guys, but this looks absolutely awful. I cannot believe you are promoting this and I am thoroughly disgusted by this. I know Oliver has always been "Mr. GenAI" but this is just so disappointing.
I'll also add: Rich and Oliver saying that these look "closer to the original artistic vision" just rubs me the wrong way. This is shameful coverage.
What in the AI generated heck.
In some cases this is literally unbelievable, the forest in the Assassin's Creed footage looks like real life on my tiny phone screen.
I'm not convinced about the faces yet. I think this has the potential to be generationally transformative, but I suspect it needs to be implemented carefully to not end up looking AI-sloppish.
The faces look completely different from the ones the artists made. No thanks bro!
OMG the slop chases me into my games now? How do we escape this hellscape?
Hopefully, we will still be able to switch back to any version of DLSS in any game.
I refuse to believe you really think this looks good.
I can't believe you guys are actually praising this, it looks like an AI Slop filter for games.
I was always worried Oliver's obsession with GenAI would eventually lead to some sort of controversy but not like this.
the only one that arguably looks better are the starfield ones, because artless slop on top of artless slop is kind of a wash, but even those they are pretty radical changes to the character design, as uninspiring as todd howard's latest wild ride may be. the rest are absolutely loathsome, and all characters look like different people with DLSS5 on, with different vibes, and i hate it.
can't wait for kirkification of all video games under herr huang's reign
Odd that the "enhanced" version of Grace standing in the street greatly resembles Vanessa Kirby, but in the other "enhanced" screenshot she looks completely different.
I'm not sure what is more worrying, the lack of consistency, the AI slop aesthetic, or DLSS potentially stealing the likeness of actual people and the inevitable legal sh*tshow that spills over to developers and publishers.
No. Grace literally has different lips between the two comparison shots. Either I am too used to baked-in old school lighting or some generative AI is being used here to Yassify the models to make them look like what AI generates these days when prompted for "generic good looking woman". I do not like how this first look at DLSS5 looks. Not one bit.
Honestly, I've followed and respected you guys forever.
Supporting this will kill your reputation.
Incredible that you guys clearly don't see the problem here, omg the disappointment hearing you talk about how "impressive" turning the faces that had a previous art direction to a slop prompt: "hey grok make this character ultrarealistic" can't believe it. Please read the comments and acknowledge you are really out of touch.
Wow, this "AI Slop"-mode looks, imo, absolutely horrible.
Hey guys, made an account JUST to come say this sucks. We need to be full throated about our opinions on generative AI, or it’ll continue to seep into everything. Public opinion is the only thing that can realistically slow this down - or the bubble pops, whichever comes first.
What did they do to hot unc' Leon? I can see some aspects of the image being better but it kills artistic intent and it does constantly remind you that you are looking at generative AI which is an uncomfortable feeling and really weird. Like the dlss off is a grounded image and dlss on you're hallucinating on drugs.
Yeah I can't get behind this at all. Almost everything here looks significantly worse with DLSS 5 turned on - its so obviously destroying the original artists' intent, lighting errors all over the place, just a complete nonstarter from me. They just look like Snapchat filters come to life.
Thanks for letting us know about this but wow this is so bad. I get you guys are impressed by the tech but surprised you're saying this looks good.
I don't see how any artist or developer would be in support of this either.
With you mentioning this being injected into games that don't support it, could we see fewer games on PC if this is possible? Only shipping on console to avoid this happening to their artistic vision.
Made an account just to comment here. This is really shameful coverage of DLSS 5. The video and this article fail to mention the many many many implications of this technology being used. You say "oh it only changes the lighting". It really doesn't. It's inherently changing the artistic and creative integrity of games, seemingly just as a business decision. It's an AI slop filter and that's all it is. This video is also just a blatant ad too.
Love you guys, but I don't know how you can say with a straight face that this is just lighting. It obviously is not. It's AI adding texture, shading, and color to create something completely different from the original artwork. It's awful, and it is the pure unfiltered definition of "AI slop." For crying out loud, it turned a teenager into 40-year-old man in the shot from Hogwarts Legacy near the end of the video!
This is awesome and clearly the future of gaming. People here are complaining about its looks when people did the exact same thing with DLSS 1. Now it's many times better. This is the first appearance of DLSS5's neural rendering. It's going to get much better.
@Samuel-Gamer when you say supporting DLSS 5 will kill their reputation, what do you mean by "supporting" it exactly?
Writing an article on it? Recognizing anything positive at all?
Despite all the hate out there I would choose the DLSS 5 version of Starfield over the original version every day of the week. And I bet most people will do the same in the end. Also, if you prefer the old poly characters just don't turn on DLSS 5? Why are people so upset. Options are nice to have.
The way it transforms Grace's face into generic "beauty" (complete with changing the shape of her eyes and mouth, turning her blond hair into highlights with darker roots, and adding heavy make-up), while leaving Leon still mostly looking like himself, is consistent with the well documented gender bias of gen AI, and its datasets.
DF, I'm not so sure this is rendering. Rather, this is a deepfake. If it's using generative AI, which almost certainly seems to be the case given its virtually identical resemblance, then not only is it painting over the original artistic intent, it's stealing from other artists to do so.
I would've expected a more critical appraisal from the DF crew for something this transparently controversial. Hopefully your follow-up discussions can shed more insight on this. This tech should be scrutinized, not accepted uncritically.
I needed to make an account just to say that it looks awful and I'm actively against this horrible AI grok yassification filter
I genuinely thought this was a joke when I first saw the pics posted online. Absolute unbelievable dogshit and an insult to artistic intent. What a waste of talent and money by nvidia
I love how Prof Hecat from Hogwarts Legacy, who was once aged by being "wounded by time itself", is yet again aged thanks to nvidia this time. almost all faces look like they receive movie poster level dramatic lighting regardless of the environment. ofc nvidia would want us to shell out for two 5090s but it doesn't mean that it's good for the gamers or the studios, or even exciting if I'm honest.
Just sad to see this, and also afraid of the future of the already depressing AAA industry.
This is obviously beyond lighting. This is an aesthetics overhaul.
If you think this looks like a logical "step-forward" for gaming visuals, then you have to be a fan of generative AI imagery and its general appearance, or maybe too numb to recognize it.
These characters don't need AI face-filters adding makeup, fillers, dye-jobs, and a general uncanny look that doesn't match the artists' original input or intent. This looks closer to a mod. NVIDIA tech changing the artists' and developers' aesthetics and design to "fix" their games is a bizarre thing to be excited about.
I am honestly confused how DF can't spot this uncanny, AI face-filter look a mile away and is trying to pass it off as a normal thing. It's goofy-looking stuff that, unfortunately, a lot of people will enjoy because a lot of people enjoy generative AI imagery slop.
Don't want to jump on the bandwagon but these genuinely look out of place. I wish they would at least have some more cartoony/non realism examples because all these examples look like the AI just guessed what a face should look like based on training data instead of increasing the clarity of what's actually there.
Guys, stick to the technical stuff. The fact you didn't see the audacity here and slap in the face to art directors / character designers /animators /concept artists etc who made these games, is shocking. But I get it. You're all nerds who don't create art or really understand what goes into it. Really embarrassing.
Endless praise and zero pushback.
Not even a tiny bit, not even questioning the premise of "anticipating the artistic vision of game developers" and what that truly means in the context of over half of the GDC 2026 State of the Game Industry survey participants stating that they think generative AI is harmful to this industry.
You parrot Nvidia's PR line about "positive feedback" from developers but did you actually ask any of them at GDC for their opinions on this? Because they already seem to have a firm stance.
You did not even try. This might as well be an article on Nvidia's site. This is not independent journalism.
Also, here's a fun question: Did Capcom, Ubisoft and the other publishers that own the rights to the games in this DLSS5 presentation look at these results beforehand and think it was a good idea for their brands to have Nvidia publish these modified assets to be mocked online? To have the creative endeavors of their employees (which are the results of many years of work) trampled on by this AI filter? Do you think any of the people that worked on the Grace or Leon models (or any other character models featured here, or the very detailed background assets) looked at this and said "Yes, this is obviously what I had in mind"? Do the staff of Digital Foundry truly believe that? Seriously?
I can hardly believe what I'm seeing, wow. The forest for example looks great. The faces, I'm not so sure about. Did really only the lighting change? I'm saddened by all these extreme reactions. Hang on guys!
This looks fun to try out (like most AI stuff at first) but as a default option? Hell no!! I'll be avoiding this like the plague. Also, someone go give John a big hug, he probably needs it after this nonsense.
It's really astonishing what it can do — I'm especially blown away by the realistic lighting and the textures it brings. On faces, it's actually almost too good: you go from a face that looks totally unrealistic to a truly realistic face, which makes some people say the face is completely different. They don't understand what's really happening: we're moving from a dead, expressionless face to a living, realistic one — that's the big difference. On Starfield, faces that looked really bad with DLSS off become more realistic with DLSS on. In any case, when you see what AI can do for graphical optimizations, it promises incredible things for video games in the future. And Nvidia's DLSS is really far ahead of FSR or PSSR.
Wow .. talk about shooting the messenger. The comments here are like hearing toddlers whine about not getting their peas on the right side of the plate.
The technology is super exciting and will give game developers even more choice and options to realise their full vision for the game. Or simply give gamers a CHOICE to enhance the game. I only hope that this technology won't create a divide in games on PC to be played ONLY on an Nvidia or AMD card, because you'll need their software solution.
Created an account just to come say, these are the same naysayers that said you'd ruin your rep over RTX.
Comment section is a clown show.
@Surefire you should actually watch the video, there are plenty of negatives mentioned... like running on 2x5090s.
I jumped through the hoops of joining as a free Patron to say this looks absolutely dreadful and you should be ashamed for saying a single positive thing about it. It's slop. It totally changes the characters' faces. I cannot fathom how this is getting any praise at all, least of all from DF. This is embarrassing.
The RE9 slop filter gives Grace a totally different face in both screenshots. This is absolutely nothing but slop all the way down.
DLSS now stands for Deep Learning Super Slop.
@TheHarold Yes, saying anything positive about this is embarrassing and absolutely makes me respect them less.
This looks horrible and I dread the future of rendering if this is where it's going.
It just adds an AI slop filter over everything, neutralizing any distinctiveness that comes from each game's art and lighting direction. The faces look particularly awful and out of place.
I could see this being used for materials and textures but it should definitely not be used for faces and lighting. It just makes every game look like a bad reshade filter.
Been watching you guys for many years and it's the first time I've been disappointed with a take of yours.
This looks OK at first glance, but it's like a different artistic intent, like different characters in places. Not sure I like that at all.
What's the word the kids use when a face looks TikTok filtered... Yassified! That's what Grace looks like here, TikTok filtered to f**k!
Not Yass Rich, more Nooo Rich!
On a side note this is what it took to get some serious engagement in these comments sections! Negativity does sell! lol
Remember it's optional guys, personally, I want to see what this does to Cyberpunk.
@NetshadeX It is optional, but it basically says screw any artistic intent we are just going to put an AI filter over people's work.
I agree it looks OK, or maybe even good in a few shots, but the side by sides show how much it has altered the artists' original intention. Tell me Grace doesn't look TikTok AI "Pretty" filtered.
Is that really the direction you want gaming going?
From the DF video we learn that it's only about changing the lighting … not the meshes, textures, etc.
You can see that if you watch closely what DLSS 5 is doing, and listen to the explanations of the DF team.
Very interesting technology, looking forward to more coverage of this … and all the other neural rendering, ML led techniques we're going to see emerging over the transition to the next generation.
@themightyant
Like earlier DLSS versions, the devs can control this (lighting) technology in game settings — what you appear to be referring to is whether there will be mods to put DLSS5 into games that don't natively have it set by developers.
@themightyant I'm fully expecting this to remain a toggle like frame gen or upscaling so if the changes to artistic vision bother you then I'd encourage you to not use it. Personally I think stuff like Nexus Mods etc already offer options to alter artistic vision and a lot of devs don't care about mods or even encourage them. I'm going to decide on a per game basis if I'm using it or not. I actually liked the Grace example but the Skyrim one not so much.
The comments here and under the YouTube video feel like they're all pulling from the same bucket of angry gamer buzzwords.
I, for one, think this is a positive development and whilst I feel like it brings an overexposed look on quite a few shots in this current form, I do see the incredible potential here.
More so though, I am intrigued by the reaction of the audience who are posting comments. I wonder if some aspect of this is down to this being such a radical change versus the incremental differences we've become accustomed to across gaming, especially since the start of the HD era. Are gamers so conditioned to the typical "game" look that ANY radical jump forward is going to be shunned?
This tech does seem to walk much closer than ever before to the edge of uncanny valley. My guess is it might take a couple of iterations to push beyond that point and get closer to photorealism to shake that.
I hope the team spend some time during the next DF Direct really putting down some of the more reactionary language of other commenters here. It's important to understand the distinction between ML and Generative AI.
@designgears
I'm old enough to remember when gamers were complaining about developers focusing on newfangled 3D polygon rendering and how game graphics should be about sprites, parallax scrolling, etc.
Plus ça change.
I think it looks awesome. If you do not want it, do not use it. It is literally as simple as that.
@NetshadeX
But is it a change to artistic vision if the game artists put it in the games themselves — which is discussed in the DF video.
It's not like DLSS upscaling or frame generation appears in games without the developers putting it in there. 🤷♂️
Plus like path tracing, frame insertion, etc. it's going to have an off/on switch … for the performance hit to an Nvidia GPU if nothing else 🙄
Perhaps some modders will patch it into games that don't natively support it — but people don't have to install those mods.
I'd really like to see videos of extended gameplay with DLSS 5 turned on, without the constant switching back and forth between On and Off. It's such a big difference that it's hard to feel the immersion of the scene when you only get a couple of seconds to take it in before they switch back.
@maxeez0323
Can you explain how?
Do you think Nvidia somehow got these demos together without the developers from Capcom, Bethesda, etc.? That they let their assets be used in these promotions without their approval?
If the artists use this lighting tool when building a game, does that mean the creativity remains integral, and that it's acceptable? Perhaps not using it would then be going against the creative design?
@NetshadeX FYI It's Oblivion remake not Skyrim. I think in a few shots it looks OK, impressive even in places, but it's also like someone has taken your shots and run them with their own Lightroom preset. People already complain that UE5 games look alike, that is only going to be amplified if they are all run through their AI lighting filter and yassified faces first.
@StooMonster Yet I agree if artists are using it then the artistic intent argument is moot. But it doesn't change the other issues with faces looking AI Filtered, and other issues that AI has whereby it changes shot to shot. E.g. Look at Grace in the first two RE9 side by side shots. In the two NON-DLSS5 versions she looks like the same person in different lighting conditions, but the two DLSS5 version she looks like an entirely different person.
Thankfully with this currently running on 2x (checks current pricing) £3,000+ 5090s (each!) most aren't in danger of having this inserted into games any time soon. And today I'm thankful consoles are running AMD for once!
@StooMonster Well, that's why some people are angry at DF. Because it's clearly not just lighting. Grace's lips look completely different. At one point a character blinks and his eyelids go bonkers. There is obviously AI layering on top of the characters' faces.
Don't get me wrong, I don't blame Rich and Oliver. They have their personal opinion, whatever. But they should have known this will be controversial and should not have rushed with a quick coverage from a hotel room.
Long time reader / viewer, first time commenter. While I appreciate the technological leap here, I think there's a moral bankruptcy at the heart. It defaces the original artistic intent, likely via an AI model that has ripped off other artists, and looks to perpetuate visual biases that have become inherent to the datasets of these AI systems. There's something very troubling here - I too have long looked forward to true photo realism in games where that's appropriate, but this approach is just too problematic on many levels - artistic, economic, ecological even. I have a lot of respect for Digital Foundry, and hope you really interrogate the implications of this technology, and open a dialogue with the designers and artists this is going to affect. At present, it feels a bit too much like you've drunk the Kool-Aid, which is not what I've come to expect from your otherwise exemplary reporting.
(And for the record, I happily use frame gen and earlier DLSS models)
@Mookmac
Rings parallel with:
➡️ Native resolution rendering is the only thing that matters — ML upscaling is cheating, brute force rasterisation for the win
➡️ Native FPS only — frame-rate insertion is fake frames
➡️ Baked in lighting for me — ray-tracing looks worse than pre-rendered bitmaps
➡️ AI slop — new lighting model ruins creative integrity and artistic vision.
It's all the same.
@StooMonster a lot of this is really going to come down to folks who can clearly see the genAI filter look here and others who, seemingly, like that look. None of the prior DLSS iterations had this drastic of a change on artwork. This tech is disrupting the visual language and all the talk of “it’s just a change in lighting “ isn’t going to fly with folks who are clearly seeing a big change in aesthetics.
@stoper
Why do you think they're lying to you?
Why not?
@Dort
This isn't going to magically appear in games without the developers putting it in the games.
Talking of things that deliver a drastic change on artwork, path-tracing doesn't magically appear in a game either.
The game developers put it in there.
What if the devs develop the game with this technology at front and centre … and so turning it off is therefore going against their vision of the game? i.e. Turning it off has the drastic change on the artwork? 😱
@themightyant The consoles may be running AMD, but this is exactly the kind of technology that Microsoft are talking about for Project Helix, and Sony the same for PlayStation 6.
This is just the start of the next transformation to gaming tech—like sprites to 3D, software 3D to GPUs, etc.—it's not going away, and there's going to be a whole lot more of it.
@StooMonster I don't think they are lying to us. Nvidia is. And they just repeated what they were told. When they should have sat down and analyzed the images carefully.
And "why not" should be clear by the *****-storm that's going on at the moment. They are being blamed as if they themselves developed the f-ing DLSS 5.
@StooMonster Do you GENUINELY not see the AI filter / uncanny valley when you watch the faces in motion?
To be fair as @Dort said some people don't seem to see it. It's like thousands of people following AI models and commenting to them as if they are real people. Some people just can't see it. For me the uncanny valley is strong with these, especially in motion, gives me the heebie-jeebies.
But it is impressive tech.
@themightyant
I see the uncanny valley effect, semi-realism causes that on all humans and is a well documented phenomenon.
It's the "AI filter" comment that I don't agree with. Perhaps if people's understanding of tech is SnapChat or TikTok filters (not how they work, just turning them on) then that's what they can articulate as their understanding.
I'm interested in hearing more about how this fits in the rendering pipeline — also this clearly doesn't work with path-tracing, which I think was mentioned in the video.
It appears to me to be either a) a low cost way of improving the lighting in games that don't have ray tracing, or b) optionality on games with ray-tracing.
Or have I got that wrong on b)? I know it doesn't work with path-tracing, but does this work with ray-tracing … or is it only for games with rendering that has no RTX?
@stoper
The conspiracy deepens!
You think that DF are unprofessional and don't know what they're talking about?
Good job DF, keep it up!
don't listen to the haters. Obviously, the technology is impressive and everyone has the option to use it or not. I'm eager to see how the people who are scared of it now will be using it later, just like what happened with DLSS 2 through 4.5
So now upscaling is going to mess with artistic intention? Oooooooof, this ain’t it.
I don't get all the negativity...this looks awesome...these same people criticizing DLSS v5 are probably the same ones praising PSSR v2...both are AI upsampling tech but DLSS v5 is far better than PSSR v2...I couldn't even make out any improvements in the images in your PSSR v2 coverage....
It's another sad day on the internet, but I guess that's what happens when overdramatic people that don't have their priorities straight populate a public space.
DLSS5 seems interesting and I am looking forward to seeing what artists will be able to achieve with it.
Everybody else, please hit the toggle switch and stop harassing people for their opinion.
Thank you, have a nice day and hugs for everyone.
I don't understand why everyone in the comment section is so mad. I've always wanted my games to look uglier for the sake of realism, so I think that this tech is a big step forward!
@StooMonster I think the two are linked. Uncanny valley and TikTok/Snapchat filter, it’s not a lack of articulation or understanding, this is what it looks like to me.
E.g. If you watch a filtered video made on those platforms you quickly see that it isn’t a real person anymore, it’s been “enhanced” (yassified) but also dehumanised. It may on first glance appear better in some way but on watching it in motion it just sets off an alarm in your head, that’s the uncanny valley.
This is exactly what these DLSS5 shots are doing to many of us, it just looks off. Yassified. But like AI models not everyone sees it.
Yeah, what others have said! Also, any word on the performance of crimson desert on base consoles?
@VeganH Asking the really important questions here 👍
Let's all cool down a little.
So does one of the S’s in DLSS 5 stand for “Slop”?
@MrJhan Bro, they just do not mention the intrinsic issues of what this could bring. This isn't just upscaling or generating new frames. This is completely ruining intent. It's basically a deepfake, and this article is written like an Nvidia blog post.
@StooMonster It almost doesn't matter what the developers want. It's the higher-ups making these business decisions on what to promote. It's really just an ultra-realistic AI deepfake of a game, so how does it not ruin how the game was originally intended? It's also a user-side setting that won't output the same way for everyone.
To your next point: even if developers really build their games with this in mind, I do not care for it, as it looks sloppy. It's not appealing. Most gamers won't even be able to use this feature, so I doubt games will be built with it in mind for a long time at least.
@TheOvy I agree about Requiem. They should have just left that game out of the presentation, and Hogwarts too. They looked so unusual and fake. I think Starfield and the Elder Scrolls demos showed promise: they really kept in touch with the original poly base of the characters and just corrected the lighting and detail in a way that emphasized the scene. It's new, and shows promise. Trust me, I get it: first fake frames, now fake images to eat up the fake frames we made earlier... But it's a tool at the end of the day; you can use it or not. Let's face it, if it takes two 5090s to run this thing as we see it today, then with even one 5090 being something 98% of people don't own, you won't be seeing this anytime soon regardless.

Looking at games from houses like Naughty Dog, Crytek, Remedy and Rockstar, to name a few: those developers don't need this technology. Their games are so well made, animated and lit, in a way that takes incredible skill and art direction. This is just another tool to use if needed, and an expensive one too, both in resource and cost. So let's just see how it develops without jumping to this horrendous hate. There is too much of it in the world. This used to be a safe community, but alas, it's starting to fold under the weight of the anonymous hate that can get puked out with no consequence.
I respect that you guys have your own perceptions of quality, but this really is awful. Seeing actual artistic vision mangled like this is disgusting, and your coverage of it makes me question your impartiality. I love you guys, but you need to learn where to draw the line.
This is just awful. Completely changing how characters look, from the artist's intention to DeviantArt AI junk.
The tech behind this is impressive, but it is replacing the intended art with what DLSS thinks is 'better'. Theoretically, the look of any character could be changed to whatever face model DLSS uses as its reference; if that was something the user could choose, it might have benefits in certain types of games, but not in this case. I tend to agree with others that it looks too generic, even if it's more detailed.
Generative AI is the modern version of clip art. It should be acknowledged but its artistic value is nowt.
Big tech's influence has ruined this once-reputable website as well, I guess.
I had to sit with this. I agree that when looking at lighting, shadows, distant environmental rendering and how it interacts with those aspects, we can see a noticeable uplift in perceived fidelity. That said, I find that the "fidelity increases" dull the unique look of the games. They no longer look distinct: aside from big setting differences, castle vs city, if you had to guess which game is which with Deep Learning Super Slop on, while focusing on a scenically neutral spot, you'd be hard pressed to tell, which is not good. Then there is the glaring "Yassification" of the characters, near-ground environment features, and the general change in tone of the scenes, which is damning. I think Nvidia has shot too close to the sun here.
Disappointed with Oliver's rushed VO work in the video. He tries to cover for the bad Generative AI Character filter. Then Rich claims that the renderer isn't doing anything except changing lighting when we can clearly see Grace has transformed into her most Yassified Girl Boss self. Also the comments about how the environment looks more true to life, I couldn't disagree more. It looks worse than most PS4 baked-raster games. I found that part to be disingenuous and harmful to DF's credibility moving forward. It will take a long time to earn trust back from that.
Of course I look forward to less rushed and more long form thoughts from the team and better justifications for why this looks good.
@Mookmac Hey. Let me try and explain the outrage. I'm guessing you've never worked on a game or large creative project of any kind.
So what happens is a team of people (creative directors, art directors, concept artists, environment artists, 3D artists) devote 4-10 years of their lives, 40-80 hours a week, making something with a particular vision. They execute that vision; they are proud.
Then some sweaty fat engineer in cargo shorts and socks and sandals you've never met in your life says 'no' this is how it should look, and totally changes the vision you and a team of people have created.
Hope you can understand.
@Sausages Here's why this isn't an issue. People that don't like it don't use it, people that like it do so in the comfort of their own home with the product that they bought. If anything, ownership implies I can alter something to my liking.
@NetshadeX
Lol, great response, totally ignoring the point I made. People like you ruin entertainment and art. Enjoy the AI Transformers movie you'll watch in silence in some boring grey German town somewhere.
@Sausages With all due respect, do you believe that doesn't already happen?
Character design for example is driven by marketability and/or ideology already.
It's a product, you want to reach a certain audience and you are trying to please that audience.
This is just another tool to realize the artistic vision of the developers and artists in the best case.
Worst case it looks horrible and you toggle it off, like any other quality setting.
We'll have to wait and see how it is implemented from the ground up, instead of this bolted on presentation.
@Sausages I didn't ignore it. I just misjudged your ability to recognize my answer to it. No problem, I'll spell it out. Artistic vision goes for every product, game or otherwise. Anything that has a design was at some point put on paper. When you purchase said product, you are free to alter it. Nvidia just offers you an optional tool to do so, just like aftermarket car parts are sold by distributors who allow you to alter the look of your car.
I don't have a problem with this at all! Thank you NVidia, for making the choice between Microsoft/NVidia and Linux/AMD easy as heck for me. 😂
@NetshadeX @Ukigumo
If you think art should be altered after the fact then you just don't appreciate the medium at all, or respect gaming as an art form. I guess you guys fit into the crowd of people who mod their game characters to have huge knockers. All the power to you. Just know you're in the minority. And billions in investment doesn't cover it.
I do see how, as a first iteration, there's a way for DLSS 5 to improve lighting (although I'm not 100% convinced by TES IV, which seems overly bright and with not enough contrast). But, at the end of the day, DF needs to point out the flaws more, and for now it doesn't look like this technology is ready to do faces (one of the most complex tasks in rendering!). Some faces are OK, especially ones that were lacking details in their original version. But the first shot of Grace clearly shows there is a problem. And it will be amplified by the fact that in every scene you play, you could have a different-looking Grace. You won't even be able to identify her correctly as a character in your mind; it'll be a blur.
@Sausages If you saw my Double D Leon S. Kennedy and Kratos mods, you wouldn't talk that way 🤣
On a serious note, I respect how passionate you are about the subject, but I feel like you are avoiding a real argument.
Instead you seem to claim what one person or the other thinks and feels about games and art in a discrediting way.
I might be wrong, but to me, you come across this way.
Sorry if I'm mistaken, I'm writing this with no ill intent.
One of the strengths of a PC as a gaming platform is its ability to mod, enhance and alter content to your liking.
You strictly don't have to do this, but there are people who enjoy it.
How the Assetto Corsa community has transformed the game, for example, is nothing short of astonishing.
The passionate work of modders and people that spend their time on projects like Fallout London should not be dismissed.
Or the great addition of RTX features to older games like Portal.
This is what I think about when I hear PC modding. If the first thing that comes to your mind is "huge knockers", then maybe that says more about yourself than about anyone you are accusing.
In any case, I wish you all the best and am looking forward to talking to you maybe on another topic. Who knows we might even agree on something.
I think it is an interesting use of AI technology that does make some games look better. Starfield, for example, has pretty awful character models by default. It reminds me of something like RTX Remix, or those fan-made mods on Nexus Mods that aim to change the visual look, where it's a matter of preference whether they actually look better or not.
I have two issues with DLSS5 right now:
1. This should NOT be called DLSS5. This technology is something completely different from image upscaling and should be called something else, e.g. DLFX (Deep Learning Effects) or whatever, to differentiate and avoid confusion.
A game with this effect should have DLSS as a separate option.
2. These demos are obviously being shoehorned into games that were not designed with DLSS5 from the outset, so the results are somewhat mixed. Some games look fine, others look too different, uncanny even, because the AI is striving for photorealism when the game was not designed that way, e.g. Hogwarts Legacy with its pleasantly cartoonish looks.
Got to admit that I was surprised at the backlash against this. There is no way that this will replace the original artistic intent, because the game still has to be designed to look good on AMD GPUs and consoles which don't use NVIDIA hardware (and the Switch 2 certainly isn't going to be running this either, let's face it, outside of a developer tech demo maybe).
I think a really key question here is to what extent Devs can continue to express artistic intent using this tech. As far as I can see we don't really know yet. Sure the examples in the video do not represent artistic intent but that's for obvious reasons - the devs didn't make the game using this tech. I would agree that I don't want my games turning into GENERIC AI slop but do we know yet how this tech can be used by devs? I think we just need to wait and see really.
@Sausages - But modders on PC do this all the time anyway... just check out Nexus Mods for any game. Tens of thousands of graphics mods to tweak character faces, game assets and textures, as well as colour tone, lighting, etc.
DLSS5 is just another mod. I am fine with this as long as it is an option and doesn't replace the original artistic intent. I mean, common sense tells me it won't, because the games still have to run on AMD consoles and GPUs.
As such, this will surely only ever be optional and hopefully NVIDIA will see sense and separate this 'filter' from actual DLSS which is used for image reconstruction/upscaling.
@Sausages Not sure where assumptions like me living in a boring German town, or altering characters to have big knockers, come from, but maybe just respect another opinion?
Looking at the 4K comparisons more closely, there's also a peculiar way it does eyes, noses and mouths that tends to homogenize characters' renderings, and I think people will get fed up with it at some point, the same way they're fed up with UE5 idiosyncrasies.
There were always complaints regarding DLSS. Are "synthetic" pixels even real? Are fake frames even real? Do artificial rays seem right? Now we get into machine-learning texture/shading/reflection manipulation.
I like what technology has brought us so far, and that it keeps evolving. It is important, though, to have options to toggle the new tech off completely, or parts of it, until it is mass-accepted as an overall win. I would tune down the face retouching on a first playthrough of a game, but would gladly see it on max for a second spin. Other material and lighting enhancements look better overall, but some cases need work. Consistency is also key.
This tech and much more are coming, and bashing on the team for their coverage counters the main purpose of DF. But a more objective analysis is indeed welcome.
In the future I suspect that DLSS 5 will be seen as the point where Nvidia jumped the shark.
Very premature reactions from many people on YouTube etc and here. Learn more about the technology, and give DF some credit for the fact they have actually seen this running. Remember also that developers always have to compromise their artistic vision when they ship even their own stuff. I want to hear more about the level of control developers have. Reading a write-up by 'Ryan Shrout' (former Intel, I think) was interesting in this regard.
People also should have realised this was coming, and it will get better and better; it also benefits from high-quality inputs. Developers will learn to merge their authored inputs and then tune the neural part. And to the people saying it's going to be inconsistent frame to frame... seriously? Do you think they would announce this if that was not a hurdle they had obviously overcome? None of you can judge this until you see how game devs use it, and crucially, until you have used it yourself.
There is a huge herd mentality around 'AI slop' on the internet, and it's like it's turned people's critical thinking into 'slop' too. I do agree that the one shot I thought was not as great was the first shot of Grace (the other one looked amazing), but really look at it: the geometry is not changed, it really is the shading and lighting. Also try to imagine what Grace, as she looks in that shot, would actually look like in reality. Many games have very attractive and idealised characters; Grace has a very symmetrical face with mostly unblemished skin, and many female lead characters in games, surprise surprise, are models in real life. Anyway, I went and looked at many other comparison shots and I thought DLSS 5 looked better. If devs can truly tune it and mask elements, then you could have glorious-looking environments and subtly improved-looking characters, where they get that last few per cent dialled in via DLSS 5.
Strap in, Digital Foundry, because I think you're going to be in for a bumpy ride... Disgruntled gamers are one of the most exasperating groups on the internet.
I have to say, the visual impact of this technology is quite dramatic. I agree that the ambient occlusion and smaller shadows look greatly improved. However, I have my reservations on the lighting aspect, which is what produces the greatest visual impact.
My concern is that lighting is a large part of atmosphere and art direction that is now being to a larger extent handed over to AI. You say that this is mitigated by some "levers" developers can move to change the impact, but I do not think that any such large overhaul to a lighting system in a game can accurately retain the artistic vision of the director and the art team.
A parallel can be drawn to GBC emulation — on modern screens the colours are often oversaturated due to the original developers having to compensate for the screen technology of the time. So which one is the artist's intent? Obviously, the image produced on the original hardware, since that is what was used as reference during development on how the games should look. Similarly, artists for modern games use the rendering technology that is available to them to create and adjust the image they have in their head. When this technology undergoes a change as drastic as seen in DLSS 5, it risks overcorrecting choices that were made to align with the vision when viewed through the lens of the old renderer. It also risks revealing details and flaws that were not visible using the old renderer, and as such, not taken into account by the art team when creating the models and materials. All this might result in an end presentation that is far from what the art director etc originally envisioned.
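For anyone unfamiliar with the GBC example above: emulators commonly approximate the original LCD by mixing the colour channels with a small correction matrix. Here's a purely illustrative sketch of that idea; the coefficients are made up for the example, not taken from any real emulator:

```python
import numpy as np

# Hypothetical channel-mixing matrix: each output channel is a weighted
# blend of the raw RGB channels, which desaturates the harshest colours
# the way the original LCD did.
CORRECT = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

def correct_gbc(rgb):
    """Map raw framebuffer colours toward what the original screen showed."""
    return np.clip(rgb @ CORRECT.T, 0.0, 1.0)

pure_red = np.array([1.0, 0.0, 0.0])
displayed = correct_gbc(pure_red)  # red bleeds slightly into green and blue
```

The point of the analogy holds either way: the "correct" image is defined by the display the artists actually targeted, not by the rawest possible output.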
Lastly, simply because the developer includes a technology like DLSS 5 in their game, does not necessarily mean it aligns with their vision. Oftentimes such decisions are not beholden to the opinions of the art team and are made by the higher ups in an effort to make the studio more "forward looking" and in step with technology, which can be seen as positive by investors and shareholders.
P.S.
It seems that a lot of the comments and feedback on this topic are more reactionary than constructive. I wish you the best in surviving this heated controversy.
Real time AI slop, now in your videogames. Great, thanks Nvidia
@RichardTucker Yes I've too noticed the word "slop" being greedily adopted by the frenzied masses. Stuff like that always reminds me of those scenes in the middle ages where some person is brought to a scaffold in the town square and the people gather in masses to shout profanities and throw rotten fruit without even knowing anything about the case or person in question. Just because we carry smartphones these days doesn't mean people have changed.
@NetshadeX ha yeah good analogy.
@maxeez0323 I don't know what we should expect DF to say after sitting through a couple of hours at a supervised demo session and not being able to fully dissect the technology themselves. Far from bashing it out of the gate, they did mention "is Nvidia's interpretation of that future what gamers and developers actually want?" and "There's a lot to process here - and there's the sense that we're still not fully aware of the full implications of this technology", so they did flag that there could be implications for gamers and developers.
But from a technology standpoint this is amazing to achieve from a single GPU running realtime graphics, and something I expect DF to report on, without them telling me how to feel about it.
Just my thought.
This is the first DLSS feature I've ever had a negative reaction to. On a purely technical basis, I think it's cool and interesting that we've reached a point where something like this is possible in real-time rendering.
But that doesn't mean it's good. I'm sure a deeper dive on technical specifics of DLSS5 is coming at some point, but it doesn't FEEL accurate to describe what happened with the RE9 Grace examples as a simple change in scene lighting. It might well BE accurate, to the extent that scene lighting also includes things like subsurface scattering, radiance caches, etc. Whatever the method, she was no longer recognizable as the same character and that's a bad result.
Overall this feels like an overreach by NVIDIA and a step in the wrong direction. DLSS SR, FG, RR are great models and I wish NVIDIA's next ML model had been something along those lines.
With DLSS5, it feels like their working definition of "graphical fidelity" changed from "fidelity to the game's core render" to "fidelity to whatever version of reality NVIDIA trained its world models on". If that's true, it's a major shift and one that's not good for anyone.
I don't understand the vitriol.
When it's released, don't use it.
I don't mind it, especially since the developers seem to be buying in.
I'm not sure if I'm keen on this or not. The faces are particularly uncanny in motion, but I expect that, like DLSS 1, it will all improve with time. But congratulations on the coverage; not sure why DF are getting a kicking, seems a bit shoot-the-messenger.
@Rich_Leadbetter Long-time follower, first-time commenter: please, please re-evaluate your stance on this. I've been blocking AI-generated stuff on socials for a while, and now my games are gonna look like this? Why? It would be fine if the artists did this themselves; it would look a million times better. But AI? What will happen when devs figure out they only need to do the bare minimum and let AI polish everything? I don't like the word, but it will be slop all around! Help us, DF, you're our best hope!
Awesome, totally cool.
The true "slop" is the knee-jerk "AI slop" comments from people who have no imagination for how game-changing this technology will be as it matures.
@MrJhan That it is changing games' art styles, and why is this a thing? I want my game to look like a game, not an AI porn ad. They had like a single line of caution, sure, but impressions are what matter, and they are falsely praising this. It doesn't look good.
From a tech standpoint I agree, I genuinely think feats like this are cool. However, its execution is insanely poor, and it's kind of a case of "who even wanted this in the first place?" It's purely artificial and varies from user to user. It currently takes one 5090 to process DLSS 5 and a separate 5090 for the game itself. I don't really know how they'll scale this down in time.
I signed up just to say that the utter lack of respect for your audience in posting this gushing slop about slop means I've checked out of everything not retro-related. Previously I have at times disagreed with your takes on whether pre-baked lighting looks better than raytracing in scenes and other minor differences, but if you can't see how this utterly changes the artistic design by developers, your analyses mean nothing. Nothing at all. From lighting with non-existing sources to lost light placements to facial features that are completely altered in appearance and the characters no longer look like the same person... Whose supply have you been on? I hope the Nvidia money is really good, because you'll be bleeding support over this.
I created an account just to voice an opinion as well, after following Digital Foundry for maybe more than ten years.
This absolutely maims artists and their creative vision. You are letting AI make aesthetic decisions based on "likeness" and feeding it to the consumer as "invention"? ***** hell.
After the corona outbreak, everyone and their mother released a half-baked slop of a game, because people were staying at home to play video games and the numbers were at their peak. How did that backfire? The gaming industry almost ground to a halt because consumers don't have time for your undercooked *****. Now you are taking a big step in this direction, letting developers make even less-baked AI slop of a ***** because they can "rely" on DLSS 5 to fix their non-existent vision.
Bit baffled by the controversy over this. I watched the video last night and was impressed, and am looking forward to its release. I wish I had been able to play RE:R like that last week. People can fight over 'artistic integrity being compromised' but come on - it is just a tech demo. They said they got Todd Howard to sign off on the Starfield changes, so it wasn't all done without consent. In some games we want as close to photorealism as we can get, and the truth is AI is just going to help. Lighting is remarkably complex to model, and game engines have been doing tricks forever to make things look 'better'. This is just a tech demo of HOW this new tech could improve things a stage further.
We all hated on frame-gen when that came out, but the truth is when it works nicely, it just helps. When DLSS5 ships you'll probably be able to inject it into older titles, have a play and decide if you want it on - but anything new will have been tuned and signed off by the team. If it looks anywhere as good as those demos I can't wait to check it out.
Depressing that people can't engage sensibly on an interesting and potentially divisive issue. I do think Nvidia should have eased folk in to this, the announcement pics feel like they have been dialled up. But the results are impressive, whether you like the aesthetic or not, and it'll be a hugely powerful tool.
Call a spade a spade. This looks horrible and is a perfect example of billion-dollar companies forcing nonsense no one wants. This looks objectively awful, and I couldn't have lost respect for DF faster. Just say from moment one what everyone not paid by Nvidia is thinking: this is awful.
There's a lot of people misunderstanding what this technology is doing. It's not using "AI Image Generation" technology like you see in ChatGPT or Google or Adobe etc. It's not an "AI Filter" either. It doesn't learn from images scraped across the internet.
The only thing it's "generating" is a new pass of improved lighting effects. Yes, some of these have a dramatic effect on the way characters look, but it isn't actually changing any of their geometry or textures. All of that detail is already there in the normal maps. It's simply understanding that detail as micro geometry that can cast micro self shadowing, and doing other things like light penetrating surfaces and simulating pathtraced lighting. The original artistic intent is fully preserved. This just gets us closer to what the artist had in mind than the original engine could.
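To make the "relighting existing detail, not new geometry" point concrete, here's a purely illustrative toy sketch (this has nothing to do with Nvidia's actual model; the function and values are made up): shading a surface using only its existing normal map, so detail the artist already authored catches light and self-shadows without any asset being changed.

```python
import numpy as np

def shade_from_normals(albedo, normals, light_dir):
    """Toy Lambertian pass: relight existing albedo using the normal
    map's micro detail, changing no geometry or textures."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # N·L per pixel; clamping at zero is the micro self-shadowing:
    # texels whose normals face away from the light go dark.
    ndotl = np.clip(np.einsum('ijk,k->ij', normals, l), 0.0, 1.0)
    return albedo * ndotl[..., None]

# One flat-lit pixel vs. one whose normal tilts away from the light.
albedo = np.ones((1, 2, 3))
normals = np.array([[[0.0, 0.0, 1.0], [0.8, 0.0, 0.6]]])
lit = shade_from_normals(albedo, normals, [0.0, 0.0, 1.0])
```

Same albedo, same normals, different result purely from how the light is evaluated: that's the category of change being described, just vastly more sophisticated.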
Great work showcasing the first look at this tech. It looks very promising.
It is seriously impressive that this model runs (just by itself) on a 5090. But holy hell does it look awful. It is abominable what it does to the games. Every single example just kept getting worse. The way it sanitized some of the scenes from Assassin's Creed and Oblivion just looked horrendous.
@T_W my immediate thought was "John must be hating this right now"
This is a perfect example of how a multitude of subtle changes can dramatically affect a character's appearance. This is not simple enhancement. This is generative AI: inferencing algorithms guessing and making judgments about how a character should be presented based on the given character model. If you put your face through an AI image generator, you start to find subtle changes that completely alter your overall look and presentation, such that you no longer recognize the generated image as you. This is what DLSS 5 is also doing. These are dramatic reskinnings of characters' faces being generated on the fly by algorithms, via generative geometry and remodeling of displacement maps; the textures seem to be reprocessed too, perhaps to tease out additional detail via sharpening, or perhaps the generative renderer also touched those pixels in an additive way by trying to remove typical rendering errors. It's not simply enhancing the character model: it's making a multitude of subtle changes that inevitably alter the character's face, including nose, eyes and mouth, and I also see hair being changed in certain ways. Female characters seem more drastically affected than male characters, and this is due to inherent training bias in AI algorithms.
DF, you should not be supporting this tech. This is too far a step from simply improving upscaling and improving framerates. This is reimagining art on the fly and inserting it over the original content (and it is inserted, as DLSS 5's generative AI is fully tied to frame generation). I just can't support that.
@JasonMZW20 I'm sure DF will do an extensive dive into the tech soon but for now, watch this instead, specifically the part from 0:37 and up. You'll see Grace's face is fully intact from an assets perspective, the dimensions don't change anywhere. It's the addition of light and reflection that alters the look so dramatically. People do mistake that for an AI face generator when really it IS just the addition of reflection and shadow that does the altering. Please take the time to watch: https://youtu.be/JKDW9WAg-EQ?si=WvYBtCGaULmvQFew
From the DF article: "While there could be some comparisons to generative AI, DLSS 5 is consistent and coherent in its rendering of the game world,"
This kind of suggests that there isn't any generative AI involved.
From Tom's Hardware:
At a press Q&A with Tom's Hardware at GTC 2026, Nvidia CEO Jensen Huang downplayed criticism of DLSS 5, the company's new use of AI and neural rendering to infer how certain features of games would look if they were more photorealistic.
He added that developers can still "fine-tune the generative AI" to make it match their style, adding that DLSS 5 adds generative capability to the existing geometry of the game, but that it "doesn't change the artistic control."
"It’s not post-processing, it’s not post-processing at the frame level, it’s generative control at the geometry level," he said.
Huang also said that developers can try the tool and see how they want to use it, suggesting that it's up to a developer to try to make a "toon shader" or see if the game should be "made of glass."
"All of that is in the control — direct control — of the game developer," he said. This is very different than generative AI; it’s content-control generative AI. That’s why we call it neural rendering."
I especially love that last remark from Jensen, it's almost like he's having a hard time rationalising what generative AI is.
Think I've gone from scepticism, while deferring judgement until I'd learned more, to "nah, probably not" after Nvidia tried to make it sound less like generative AI by admitting that it's generative AI... still eager to learn more, but they've got some work to do to convince gamers this is a worthwhile thing.
I’m seeing commenters saying it’s “just lighting”. It’s not. It’s clearly not. And even nvidia aren’t saying that.
"The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the of geometry and textures and everything about the game with generative AI," - Jensen Huang
The tech is – potentially – interesting, but your takes on its aesthetic quality just felt a little... off brand, I guess? Each to his own and all that, but I'm not really into the super-high-contrast/yassified smartphone photography look.
@JayB0b It's indeed not just lighting. Here's more footage, outside the promotional material, where they say it can infer non-existent reflections.
youtu.be/LHp9QDIwLZk
Todd Howard straight-up said: “With DLSS 5 the artistic style and detail shine through without being held back by the traditional limits of real-time rendering. We’re excited to work with this new technology and look to bring DLSS 5 to Starfield and future Bethesda titles.”
Bethesda followed up publicly: the Starfield demo was a “very early look,” their art teams will tune the lighting/effects themselves, it stays under artists’ control, and it’s totally optional for players.
Capcom’s Jun Takeuchi praised it for making Resident Evil feel more cinematic and immersive.
Ubisoft devs on Assassin’s Creed Shadows said it lets them finally build the worlds they’ve always dreamed of.
These aren't random indies; these are the exact "major studio creators" whose "artistic creation" people claim is being ruined, and they see it exactly as they dreamed of it: a VFX-style long-render result (photoreal subsurface scattering on skin, fabric sheen, complex shadows, realistic materials) delivered in real time, without needing brute-force hardware that doesn't exist yet.
NVIDIA’s own technical breakdown backs this 100%: DLSS 5 isn’t hallucinating new faces or textures from thin air. It takes the game’s existing color buffers, motion vectors, and 3D scene data as input, then uses a neural model to infuse photoreal lighting and materials that are anchored to the source 3D content. Geometry, base textures, and artistic intent stay intact; it just adds the stuff traditional real-time engines physically can’t compute fast enough (rim lighting, contact shadows, translucent skin, etc.). Devs get masking, intensity sliders, color grading, full artistic veto power. It’s enhancement on top of what’s already there, not replacement. People calling it “AI slop overwriting everything” are literally ignoring the whitepaper-level description and the devs who are implementing it.
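To make the "masking, intensity sliders, full artistic veto power" point concrete, here's a toy sketch of my own (not NVIDIA code, and the function and parameters are purely hypothetical) of the kind of artist-controlled blend that description implies: the neural pass proposes relit pixels, and a per-pixel mask plus an intensity slider decide how much of that proposal actually reaches the final frame.

```python
# Toy sketch, my own invention: an artist-controlled blend between the
# original frame and a hypothetical neural relighting output.
def blend_relight(original, relit, mask, intensity):
    """Per-pixel blend: out = original + mask * intensity * (relit - original).

    original  - list of original color-buffer values (toy 1-channel "image")
    relit     - list of values proposed by the neural relighting pass
    mask      - per-pixel artist mask in [0, 1] (0 = veto the effect here)
    intensity - global slider in [0, 1] (0 = ship the untouched frame)
    """
    assert 0.0 <= intensity <= 1.0
    return [o + m * intensity * (r - o)
            for o, r, m in zip(original, relit, mask)]

frame = [0.2, 0.5, 0.8]   # original color buffer
relit = [0.3, 0.7, 0.6]   # hypothetical neural relighting output
mask  = [1.0, 1.0, 0.0]   # artist masks the effect out of the third pixel
print(blend_relight(frame, relit, mask, 0.5))
```

The point of the sketch is just the control model: with intensity at 0, or the mask at 0, the output is the unmodified frame, which is what "enhancement on top of what's already there, not replacement" would mean in practice.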
Honestly it's fascinating to see this reaction to just another tool, and a first look at a tool that isn't even out yet. Granted, it is the Internet, but it's remarkable how quickly things get blown out of proportion before there's even an understanding of what is happening on the tech side. AI hate is clearly real, but people blindly raging about it before any real facts have been presented is immaturity at best.
And really, DF is getting death threats over this? Let's try to remember that we're just here talking about video games, people. I'm passionate about this stuff too, but let's try to maintain a little perspective here and show a little restraint and class.
@Surefire
Todd Howard straight-up said: “With DLSS 5 the artistic style and detail shine through without being held back by the traditional limits of real-time rendering. We’re excited to work with this new technology and look to bring DLSS 5 to Starfield and future Bethesda titles.”
Bethesda followed up publicly: the Starfield demo was a “very early look,” their art teams will tune the lighting/effects themselves, it stays under artists’ control, and it’s totally optional for players.
Capcom’s Jun Takeuchi praised it for making Resident Evil feel more cinematic and immersive.
Ubisoft devs on Assassin’s Creed Shadows said it lets them finally build the worlds they’ve always dreamed of.
So, yes, yes they did.
DF covered this angle in their Q&A, but just re-stating it here:
Game studio approving their IP for use in a tech preview and providing a marketing quote is NOT proof that everyone in the studio feels that way. To me, those quotes speak to studio leadership's (understandable) excitement over the tech as a path to lower development costs.
I'd be surprised if leads & principals on art teams at those studios shared that excitement, but they weren't quoted here.
I’ve just watched DF’s video after reading all the articles about this DLSS5 and how the end of the world is here. I really don't understand the fuss. It mostly looks better to me. I won’t pretend I fully understand how it works, but this to me is exactly how AI should be used in gaming. Not creating stuff from scratch, but enhancing what developers create. Sounds like they have control over it too, so isn’t it just another tool in their box?
The internet really is a strange place.
Hello, Digital Foundry.
Fantastic work on DLSS5, graphical improvements are evident and I cannot wait for the final product. However, the 1st DLSS5 On image for Starfield does not seem to want to load for me, whilst its DLSS5 Off counterpart image does load for me. It's the scene with the two marines.
Thank you all for your amazing work, and keep persevering.
Just listened to the Q&A about this subject (great show as always). Yes, the graphics look crisp and future-facing, as many an Nvidia demo has done over the years. But why the hate at a team? They are telling us what they are seeing (with their trained eyes), and for one I respect that. Gaming is an ever-evolving medium which crosses borders. I've many people on my friends lists that I've never even met, but we are friends and have a lot in common. The people that have sent death threats to the team should be ashamed of themselves.
I see a lot of hate for the technology, but something does confuse me here.
Why should I, the consumer spending my money to play a game, have one iota of concern for what the artist's intention is? I honestly don't think I could care any less how an artist meant for something to look. We're all going to interpret things in our own way regardless; that's the beauty of art.
I see the original Grace as a plastic-looking blob, but the DLSS 5 version as a real human. That's a huge step in immersion. It's hard to immerse yourself if everyone always looks fake.