Comments

Re: Nvidia's new DLSS 5 Brings Photo-Realistic Lighting To RTX 50-Series

LamboCow

@Surefire

Todd Howard straight-up said: “With DLSS 5 the artistic style and detail shine through without being held back by the traditional limits of real-time rendering. We’re excited to work with this new technology and look to bring DLSS 5 to Starfield and future Bethesda titles.”

Bethesda followed up publicly: the Starfield demo was a “very early look,” their art teams will tune the lighting/effects themselves, it stays under artists’ control, and it’s totally optional for players.

Capcom’s Jun Takeuchi praised it for making Resident Evil feel more cinematic and immersive.

Ubisoft devs on Assassin’s Creed Shadows said it lets them finally build the worlds they’ve always dreamed of.

So, yes, yes they did.


These aren’t random indies; these are the exact “major studio creators” people claimed this tech was “ruining artistic creation” for, and they see it exactly as they’d dreamed: a VFX-style long-render result (photoreal subsurface scattering on skin, fabric sheen, complex shadows, realistic materials) delivered in real time, without needing brute-force hardware that doesn’t exist yet.

NVIDIA’s own technical breakdown backs this 100%: DLSS 5 isn’t hallucinating new faces or textures from thin air. It takes the game’s existing color buffers, motion vectors, and 3D scene data as input, then uses a neural model to infuse photoreal lighting and materials that are anchored to the source 3D content. Geometry, base textures, and artistic intent stay intact; it just adds the stuff traditional real-time engines physically can’t compute fast enough (rim lighting, contact shadows, translucent skin, etc.). Devs get masking, intensity sliders, color grading, full artistic veto power.

It’s enhancement on top of what’s already there, not replacement. People calling it “AI slop overwriting everything” are literally ignoring the whitepaper-level description and the devs who are implementing it.
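To make the “enhancement, not replacement” point concrete, here's a minimal sketch of the compositing model that description implies: the neural pass gets blended over the engine's own frame, gated by an artist-authored mask and a global intensity slider. Everything here is hypothetical, NVIDIA hasn't published a DLSS 5 API; this is just the math of masked blending.

```python
# Hypothetical sketch of masked neural-enhancement compositing.
# All function and parameter names are illustrative, not a real DLSS API.

def composite(engine_px: float, neural_px: float,
              mask: float, intensity: float) -> float:
    """Blend one color value: weight = mask * intensity, clamped to [0, 1].

    engine_px:  the engine's own rendered value (artists' work, untouched)
    neural_px:  the neural pass's enhanced value
    mask:       per-pixel artist mask (0 = leave alone, 1 = full effect)
    intensity:  global slider the dev/player exposes (0 disables the feature)
    """
    w = max(0.0, min(1.0, mask * intensity))
    return engine_px * (1.0 - w) + neural_px * w

# With intensity 0, the engine frame passes through unchanged ("artistic veto"):
frame = [composite(0.5, 0.9, m, 0.0) for m in (0.0, 0.5, 1.0)]
# frame == [0.5, 0.5, 0.5]
```

The key property is that the original render is always one endpoint of the blend: dial intensity to zero (or mask a region out) and you get exactly what the artists shipped, which is the "totally optional" behavior Bethesda described.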