The ongoing RAM crisis - dubbed "RAMageddon" - poses an interesting question for the development of new games. In the PC space, graphics cards with 8GB of framebuffer memory are struggling with the VRAM requirements of triple-A games. On the flip side, however, the pricing and availability issues surrounding any kind of memory right now suggest that perhaps developers should be targeting devices with more restricted amounts of memory. The question is: how viable is that?

The discussion was raised in the latest episode of the DF Direct Q+A Show, where it was pointed out to us that the discrete GPU userbase splits roughly into three camps: a third have less than 8GB of VRAM, a third have exactly 8GB, and the remaining third have more. What's interesting is that with GPUs like the extremely popular RTX 4060 and its RTX 5060 successor, the capabilities of those cards have scaled much faster than their VRAM. In fact, both cards represent a regression from the RTX 3060's 12GB of memory.

So we end up with situations where the RTX 5060 is actually a pretty reasonable card in some respects. $299 gets you a GPU with Nvidia's Blackwell features on top of rasterisation and RT performance in line with a top-of-the-line RTX 2080 Ti from 2018. The problem is that it has less memory (8GB vs 11GB) and is therefore less capable and less balanced than the Turing classic. You may need to turn off ray tracing or lower texture quality for a smooth experience, and the result can be "a bit ugly" or downright "fugly" depending on the game.

So, should developers fundamentally change the way that games are made to embrace a new, RAM-starved reality?

In the short term, we just don't see the nature of games themselves changing, for the plain and simple reason that this gaming generation has already established a precedent: developers and publishers want their titles played on as many different devices as possible. Beyond Nintendo - and to a lesser extent, Sony - it's all about making games that address everything from Steam Deck through consoles upwards to the most powerful PCs at the top-end.

The question is how well that scalability is accomplished and how much dedication goes into getting it right for the greatest number of users. Going back to a "classic" triple-A game that struggled on 8GB GPUs, The Last of Us Part 1 was perfectly playable on VRAM-constrained graphics cards, but you had to use medium quality textures. The problem wasn't viability as such, it was quality: medium textures just looked really bad. This was improved over time but was clearly an oversight at launch. Then there was Stellar Blade: high textures stuttered badly on 8GB cards, while medium textures were inconsistent - a mixture of impressive console-quality assets and massively pared-back art within the same scene that didn't hit the mark.

I don't think getting games to run on lower-end PCs with 8GB GPUs is the problem, and I don't think developers need to scale back their game designs. But I do think there's an argument for "best practices" to ensure that at least console-quality experiences are achievable on 8GB GPUs, something we've discussed plenty of times in our PC coverage.

If publishers want to satisfy the maximum audience, they cannot simply write off 8GB users as legacy. And I think there is some evidence that improved efficiency is starting to happen. Modern engines and middleware increasingly revolve around smarter streaming and virtualisation of assets rather than brute‑force loading. Texture streaming systems continue to improve. Compression techniques - including more advanced formats and, longer‑term, machine‑learned upscaling of textures and other assets - offer a way to trade GPU cycles and CPU work for reduced memory footprints. Epic's ecosystem is a prime driver: Unreal Engine 5 is used for a huge slice of AAA development and improvements there in streaming and memory efficiency would propagate widely.
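To illustrate the basic trade-off those streaming systems make, here's a minimal, hypothetical sketch of one approach: fitting streamed textures into a fixed VRAM budget by dropping mip levels from the lowest-priority assets first. The function names, the priority scheme and the 4:1 per-mip size ratio are illustrative assumptions, not any engine's actual implementation.

```python
# Hypothetical sketch: greedily degrade the least important textures
# (one mip level at a time) until the set fits a fixed VRAM budget.
# Each mip step halves width and height, so memory drops by ~4x.

def fit_to_budget(textures, budget_bytes):
    """textures: list of dicts with 'name', 'full_size' (bytes at mip 0)
    and 'priority' (e.g. screen coverage; higher = keep sharper).
    Returns a {name: mip_level} mapping that fits the budget."""
    mips = {t["name"]: 0 for t in textures}

    def size(t):
        # Each mip level quarters the memory footprint.
        return t["full_size"] // (4 ** mips[t["name"]])

    while sum(size(t) for t in textures) > budget_bytes:
        # Only consider textures not yet at the lowest streamed mip.
        candidates = [t for t in textures if mips[t["name"]] < 4]
        if not candidates:
            break  # everything is already fully degraded
        victim = min(candidates, key=lambda t: t["priority"])
        mips[victim["name"]] += 1
    return mips

assets = [
    {"name": "hero_albedo",  "full_size": 64 << 20, "priority": 10},
    {"name": "wall_albedo",  "full_size": 64 << 20, "priority": 5},
    {"name": "distant_rock", "full_size": 64 << 20, "priority": 1},
]
# Three 64MB textures (192MB total) squeezed into a 48MB budget:
print(fit_to_budget(assets, 48 << 20))
# -> {'hero_albedo': 1, 'wall_albedo': 4, 'distant_rock': 4}
```

A real engine would do this continuously per frame with far smarter priority metrics, but the principle is the same: when memory is short, something on screen gets blurrier, which is exactly the quality cliff seen in the 8GB examples above.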

I'd say it's hard to see developers changing their game design ethos to accommodate a RAM-constrained future, but we should see more care and attention going into improved scalability across the widest possible range of hardware - where 8GB cards are clearly of crucial importance.

I also think this is predominantly a PC problem and consoles should emerge relatively unscathed. Current‑gen machines have fixed memory budgets and mature development environments. We're now deep enough into the cycle that most studios have settled into a visual and systemic target that fits these constraints. Sometimes the compromises can be problematic - particularly in terms of texture quality on some Xbox Series S games - but ultimately, a push towards optimising for 8GB PC graphics cards can only help the junior Microsoft console and indeed Switch 2.

But returning to the PC space, there does seem to be a definable hardware limit for running the latest and most challenging games. There was a time when discrete 6GB graphics cards were commonplace, but aside from the cut-back RTX 3050, RTX 2060, RX 6400 and RX 6500 XT, cards with less than 8GB of VRAM tend to lack the modern feature support that benefits a lot of games. The situation with laptops with discrete GPUs is murkier still. 8GB GPUs, by contrast, can offer a solid experience if developers invest in asset tuning, smarter streaming and sensible presets.

Going forward, any shift in the status quo will likely start with the arrival of new consoles. What we do know about these machines is that they are investing big in features like ray tracing, path tracing and especially machine learning. The existing 16GB standard seen on PS5, PS5 Pro and Xbox Series X won't cut it - otherwise Sony and Microsoft risk the same kind of hardware imbalance seen at a smaller scale with cards like the RTX 5060.

RAM crisis or not, I'm still expecting those machines to ship with 24GB of unified memory at a minimum. Target profiles may well rise and PC memory requirements could follow in lockstep: 8GB GPUs may start to fall out of support, with PS5's 16GB becoming entry-level.

For now though, RAMageddon can be seen as a transitory pain point - but also an opportunity for developers to get to grips with the kind of practices they'll need to master as another lengthy cross-gen period kicks in. Of course, more capable hardware and more memory will deliver improved experiences - but that doesn't mean older kit with less memory shouldn't receive plenty of attention for the best experience possible.