Sony's gaming-hardware collaboration with chipmaker AMD is poised to continue for the foreseeable future, with the firms announcing a PlayStation-linked initiative dubbed "Project Amethyst" in late 2024. Their most recent update on the joint effort reads a lot like a preview of the foundational GPU technology for the next PlayStation console, and it includes three macro-level technology tentpoles that haven't yet appeared in a Sony console.

PlayStation 5 Lead Architect Mark Cerny offered to answer Digital Foundry's questions shortly after the new Sony video went live on October 9, and the resulting email interview has informed our analysis, posted in our own video below.

We explore the intriguing possibilities of Neural Arrays (a more cohesive management of compute units for machine learning-specific tasks), Radiance Cores (which sound like an AMD equivalent of Nvidia's "RT Cores" on RTX-branded GPUs), and Universal Compression (a technique to improve GPU memory bandwidth). While Cerny's answers inform some of our takes in the video, we're additionally choosing to print our Q&A in its entirety below. Think of it as a read-along to enjoy while you take in our context and interpretation, so that you might join us in reading Sony and AMD's GPU-future tea leaves.

Sony's initial Project Amethyst announcement came nearly a year ago, and it described a partnership where AMD leads on the hardware side while Sony primarily contributes machine learning R&D. Judging by Cerny's chat with AMD Senior VP Jack Huynh, that collaboration has been quite fruitful, and the duo continues to insist that the fruits of Project Amethyst will find their way into general-purpose AMD GPUs (read: not just PlayStation hardware).

As part of Project Amethyst, Sony and AMD are putting final touches on a version of FSR 4 that will improve the PlayStation 5 Pro's PSSR implementation, slated to arrive on that console sometime in 2026.

DF: Do you think of the leap from PS5 Pro to upcoming hardware as more of a progression, or a step-change moment in how developers make games?

Mark Cerny: We’ve shifted our focus substantially. In the past, we were largely creating custom technologies just for PlayStation platforms. But now with Project Amethyst, we’re placing substantially more of a focus on co-engineering and co-development with AMD on their roadmap hardware and libraries.

That change will have a very large impact, because when developers can create their technology with the understanding that it will work across multiple platforms such as desktops, laptops, consoles, etc., there will be much larger pickup of new features. So I believe we’ll see outsized impact from the technologies announced this past week!

DF: For PS5 Pro, your explicit machine learning target was to accelerate lightweight CNNs for game graphics. For next-generation hardware, how much broader is your machine learning [ML] remit? Can you envisage giving developers direct, low-level access to the ML hardware?

Mark Cerny: There are many types of ML, and as a result, multiple toolchains will be needed – small models need an ONNX [Open Neural Network Exchange] graph compiler, LLMs [large language models] need a more specialized toolchain, and when ML is integrated into a pixel shader (for applications like neural textures), yet another kind of support is needed. To directly answer your question, though, I think over time we’ll learn how important it is to be “close to the metal” on this particular architecture.
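Cerny's mention of ML inside a pixel shader points at techniques like neural textures, where a tiny network is evaluated per sample rather than fetching texels directly. As a rough illustration of the idea (our own sketch with made-up weights, not Sony or AMD's actual design), a per-pixel MLP decode might look like this in pure Python:

```python
import math

# Hypothetical weights for a tiny 2-input, 4-hidden, 3-output MLP
# "neural texture." In a real pipeline these would be trained offline
# per texture and evaluated by GPU ML instructions inside the shader.
W1 = [[0.9, -0.4], [0.2, 0.7], [-0.5, 0.3], [0.6, 0.1]]
B1 = [0.0, 0.1, -0.1, 0.05]
W2 = [[0.5, -0.2, 0.3, 0.1],
      [0.1, 0.6, -0.4, 0.2],
      [-0.3, 0.2, 0.5, 0.4]]
B2 = [0.1, 0.0, 0.2]

def relu(x):
    return x if x > 0.0 else 0.0

def sample_neural_texture(u, v):
    """Evaluate the tiny MLP at texture coordinate (u, v), returning (r, g, b)."""
    hidden = [relu(w[0] * u + w[1] * v + b) for w, b in zip(W1, B1)]
    rgb = [sum(w[i] * hidden[i] for i in range(4)) + b for w, b in zip(W2, B2)]
    # Squash to [0, 1], standing in for a final sigmoid activation.
    return tuple(1.0 / (1.0 + math.exp(-c)) for c in rgb)

color = sample_neural_texture(0.25, 0.75)
print(color)
```

The point of the sketch is the shape of the workload: a handful of small matrix-vector products per pixel, which is exactly the kind of math that benefits from dedicated ML hardware and the "yet another kind of support" Cerny describes.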

DF: You were famously surprised by the uptake of ray tracing in early PS5 titles – do you see the same thing happening with machine learning in future hardware?

Mark Cerny: What will machine learning even be in a few years’ time? I think we’re all guaranteed to be surprised. Having a powerful and flexible architecture for ML is definitely going to help with the challenges ahead.

DF: Can you tell us more about which characteristics of GPU-bound data Universal Compression can exploit? After all, the things a GPU can process are diverse: textures, geometry, 3D textures, voxel bricks, etc.

Mark Cerny: With these technologies, we know that great stuff is coming, but it’s difficult (or impossible) to quantify. For Universal Compression, I have high hopes for synergies with ML. For example, FSR and PSSR are recurrent neural networks that write feature maps to system memory – how well will those feature maps compress?
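Feature maps tend to compress well because activations like ReLU leave them sparse and low-entropy. As a quick back-of-envelope check (our own illustration using general-purpose zlib, not the actual Universal Compression scheme), compressing a simulated post-ReLU feature map shows the kind of headroom Cerny is hinting at:

```python
import random
import zlib

random.seed(42)

# Simulate a post-ReLU feature map: mostly zeros with occasional
# activations, quantized to 8 bits. Real feature maps vary, but this
# degree of sparsity is typical after ReLU.
SIZE = 64 * 64
feature_map = bytes(
    random.randrange(1, 256) if random.random() < 0.15 else 0
    for _ in range(SIZE)
)

compressed = zlib.compress(feature_map, level=9)
ratio = len(compressed) / len(feature_map)
print(f"raw: {len(feature_map)} bytes, compressed: {len(compressed)} bytes "
      f"(ratio {ratio:.2f})")
```

Hardware lossless compression works transparently on cache lines rather than whole buffers, but the underlying win is the same: sparse, repetitive data costs far fewer physical bytes to move.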

Radiance Cores are similar; they are obviously a great way to do the processing, but I suspect we’ll need to get prototypes into the hands of game developers to understand the degree to which they will be able to “level up” their engines.

DF: Is there also a hardware-based need for universal compression in a world where memory controllers don't scale particularly well to smaller process nodes?

Mark Cerny: You have indeed spotted the win! Bandwidth comes with a very high cost, so if there’s a way to make it more efficient then the gains can be quite large.

DF: With this strong focus on machine learning innovation, where does rasterisation performance sit on your list of priorities for future hardware?

Mark Cerny: Conventional rasterisation is of course very important too, and we will seek out improvements there where possible. But I think we can all agree that the step change will come from ray tracing and machine learning capabilities!