Why Are Modern PC Games Using So Much VRAM?

Even if you’re just a casual follower of tech news, you can’t have missed that the amount of video memory (VRAM) on a graphics card is a hot topic right now. We’ve seen indications that 8 GB may not be sufficient for the latest games, especially when played at high resolutions and with graphics quality set to high or maximum. AMD has also been emphasizing that its cards boast significantly more RAM than Nvidia’s, and the latter’s newest models have been criticized for having only 12 GB.

So, are games really using that amount of memory, and if so, what exactly is it about the inner workings of these titles that requires so much? Let’s lift the hood of modern 3D rendering and take a look at what’s really happening within these graphics engines.

Games, graphics, and gigabytes

Before we delve into our analysis of graphics, games, and VRAM, it might be worth your time to quickly review the basics of how 3D rendering is typically done. We’ve covered this before in a series of articles, so if you’re not sure about some of the terms, feel free to check them out.

We can summarize it all by simply stating that 3D graphics is ‘just’ a lot of math and data handling. As for the data, it needs to be stored as close to the graphics processor as possible. All GPUs have small amounts of high-speed memory built into them (a.k.a. cache), but this is only large enough to store the information for the calculations taking place at any given moment.

There’s far too much data to store all of it in the cache, so the rest is kept in the next best thing – video memory or VRAM. This is similar to the main system memory but is specialized for graphics workloads. Ten years ago, the most expensive desktop graphics cards housed 6 GB of VRAM on their circuit boards, although the majority of GPUs came with just 2 or 3 GB.

No shortage of cache and RAM on these graphics cards

Today, 24 GB is the highest amount of VRAM available, and there are plenty of models with 12 or 16 GB, though 8 GB is far more common. Naturally, you’d expect an increase after a decade, but it’s a substantial leap.

To understand why it’s grown so much, we need to know exactly what the RAM is being used for, and for that, let’s take a look at how the best graphics were created when 3 GB of VRAM was the norm.

Back in 2013, PC gamers were treated to some truly outstanding graphics – Assassin’s Creed IV: Black Flag, Battlefield 4 (below), BioShock Infinite, Metro: Last Light, and Tomb Raider all showed what could be achieved through technology, artistic design, and a lot of coding know-how.

The fastest graphics cards money could buy were AMD’s Radeon HD 7970 GHz Edition and Nvidia’s GeForce GTX Titan, with 3 and 6 GB of VRAM, and price tags of $499 and $999, respectively. A really good gaming monitor might have had a resolution of 2560 x 1600, but the majority were 1080p – the bare minimum these days, but perfectly acceptable for the period.

To see how much VRAM these games were using, let’s examine two of those titles: Black Flag and Last Light. Both games were developed for multiple platforms (Windows PC, Sony PlayStation 3, Microsoft Xbox 360), although the former also appeared on the PS4 and Xbox One shortly after launch, and Last Light was remastered the following year for the PC and newer consoles.

In terms of graphics requirements, these two games are polar opposites; Black Flag is a fully open-world game with a vast playing area, while Metro: Last Light features mostly narrow spaces and is linear in structure. Even its outdoor sections are tightly constrained, although the visuals give the impression of a more open environment.

With every graphics setting at its maximum value and the resolution set to 4K (3840 x 2160), Assassin’s Creed IV peaked at 6.6 GB in urban areas, whereas out on the open sea, the memory usage would drop to around 4.8 GB. Metro: Last Light consistently used under 4 GB and generally varied by only a small amount of memory, which is what you’d expect for a game that’s effectively just a sequence of corridors to navigate.

Of course, few people were gaming in 4K in 2013, as such monitors were prohibitively expensive and aimed entirely at the professional market. Even the very best graphics cards were not capable of rendering smoothly at that resolution, and increasing the GPU count with CrossFire or SLI wouldn’t have helped much.

Dropping the resolution to 1080p, closer to what PC gamers would have been using in 2013, had a marked effect on VRAM use – Black Flag dropped to 3.3 GB, on average, and Last Light used just 2.4 GB. A single 32-bit 1920 x 1080 frame is just under 8 MB in size (4K is nearly 32 MB), so how does decreasing the frame resolution result in such a significant reduction in the amount of VRAM being used?
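To put some numbers on that, here’s a quick back-of-the-envelope sketch in C++, assuming a simple 32-bit (4 bytes per pixel) color target and ignoring any padding or alignment the driver might add:

```cpp
#include <cstdio>

// Rough size of a single 32-bit (4 bytes per pixel) color target,
// ignoring any padding or alignment the GPU driver may add.
static double frameSizeMB(int width, int height, int bytesPerPixel = 4)
{
    return double(width) * height * bytesPerPixel / (1024.0 * 1024.0);
}

int main()
{
    std::printf("1080p frame: %.1f MB\n", frameSizeMB(1920, 1080)); // ~7.9 MB
    std::printf("4K frame:    %.1f MB\n", frameSizeMB(3840, 2160)); // ~31.6 MB
    return 0;
}
```

A saving of a couple of dozen megabytes per frame buffer clearly can’t account for gigabytes of difference on its own.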

The answer lies deep in the hugely complex rendering process that both games undergo to give you the visuals you see on the monitor. The developers, with support from Nvidia, went all-out for the PC versions of Black Flag and Last Light, using the latest techniques for producing shadows, correct lighting, and fog/particle effects. Character models, objects, and environmental structures were all made from hundreds of thousands of polygons, all wrapped in a wealth of detailed textures.

In Black Flag, the use of screen space reflections alone requires five separate buffers (including color, depth, ray marching output, and the reflection blur result), and six are required for the volumetric shadows. Since the newer target platforms, the PS4 and Xbox One, had only around half of their total 8 GB of RAM available for the game and rendering, these buffers were at a much lower resolution than the final frame buffer, but it all adds to the overall VRAM use.
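To illustrate how quickly those intermediate targets add up, here’s a rough sketch. The buffer list, resolutions, and formats below are purely illustrative assumptions, not the actual render-target layout of either game:

```cpp
#include <cstdio>

// Illustrative only: the buffer names, sizes, and formats below are assumptions,
// not the real render-target layout used by Black Flag or Last Light.
struct RenderTarget { const char* name; int width; int height; int bytesPerPixel; };

int main()
{
    // Suppose the screen space reflection pass runs at half the width and height
    // of a 4K frame, and the volumetric shadows at a quarter.
    const RenderTarget targets[] = {
        { "SSR color",           1920, 1080, 8 },  // e.g. an RGBA16F target
        { "SSR depth",           1920, 1080, 4 },
        { "SSR ray march",       1920, 1080, 8 },
        { "SSR blur",            1920, 1080, 8 },
        { "Volumetric shadows",   960,  540, 4 },
    };

    double totalMB = 0.0;
    for (const RenderTarget& rt : targets)
    {
        const double sizeMB = double(rt.width) * rt.height * rt.bytesPerPixel / (1024.0 * 1024.0);
        totalMB += sizeMB;
        std::printf("%-20s %6.1f MB\n", rt.name, sizeMB);
    }
    std::printf("Total intermediate:  %6.1f MB (for just two effects)\n", totalMB);
    return 0;
}
```

Because most of these targets are sized relative to the output resolution, rendering at 1080p rather than 4K shrinks nearly all of them by roughly a factor of four, and a full frame involves far more buffers than the handful shown here.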

The developers had similar memory restrictions for the PC versions of the games they worked on, but a common design choice back then was to allow graphics settings to exceed the capabilities of graphics cards available at launch. The idea was that users would return to the game when they had upgraded to a newer model, to see the benefits that progress in GPU technology had provided them.

This could be clearly seen when we looked at the performance of Metro: Last Light ten years ago – using Very High quality settings and a resolution of 2560 x 1600 (around 11% more pixels than 1440p), the $999 GeForce GTX Titan averaged a mere 41 fps. Fast forward one year, and the GeForce GTX 980 was 15% faster in the same game, at just $549.

So what about now – exactly how much VRAM are games using today?

The devil is in the detail

To accurately measure the amount of graphics card RAM being used in a game, we utilized Microsoft’s PIX – a debugging tool for DirectX developers that can collect data and present it for analysis. You can capture a single frame and break it down into every command issued to the GPU, seeing how long each takes to process and what resources it uses, but we were only interested in capturing RAM metrics.

Most tools that record VRAM usage actually just report the amount of local GPU memory allocated by the game and, consequently, the GPU’s drivers. PIX, on the other hand, records three separate values – Local Budget, Local Usage, and Local Resident. The first is how much video memory has been made available to a Direct3D application, a figure that constantly changes as the operating system and drivers juggle resources about.

Local Resident is how much VRAM is taken up by so-called resident objects, but the value we’re most interested in is Local Usage. This is a record of how much video memory the game is trying to use, and games have to stay within the Local Budget limit, otherwise all kinds of problems will occur – the most common being that the program halts momentarily until there’s enough budget again.
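For context, these figures line up with what Direct3D exposes through DXGI: as far as we can tell, PIX’s Local Budget and Local Usage correspond to the Budget and CurrentUsage values reported for the GPU’s local memory segment. A minimal C++ sketch for querying them yourself (with error handling kept to a bare minimum) looks something like this:

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Query the 'local' (on-card) memory budget and current usage for the first adapter.
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
        return 1;

    std::printf("Local Budget: %.1f MB\n", info.Budget / (1024.0 * 1024.0));
    std::printf("Local Usage:  %.1f MB\n", info.CurrentUsage / (1024.0 * 1024.0));
    return 0;
}
```

Note that these values are per-application: a game calling this at runtime sees its own budget and usage, which is exactly what an engine needs for the memory management described next.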

In Direct3D 11 and earlier, memory management was handled by the API itself, but with version 12, everything memory-related has to be done entirely by the developers. The amount of physical memory needs to be detected first, and the budget set accordingly, but the real challenge is ensuring that the engine never ends up in a situation where that budget is exceeded.

And in modern games, with vast open worlds, this requires a lot of repeated analysis and testing.
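How an engine reacts when that budget shrinks is also left to the developer. One mechanism DXGI offers is an event that is signalled whenever Windows adjusts the budget, after which the engine has to trim its own allocations to fit. The sketch below shows the general pattern; the TrimStreamingPoolTo call is a hypothetical engine function, included only to mark where that reaction would go:

```cpp
#include <windows.h>
#include <dxgi1_4.h>

// Sketch: ask the OS to signal an event whenever it changes the video memory budget,
// then shrink the engine's own allocations so usage stays within it.
void WatchBudget(IDXGIAdapter3* adapter3)
{
    HANDLE budgetEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    DWORD cookie = 0;
    adapter3->RegisterVideoMemoryBudgetChangeNotificationEvent(budgetEvent, &cookie);

    // In a real engine this would live on a background thread or be polled each frame.
    if (WaitForSingleObject(budgetEvent, 0) == WAIT_OBJECT_0)
    {
        DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
        adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

        if (info.CurrentUsage > info.Budget)
        {
            // Hypothetical engine call: evict or down-res streamed assets
            // until usage fits back inside the budget.
            // TrimStreamingPoolTo(info.Budget);
        }
    }

    adapter3->UnregisterVideoMemoryBudgetChangeNotification(cookie);
    CloseHandle(budgetEvent);
}
```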

For all the titles we examined, every graphics quality and detail setting was set to its maximum value (ray tracing was not enabled), and the frame resolution was set to 4K, with no upscaling activated. This was to ensure that we saw the highest memory loads possible, under conditions that anyone could repeat.

A total of three runs were recorded for each game, each 10 minutes long; the game was also fully restarted between runs to ensure that the memory was properly flushed. Finally, the test system comprised an Intel Core i7-9700K, 16 GB of DDR4-3000, and an Nvidia GeForce RTX 4070 Ti, and all games were loaded from a 1 TB NVMe SSD on a PCIe 3.0 x4 bus.

The results below are the arithmetic means of the three runs, using the average and maximum local memory usage figures as recorded by PIX.
