Video memory is so fast, why doesn't the CPU just use it directly?

Take an RTX 3060 or a higher-end graphics card: it carries anywhere from a dozen to over twenty gigabytes of video memory running at effective data rates well past 10 Gbps. Put that next to the capacity and frequency of ordinary system memory and the comparison looks rather lopsided. If video memory is this powerful, why don't computers simply use it as main memory? That's today's topic.

In fact, today's GDDR video memory and DDR system memory share the same ancestry. The two branches parted ways around the DDR3 era, after which GDDR was updated rapidly alongside GPUs and drifted further and further from DDR. One thing worth knowing is that GDDR generation numbers carry a certain amount of "label inflation": GDDR4 and GDDR5, for instance, are essentially DDR3-generation technology, so GDDR is not really as many generations ahead of system memory as its name suggests.

In terms of hardware and external specifications, GDDR reworks the external interface to deliver more bandwidth with fewer chips. Each GDDR chip exposes a much wider interface, so a graphics card can build a bus hundreds of bits wide out of just eight to twelve chips, while an ordinary 64-bit memory channel is assembled from eight or sixteen narrow chips. GDDR also works hard on signaling and power per bit, which is what lets it pull far ahead of DDR and sprint to effective data rates above 10 Gbps.
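To put rough numbers on that gap: peak theoretical bandwidth is simply bus width times per-pin data rate. A minimal sketch of the arithmetic, using typical published specs purely as examples rather than measurements from this article:

#include <stdio.h>

/* Peak theoretical bandwidth in GB/s = bus width (bits) / 8 * per-pin
 * data rate (GT/s). The figures below are typical published specs,
 * used here only as illustrative inputs. */
static double peak_bandwidth_gbs(int bus_width_bits, double data_rate_gtps)
{
    return bus_width_bits / 8.0 * data_rate_gtps;
}

int main(void)
{
    /* RTX 3090-class card: 384-bit bus, 19.5 Gbps GDDR6X */
    printf("384-bit GDDR6X @ 19.5 Gbps: %.1f GB/s\n",
           peak_bandwidth_gbs(384, 19.5));   /* ~936 GB/s */

    /* RX 6800 XT-class card: 256-bit bus, 16 Gbps GDDR6 */
    printf("256-bit GDDR6  @ 16 Gbps:   %.1f GB/s\n",
           peak_bandwidth_gbs(256, 16.0));   /* ~512 GB/s */

    /* One desktop DDR4-3200 channel: 64-bit bus, 3.2 GT/s */
    printf("64-bit DDR4-3200:           %.1f GB/s\n",
           peak_bandwidth_gbs(64, 3.2));     /* ~25.6 GB/s */

    return 0;
}

Even a full dual-channel DDR4 setup lands around 50 GB/s, an order of magnitude below what a mid-to-high-end card gets from its handful of GDDR chips.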

But here's the thing: why hasn't GDDR pushed DDR out of the system-memory market? The problem is latency. As a component purpose-built to feed the GPU, GDDR is optimized for the way a GPU works, which means chasing raw bandwidth, and that comes with trade-offs: latency is sacrificed. How bad is video-memory latency? The overseas site Chips and Cheese has tested it.


Compare the AMD RX 6800 XT, with 256-bit 16 Gbps GDDR6, against the NVIDIA RTX 3090, with 384-bit 19.5 Gbps GDDR6X. The measured latencies are a bit scary: they start at more than 20 ns and, at large test sizes that spill out into the video memory itself, climb past 250 ns. Some readers may not have a feel for whether that is good or bad, so let's put it next to a CPU.
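For anyone curious how figures like these are produced: latency tests of this kind generally chase a randomly shuffled pointer chain through a buffer, so every load depends on the previous one and nothing can be prefetched. Below is a minimal CPU-side sketch of that pointer-chasing idea (Chips and Cheese's actual GPU tests run an equivalent kernel on the graphics card itself; the buffer size and step count here are arbitrary, and the timer is POSIX clock_gettime):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pointer-chasing latency test: each element stores the index of the next
 * element in a random cycle, so every load depends on the previous one and
 * the run time reflects round-trip memory latency, not bandwidth. */
int main(void)
{
    const size_t n = 64 * 1024 * 1024 / sizeof(size_t); /* 64 MiB buffer */
    const size_t steps = 50 * 1000 * 1000;

    size_t *chain = malloc(n * sizeof *chain);
    if (!chain) return 1;

    /* Build a random cycle: a Sattolo shuffle guarantees one big loop. */
    for (size_t i = 0; i < n; i++) chain[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = rand() % i;
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    volatile size_t idx = 0;
    for (size_t s = 0; s < steps; s++)
        idx = chain[idx];            /* serial, dependent loads */

    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg latency: %.1f ns per access\n", ns / steps);

    free(chain);
    return 0;
}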

Even next to the long-outdated i7-4770 with DDR3-1600, the memory latency of these flagship graphics cards looks unimpressive. That said, a GPU cares far more about steady, continuous, predictable data streams than about random accesses, so for its purposes the video memory is basically good enough. The RX 6000 series' Infinity Cache is precisely the remedy for this latency; NVIDIA GPUs have on-die cache too, and the RTX 30 series' cache latency is actually better, but its capacity is clearly too small, which is why it loses to the AMD card with the lower memory frequency and narrower bus in the comparison above.
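The streaming-versus-random distinction is easy to feel on any PC: summing the same buffer in order (the prefetcher-friendly, "GPU-style" pattern) is usually several times faster than summing it in shuffled order, because random access keeps paying the full latency. A minimal sketch, with arbitrary sizes:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sum the same buffer two ways: sequentially (prefetcher- and DRAM-friendly,
 * like a GPU streaming workload) and in shuffled order (latency-bound,
 * closer to the random working data a general-purpose CPU often handles). */

static double seconds(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    const size_t n = 32 * 1024 * 1024;            /* 32M ints = 128 MiB */
    int *data = malloc(n * sizeof *data);
    size_t *order = malloc(n * sizeof *order);
    if (!data || !order) return 1;

    for (size_t i = 0; i < n; i++) { data[i] = 1; order[i] = i; }
    for (size_t i = n - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
        size_t j = rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }

    struct timespec t0, t1, t2;
    long long sum1 = 0, sum2 = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < n; i++) sum1 += data[i];          /* sequential */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    for (size_t i = 0; i < n; i++) sum2 += data[order[i]];   /* shuffled */
    clock_gettime(CLOCK_MONOTONIC, &t2);

    printf("sequential: %.3f s, shuffled: %.3f s (sums %lld/%lld)\n",
           seconds(t0, t1), seconds(t1, t2), sum1, sum2);

    free(data);
    free(order);
    return 0;
}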

So dropping video-memory chips straight into today's mainstream PCs is very problematic. Wait, why did this writer pile on so many qualifiers? Because there are machines that do hang GDDR directly off the CPU, such as the Xbox Series consoles and the PS5 on sale right now. They get away with it because their main job is graphics: the CPU plays a supporting role, and it doesn't juggle the kind of random, scattered working data that a PC's CPU does.

As for the future, judging by how the RX 6000 series performs, using GDDR-class memory chips as system memory would be "promising" if CPUs could add a few hundred megabytes of cache. The trouble is that no CPU generation we've seen so far carries that kind of design; the 3D V-Cache Ryzen parts are only a few steps down that road, and their cache capacity is still smaller than the high-end RX 6000's, which is clearly not enough.
