This phenomenon mainly appears on Ryzen APUs under a dual stress test (CPU and GPU loaded simultaneously).
The usual explanation points at the Infinity Fabric (IF) bus and the CCX architecture: under dual full load, the combined data demand of the CPU and GPU (mostly the GPU) saturates the IF bus, so the bus gets congested.
The IF bus and CCX architecture are essentially products of the "stack cores cheaply" philosophy, and they are a near-perfect solution for pure CPU products. The layout is as follows: the four cores inside a CCX are directly interconnected, communication between CCXs goes over the IF bus, and the memory controller also hangs off the IF bus. AMD's figures put the current IF bus bandwidth at about 92 GB/s, while typical consumer memory configurations deliver around 50 GB/s.
In fact, CPU demand for bandwidth is not high: even the most high-end AMD mainstream part, the 3950X, shows no obvious bottleneck on the roughly 50 GB/s of dual-channel DDR4. The bad part is that AMD never designed a separate bus for its APU products, and instead attached the GPU directly to the IF bus.
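The ~50 GB/s figure for dual-channel DDR4 can be checked with simple arithmetic; a minimal sketch, assuming DDR4-3200 (the speed is my assumption for illustration, not stated in the post):

```python
# Theoretical peak dual-channel DDR4 bandwidth.
# Each channel is 64 bits (8 bytes) wide; bandwidth = transfers/s * bytes * channels.
def ddr_bandwidth_gbs(transfer_rate_mts, channels=2, bus_width_bits=64):
    """Peak bandwidth in GB/s for the given memory transfer rate (MT/s)."""
    bytes_per_transfer = bus_width_bits // 8
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

print(ddr_bandwidth_gbs(3200))  # 51.2 -> matches the "about 50G" figure
```

Real-world sustained bandwidth is lower than this theoretical peak, which is why the text rounds down to "about 50G".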
Players with a bit of hardware knowledge know that a GPU's thirst for bandwidth is essentially bottomless: even the old 750 Ti has around 90 GB/s of video memory bandwidth. Under a stress test, the APU's GPU alone can easily eat up the ~50 GB/s IF bus, and with the CPU's data demand added on top, it becomes unbearable.
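The 750 Ti figure follows from the same kind of arithmetic; a sketch assuming its stock configuration (128-bit GDDR5 bus at 5.4 GT/s effective, which are the card's published specs, not numbers from the post):

```python
# Theoretical peak GDDR5 bandwidth: effective transfers/s * bus width in bytes.
def vram_bandwidth_gbs(effective_rate_gts, bus_width_bits):
    """Peak video memory bandwidth in GB/s."""
    return effective_rate_gts * (bus_width_bits / 8)

print(vram_bandwidth_gbs(5.4, 128))  # 86.4 -> the "around 90G" in the text
```

So even a low-end discrete card from 2014 has nearly double the bandwidth of the entire IF bus that the APU's GPU must share with the CPU.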
Intel's side is different: consumer Core CPUs use a ring bus that directly connects all cores, the iGPU, and the memory controller. This structure carries more traffic than a single CCX module, so Intel pushed ring bus bandwidth above 100 GB/s; combined with a weak iGPU whose bandwidth demand is low, there is no traffic jam and no stutter.
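The >100 GB/s ring bus figure is consistent with the commonly cited 32 bytes transferred per cycle per ring stop on Intel's ring interconnect; a rough sketch, where the 3.5 GHz clock is my illustrative assumption:

```python
# Rough per-stop ring bus bandwidth: 32 bytes per cycle at the ring clock.
def ring_bandwidth_gbs(clock_ghz, bytes_per_cycle=32):
    """Approximate ring interconnect bandwidth in GB/s at a given clock."""
    return clock_ghz * bytes_per_cycle

print(ring_bandwidth_gbs(3.5))  # 112.0 -> comfortably above 100 GB/s
```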
On the other hand, an AIDA64+FurMark dual stress test is far more extreme than any normal game load, and APUs sit only in the low-to-mid range of AMD's product line. It is unrealistic to expect AMD to design a dedicated bus just for this one product segment.