SFF.Network AMD Ryzen announced, detailed and available for pre-order TODAY!

Ever since AMD first announced the work that would later be branded as Ryzen, the company has been strategically and masterfully orchestrating a narrative of dramatic change and disruption to the staid status quo of consumer and enthusiast-grade processors. Today, that build-up reaches its crescendo, as AMD reveals its top-performing Ryzen AM4 CPUs.

Read more here.
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
I'm a bit surprised that R7 performance at 1080p has led a lot of folks to say that the gaming performance is not great, rather than to say that the 1080p gaming performance is not great.

Testing at 1080p is valid insofar as it shifts the bottleneck to the CPU, and in particular to per-core performance. It's valuable context. But realistically, you shouldn't buy an eight-core CPU if you're gaming at 1080p anyway - and at higher resolutions the Intel advantage mostly falls away. So for the sorts of builds that Ryzen 7 makes sense for, you're not suffering from the weaker single-core perf.

I want to see a test like this but using the highest-clocked memory that each processor will support. Who uses 2400MHz DDR4 in their i7?

Ryzen is demonstrably and architecturally very sensitive to memory performance. My sense is that this is less the case across Intel's whole line-up.
 

K888D

SFF Guru
Lazer3D
Feb 23, 2016
1,483
2,970
www.lazer3d.com
When the CPU is the bottleneck in gaming (e.g. at 1080p), Ryzen does not perform as well, because games prefer faster single-threaded performance; but when the GPU is the bottleneck, Ryzen can keep up. So with current tech and software, Ryzen is a valid choice for gaming above 1080p, as long as you will be making use of Ryzen's strengths in other areas - otherwise you may as well save a couple hundred bob and get an i5 or i7 for the same fps performance with less power usage, lower temps and a quieter system.

But what happens when GPUs move on with the next generation in a year or two? If the GPU is no longer the bottleneck at 2160p, will Ryzen's more powerful multithreaded performance still be able to keep up versus faster single-threaded cores?

Perhaps games will start being coded to make use of more threads and Ryzen will be the better choice, who knows?!
 
  • Like
Reactions: EdZ and Kwirek

MarcParis

Spatial Philosopher
Apr 1, 2016
3,669
2,784
https://www.techpowerup.com/231401/you-really-shouldnt-delidd-amds-ryzen-7-cpus

So basically Der8auer only achieved 1°C and 3°C lower values for the max core temperature and average temperature respectively, after delidding, cleaning and applying Thermal Grizzly Conductonaut. Basically not worth it, especially considering the socket has a protrusion that's half a millimeter taller than a bare die, so most coolers will not make good contact.
That was expected, as the CPU is soldered to the heatspreader on Ryzen (at least for now). :)

Only LGA 115x CPUs use thermal paste between the die and the IHS... and that's where delidding is useful. :D
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
When the CPU is the bottleneck in gaming (e.g. at 1080p), Ryzen does not perform as well, because games prefer faster single-threaded performance; but when the GPU is the bottleneck, Ryzen can keep up. So with current tech and software, Ryzen is a valid choice for gaming above 1080p, as long as you will be making use of Ryzen's strengths in other areas - otherwise you may as well save a couple hundred bob and get an i5 or i7 for the same fps performance with less power usage, lower temps and a quieter system.

This is the best way I've seen it put anywhere. I wish this was how people were describing it.
 
  • Like
Reactions: EdZ

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,943
4,952
That was expected, as the CPU is soldered to the heatspreader on Ryzen (at least for now). :)

Only LGA 115x CPUs use thermal paste between the die and the IHS... and that's where delidding is useful. :D
I was expecting more of a difference between the original setup and bare-die, honestly. Not that I have a frame of reference though - I don't know how well Intel's soldered HEDT CPUs perform in this area.
 

MarcParis

Spatial Philosopher
Apr 1, 2016
3,669
2,784
Well, Intel X99 CPUs are soldered too, and work fine... delidding is only useful for LN2 overclocking on Broadwell-E. :)
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Using PCIe lanes as inter-chip interconnects (branded 'Infinity Fabric', much as Intel uses an effective PCIe x4 link branded as 'DMI') is going to give it some really bizarre performance characteristics on workloads that span more than a single die's directly accessible memory. A single Naples package might need to be treated a bit like a quad-socket board.
To add to this, PCPer have done some interesting testing showing the latency impact of inter-CCX vs. intra-CCX core-to-core communication within Ryzen (and presumably within Naples, as that also uses the same Infinity Fabric interconnect within the package).
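For anyone wondering how that sort of number gets measured, a core-to-core 'ping-pong' test is the usual approach: pin two threads to two logical processors and time how long a flag takes to bounce between them. Below is a rough sketch of my own (not PCPer's actual methodology); the choice of logical processors 0 and 4 is an assumption, and whether those land on the same CCX or on different CCXes depends on how Windows enumerates cores and SMT siblings on a given system.

```c
/* Rough core-to-core latency sketch (Windows). Two threads are pinned to two
 * logical processors and bounce a flag back and forth; the average round trip
 * approximates core-to-core latency. NOTE: logical processors 0 and 4 are an
 * assumption - they may or may not sit on different CCXes on your system. */
#include <windows.h>
#include <stdio.h>

#define ITERATIONS 1000000
static volatile LONG flag = 0;          /* 0 = ping's turn, 1 = pong's turn */

static DWORD WINAPI pong(LPVOID param)
{
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)param);
    for (long i = 0; i < ITERATIONS; i++) {
        while (InterlockedCompareExchange(&flag, 0, 1) != 1)
            ;                           /* wait for ping, then reply */
    }
    return 0;
}

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    /* Pin the main thread to logical processor 0, the helper to 4 (assumed). */
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1);
    HANDLE h = CreateThread(NULL, 0, pong, (LPVOID)(DWORD_PTR)(1ULL << 4), 0, NULL);

    QueryPerformanceCounter(&start);
    for (long i = 0; i < ITERATIONS; i++) {
        while (InterlockedCompareExchange(&flag, 1, 0) != 0)
            ;                           /* send ping, wait for pong's reply */
    }
    QueryPerformanceCounter(&end);
    WaitForSingleObject(h, INFINITE);

    double ns = (double)(end.QuadPart - start.QuadPart) * 1e9 / freq.QuadPart;
    printf("avg round trip: %.1f ns\n", ns / ITERATIONS);
    return 0;
}
```

Run it twice - once with both threads on the same CCX and once with them on different CCXes - and the gap between the two averages is essentially the Infinity Fabric penalty the reviews are talking about.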
Perhaps games will start being coded to make use of more threads and Ryzen will be the better choice, who knows?!
That is AMD's hope. The problem is getting developers on board: it didn't work for Intel with Itanium, and it didn't work for AMD with HSA. On the one hand, splitting a workload into more threads isn't quite as hard as switching to a new instruction set, or threading to the 'embarrassingly parallel' degree HSA demands; but on the other, we've had a decade of quad-core CPUs in general availability and nearly as long with 8-thread-capable CPUs (either 8-core or 4-core with SMT), and games have stubbornly remained utilising 2/3 cores for the most part. Some workloads just can't be split effectively. We've also seen in the smartphone industry an accelerated development from single-core SoCs, to dual-core, to quad-core, to six-core and eight-core, and the 'happy medium' most have settled on once heterogeneous cores were added to the mix is a pair of 'big' cores for high-performance tasks and a pair of smaller cores for background tasks. I believe we may see desktop processors trending this way too: we already have a similar non-silicon implementation in 'turbo boost', with one or two cores clocking higher and the others clocking down, to make use of the available power budget. And as process technology continues to shrink, the power budget becomes more and more of the physical limit on performance (i.e. you can only shove so many electrons through a given die before Bad Things occur at the atomic level).
If I were to bet on it, I'd bet that in a few generations' time (maybe after Ice Lake, or maybe with one more iteration on the Core microarchitecture to wring out the last bit of value) we'll see a CPU with a mix of two to four 'big' cores, and the GPU removed and replaced with a Xeon Phi-derived cluster of small x86 cores, the ghost of Larrabee walking again.
 
  • Like
Reactions: K888D and Phuncz

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,943
4,952
To add to this, PCPer have done some interesting testing showing the latency impact of inter-CCX vs. intra-CCX core-to-core communication within Ryzen (and presumably within Naples, as that also uses the same Infinity Fabric interconnect within the package).
Interesting indeed. I wonder whether the theorised Windows 10 scheduling issues were a real problem before a certain point in time - Microsoft might have pushed an update recently, and maybe not all reviewers received it during testing. Still, it would seem software developers need to take care, when they use 5 to 8 threads, to keep the related threads consolidated within a single CCX as much as possible. But this would still cost performance if communication between the threads is latency-sensitive.
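As a concrete illustration of what that consolidation could look like, here's a minimal sketch of my own (nothing AMD or Microsoft actually ship) that simply restricts a process to the logical processors of one CCX via the Windows affinity API. The 0xFF mask assumes the first CCX's four cores and their SMT siblings show up as logical processors 0-7, which may not hold on every system, so it would need checking against the actual topology first.

```c
/* Minimal sketch: restrict the current process to the logical processors of
 * one CCX so cooperating threads never pay the cross-CCX latency penalty.
 * ASSUMPTION: logical processors 0-7 (mask 0xFF) are the first CCX's four
 * cores plus their SMT siblings - verify against your own topology. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR processMask, systemMask;
    GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask);
    printf("system mask: 0x%llx\n", (unsigned long long)systemMask);

    /* Keep this process (and every thread it spawns) on logical CPUs 0-7. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0xFF))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    /* ... launch the latency-sensitive worker threads from here ... */
    return 0;
}
```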
 

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
To add to this, PCPer have done some interesting testing showing the latency impact of inter-CCX vs. intra-CCX core-to-core communication within Ryzen (and presumably within Naples, as that also uses the same Infinity Fabric interconnect within the package).

That is AMD's hope. The problem is getting developers on board: it didn't work for Intel with Itanium, and it didn't work for AMD with HSA. On the one hand, splitting a workload into more threads isn't quite as hard as switching to a new instruction set, or threading to the 'embarrassingly parallel' degree HSA demands; but on the other, we've had a decade of quad-core CPUs in general availability and nearly as long with 8-thread-capable CPUs (either 8-core or 4-core with SMT), and games have stubbornly remained utilising 2/3 cores for the most part. Some workloads just can't be split effectively. We've also seen in the smartphone industry an accelerated development from single-core SoCs, to dual-core, to quad-core, to six-core and eight-core, and the 'happy medium' most have settled on once heterogeneous cores were added to the mix is a pair of 'big' cores for high-performance tasks and a pair of smaller cores for background tasks. I believe we may see desktop processors trending this way too: we already have a similar non-silicon implementation in 'turbo boost', with one or two cores clocking higher and the others clocking down, to make use of the available power budget. And as process technology continues to shrink, the power budget becomes more and more of the physical limit on performance (i.e. you can only shove so many electrons through a given die before Bad Things occur at the atomic level).
If I were to bet on it, I'd bet that in a few generations' time (maybe after Ice Lake, or maybe with one more iteration on the Core microarchitecture to wring out the last bit of value) we'll see a CPU with a mix of two to four 'big' cores, and the GPU removed and replaced with a Xeon Phi-derived cluster of small x86 cores, the ghost of Larrabee walking again.
Back when Zen was first announced, AMD also announced they were working on an ARM design as well, and there were rumors at the time that AMD might try to pursue a heterogeneous CPU that mixes x86 and ARM cores to handle different workloads.

I don't really know what happened with that, other than the ARM stuff got pushed back in favor of Zen, but with the push by Microsoft to support ARM and x86 on the same software platform, it does seem to be more viable now.

As for GPU cores, I'm hoping that as more devs switch to DX12 we might see more stuff written to utilize the iGPU, and with more CPUs having GPUs on them I'd also like to see GPGPU become more common.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
I don't think adding ARM cores is going to work, at least not for the desktop, for the same reason that GPGPU remains something that either happens on a dedicated high-performance card or not at all: you need to write code to specifically take advantage of it, and that's only worth it if you get a big payoff for doing so. It would prevent any program from being able to transition between a 'low' and a 'high' performance core, due to the two having different instruction sets. Using small x86 cores has the advantage of - for the most part - already being compatible with existing programs. Just taking an existing thread and shoving it onto a slow x86 core isn't the most elegant or efficient route, but it's one that can be retrofitted to existing applications.
 

|||

King of Cable Management
Sep 26, 2015
775
759
They went back to x86 but still use ARM for a secure processor within the processor.



You won't be able to use the ARM processor for other purposes.
 
  • Like
Reactions: Ceros_X and Phuncz

MarcParis

Spatial Philosopher
Apr 1, 2016
3,669
2,784
Very interesting video explaining 2 points:
  • Windows 10 knows to prioritise Ryzen's physical cores first... so no issue with SMT
  • Communication between CCXes on Ryzen takes a really, really long time, and only a program written with that in mind can work around it
 
  • Like
Reactions: EdZ

MarcParis

Spatial Philosopher
Apr 1, 2016
3,669
2,784
http://www.hardware.fr/articles/956-24/retour-sous-systeme-memoire-suite.html
Here is another article (in French) with a deeper analysis of the CCX architecture.

Similar conclusion: a Windows scheduler update won't improve things that much (even Game Mode, which could also have other negative effects).

Strangely, I feel a bit dazed by all these analyses, which demonstrate that Intel's cache and core management is easier to work with.

The good point is that by the time I receive my Cerberus-X, maybe X299 will be released. :D (I hope not. :))
 
  • Like
Reactions: EdZ

MarcParis

Spatial Philosopher
Apr 1, 2016
3,669
2,784
http://www.hardware.fr/articles/956-8/retour-smt-mode-high-performance.html
This time, it's a Windows 10 update that is required. AMD's SMT is truly good; the issue is with the "core parking" function of Windows 10. AMD communicated this badly by stating it's mandatory to use the "High performance" power plan instead of "Balanced". Why? Mainly because "High performance" mode disables core parking. (But "High performance" has downsides, like disabling Cool'n'Quiet, so the CPU always stays at its base clock, i.e. 3.7 GHz. :()

The issue on Windows 10 is simple:
  • by default on AMD Ryzen, in Balanced mode, core parking is set to reduce activity on up to 90% of the threads :)
  • by default on X99 CPUs, in Balanced mode, core parking is set to reduce activity on 0% of the threads :)
To disable core parking on Windows 10, you can just use this software: Park Control
The results are pretty direct and easy to read. :D (AMD's SMT is working fine; core parking is horrible. :))
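For what it's worth, my understanding is that the same tweak Park Control applies can also be done with Windows' built-in powercfg tool, by raising the "core parking min cores" value of the active plan to 100%. The CPMINCORES alias below is how I believe powercfg exposes that setting - worth checking with "powercfg /aliases" on your own machine before relying on it:

```
:: Sketch: raise "Processor performance core parking min cores" to 100%
:: on the active power plan (AC and DC), then re-apply the plan.
:: CPMINCORES is assumed to be the alias for that setting on your system.
powercfg -setacvalueindex scheme_current sub_processor CPMINCORES 100
powercfg -setdcvalueindex scheme_current sub_processor CPMINCORES 100
powercfg -setactive scheme_current
```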


Thanks to hardware.fr for going really deep in their analysis.
 

MarcParis

Spatial Philosopher
Apr 1, 2016
3,669
2,784
Another interesting point about Ryzen 7's AMD TDP (or something like it).
http://www.hardware.fr/articles/956-10/consommation-efficacite-energetique-tdp.html
To make it short:
  • AMD and Intel TDPs are defined differently; Intel's is more reliable relative to real-world power draw
  • Expressed as Intel-style TDP, here are the R7 values:
    • R7 1800X/1700X: 128W Intel-style TDP (95W AMD TDP)
    • R7 1700: 90W Intel-style TDP (65W AMD TDP)
This would clearly be a sticking point for mini-ITX builds, where CPUs with an Intel TDP of 65W or lower are the best fit.
 
Last edited:

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
Wasn't core parking also an issue when Bulldozer first came out?
I thought they would have learned from last time.
 

MarcParis

Spatial Philosopher
Apr 1, 2016
3,669
2,784
Wasn't core parking also an issue when Bulldozer first came out?
I thought they would have learned from last time.
Nope - on Bulldozer, the issue was that Windows didn't know whether it was scheduling onto a real core or a thread. Ryzen and Windows 10 don't have this issue: Windows 10 puts priority on physical cores first... but on Ryzen, Windows or the program now also needs to place the CPU workload with the CCXes in mind, to avoid high cache latency.

Core parking is a different function, which tries to avoid waking up cores that are in a sleep state.
 

alexep7

Cable-Tie Ninja
Jan 30, 2017
184
139
Another interesting point about Ryzen 7's AMD TDP (or something like it).
http://www.hardware.fr/articles/956-10/consommation-efficacite-energetique-tdp.html
To make it short:
  • AMD and Intel TDPs are defined differently; Intel's is more reliable relative to real-world power draw
  • Expressed as Intel-style TDP, here are the R7 values:
    • R7 1800X/1700X: 128W Intel-style TDP (95W AMD TDP)
    • R7 1700: 90W Intel-style TDP (65W AMD TDP)
This would clearly be a sticking point for mini-ITX builds, where CPUs with an Intel TDP of 65W or lower are the best fit.
That has always been the case; I was hoping it would be different this time... oh well :( I hope the Raven Ridge 65W APUs aren't that bad.