Ah yeah... tech YouTubers (I can barely tolerate more than 2-3 of them).
He only looks at games released in 2020, not at previous years. And with consoles now sporting RT hardware, upcoming games will inevitably ship with a lot more ray tracing support on PC too. I doubt we'll see many games going full path tracing à la Minecraft unless Nvidia partners with a dev, but it'll still be better than before. Also, DLSS is not needed for everything; I mean, at what point do you stop caring about pure rasterization performance? When you need a 360Hz monitor to run Doom Eternal? DLSS is very much needed in RT-intensive games, because it makes the difference between unplayable and playable.
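To put rough numbers on that last point, here's a back-of-the-envelope sketch. It assumes RT cost scales roughly linearly with pixel count and ignores the fixed upscaling overhead, so the baseline fps and the estimates are illustrative, not measured data; only the DLSS internal resolutions (1440p for Quality, 1080p for Performance at 4K output) are commonly cited figures.

```python
# Back-of-the-envelope: why rendering at a lower internal resolution and
# upscaling can turn an "unplayable" RT frame rate into a playable one.
# Assumes cost scales ~linearly with pixel count; baseline fps is hypothetical.

def pixels(w, h):
    return w * h

native_4k = pixels(3840, 2160)

# Commonly cited DLSS internal render resolutions for a 4K output.
modes = {
    "Native 4K":                pixels(3840, 2160),
    "DLSS Quality (1440p)":     pixels(2560, 1440),
    "DLSS Performance (1080p)": pixels(1920, 1080),
}

baseline_fps = 30.0  # hypothetical native-4K frame rate in an RT-heavy scene

for name, px in modes.items():
    est_fps = baseline_fps * native_4k / px
    print(f"{name:28s} ~{est_fps:.0f} fps (estimated)")
```

In practice the gain is smaller than this, since not every part of the frame scales with pixel count, but it shows why RT-heavy games lean on it so hard.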
So the narrative that keeps coming up about getting ready for the new generation of console games is:
"10GB won't cut it, 16GB is safe", but also
"Look at previous years of RT support in games, it ain't worth it"?
Which... is kind of dumb. To begin with, consoles are moving away from brute-forcing VRAM usage like previous generations, thanks to hardware decompression/DirectStorage and fast SSDs. Rather than keeping something like 30 seconds' worth of idle data sitting in memory waiting to be used in the level, you can literally keep only the 1 or 2 seconds of assets required to stream what players actually see. As Mark Cerny showed in the PS5 slides, the SSD becomes an extension of memory while VRAM acts almost like a buffer, holding only immediate assets and keeping idle data to a minimum.
The only way 10GB is a problem is if devs dump a stupid amount of assets into memory to sit idle without any IO/streaming logic, and that would tank console performance anyway, so yeah..
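To put a number on the "seconds of assets" idea, here's a minimal sketch. The 0.5 GB/s asset-consumption rate is an assumption I picked for illustration, not a figure from any console spec; the point is only how the resident set scales with the look-ahead window.

```python
# Illustrative VRAM-budget math for streamed assets: instead of parking
# ~30 seconds worth of level data in VRAM "just in case", a fast SSD +
# decompression path lets you keep only what the next couple of seconds
# of gameplay can actually reach. The consumption rate below is assumed.

def resident_set_gb(asset_rate_gb_per_s, lookahead_s):
    """VRAM needed to cover `lookahead_s` seconds of asset streaming."""
    return asset_rate_gb_per_s * lookahead_s

asset_rate = 0.5  # GB of unique assets the player can reach per second (assumed)

old_style = resident_set_gb(asset_rate, 30)  # pre-load half a minute ahead
streamed  = resident_set_gb(asset_rate, 2)   # keep ~2 s ahead, refill from SSD

print(f"~30 s of idle assets resident: {old_style:.1f} GB")
print(f"~2 s look-ahead buffer:        {streamed:.1f} GB")
```

Same content on disk, wildly different VRAM footprint, which is the whole argument for streaming over brute-forcing memory size.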
Also, TechPowerUp found that even at 1080p and 1440p the 3080 is still ahead on average.
So I'm here scratching my head about all these claims that the 3080 is only for 4K..
There's also a big paradigm shift in consoles, probably even bigger than ray tracing, and it's machine learning. The thing is, as of now there's no way to truly benchmark these features until we start seeing DirectML used in games. Here's a good article on the subject.
"I have a unique perspective on the recent nVidia launch, as someone who spends 40 hours a week immersed in Artificial Intelligence research with accelerated computing, and what I'm seeing is that it hasn't yet dawned on technology reporters just how much the situation is fundamentally changing with..."
— technoodle.tv
"If I saw a competing card released tomorrow which heavily outperformed the GeForce 3080 in current-gen games, it would actually set my alarm bells ringing, because it would mean that the competing GPU's silicon has been over-allocated to serving yesterday's needs."
Nvidia went all in on tensor cores for Ampere. They have a lower RT core count than Turing while tripling the Tensor OPs. They did not sacrifice that amount of silicon area just for DLSS 2.0. The question now is: what is Nvidia planning for tensor cores next year?
AI texture upscaling? AI texture compression? AI physics? Or a brand new upscaler?
I mean, who knows. But all these YouTubers aren't engaging with the discussion of what upcoming console games will require in terms of RT and machine learning support, just VRAM and rasterization performance in old games..