
GPU Geforce 20 series (RTX) discussion thread (E: 2070 Review embargo lifted!)

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Adoption may be quite a bit faster than that. Hardware T&L took off extremely rapidly, as did the introduction of pixel & vertex shaders, and the switch to unified shaders.
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
I'm not expecting 7nm (or 10nm or any similarly scaled process) to offer any dramatic performance gains - if any at all - same as with the last few node transitions, or any real price gains, due to dramatically increased process complexity (both SAQP and EUV have yet to be used in production, and everyone is having problems getting yields up on large dies). Future gains will come from process maturity and design improvements; the gains from pure "fab the same thing but smaller" shrinks vanished the better part of a decade ago.

That's true for CPUs because they're limited by max single-core clocks, but GPU architectures and workloads are highly parallel and benefit almost linearly from a die shrink. Pascal went from 28nm down to 16nm and you saw the largest jump in performance in a long time; the 1070 was on par with the 980 Ti. This time they're only going from 16nm to 12nm -- and it's actually kind of bullshit, they're almost the same node -- so the only gains you're seeing are from the increased die size -- more CUDA cores -- and dedicated hardware acceleration for RT and AI. I think 7nm will be similar to Maxwell -> Pascal, i.e. a 3080 Ti that will be at least 50% better than the 2080 Ti. I could be wrong...
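To put some rough numbers on where Turing's gains actually come from, here's a back-of-the-envelope sketch using the commonly quoted die sizes and core counts for GP102 vs the cut-down TU102 (ballpark figures, not gospel):

```python
# Commonly quoted specs for the 1080 Ti (GP102) and 2080 Ti (cut-down TU102).
# Ballpark figures for illustration only.
gp102 = {"cuda_cores": 3584, "die_mm2": 471}
tu102 = {"cuda_cores": 4352, "die_mm2": 754}

core_gain = tu102["cuda_cores"] / gp102["cuda_cores"] - 1   # ~+21% shaders
area_gain = tu102["die_mm2"] / gp102["die_mm2"] - 1         # ~+60% silicon
density = lambda chip: chip["cuda_cores"] / chip["die_mm2"]
density_change = density(tu102) / density(gp102) - 1        # negative: RT/Tensor cores eat area

print(f"CUDA cores {core_gain:+.0%}, die area {area_gain:+.0%}, "
      f"cores per mm^2 {density_change:+.0%}")
# Most of the shading uplift comes from a much bigger die, not a denser process.
```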

The ray tracing tech seems really promising, but for me it's hard to justify spending that kind of money just for that, considering the 1080 I have is still serving me well. I'm interested in the performance gains in non-RTX scenarios, as they weren't really addressed in the presentation.

Suspicious, wasn't it? My guess is they didn't show that because it's very underwhelming for the price they're asking. Probably around 15%, at most 20%, in non-RT tasks. Nvidia's whole marketing strategy with this launch was to hype up RT as much as possible to justify the insane price tag and the modest gains in shading due to the lack of a die shrink or CUDA IPC improvements. I'd keep the 1080 until the next generation.

Honestly guys, I just watched all the RT game demos and they are pretty underwhelming. Yeah, the new lighting is cool, but it's far from some insane leap toward photorealism. The textures are still blatantly CGI; I'd be much more impressed by improvements in texture quality than lighting. I'm still happy about it, because the future will undoubtedly be 100% RT, but right now it's far from being a quantum leap in graphics. The most impressive demos they showed -- the one with the Iron Man-like dude and the Star Wars one -- both featured characters and environments made up of highly reflective surfaces, with incredibly high-quality textures. The games looked much, much worse...

Man, I typed a lot. I'm still excited for this launch despite all the issues.
 

Aichon

Average Stuffer
Oct 16, 2017
85
232
Adoption may be quite a bit faster than that. Hardware T&L took off extremely rapidly, as did the introduction of pixel & vertex shaders, and the switch to unified shaders.
Oh, yeah, that's entirely possible. Even so, let me move the goalposts a bit by asking how well the cards that debuted each of those technologies held up compared to the cards that followed them? If ray tracing adoption takes off like a wildfire (which is something I earnestly hope for!), how will the 20-series' ray tracing performance stack up in a few years? How long will it remain relevant?

With any technology this new, there's a lot of low-hanging fruit to be had in terms of performance gains. The 20-series is a 1000% improvement over the 10-series when it comes to ray tracing. I don't expect the same gains from the 21-series, but I do expect ray tracing to significantly outpace the 20-60% gains we've grown used to seeing over the last few generations. People buying a 20-series (which admittedly may yet include me) on the basis of its ray tracing performance will be paying a steep early adopter tax on the bet that ray tracing will take off quickly enough for them to enjoy its benefits before the ray tracing performance of the 20-series is rendered obsolete by the rapid improvements we'll undoubtedly see in its successors.

Again, I'm looking at this through the lens of someone who bides his time and only upgrades once in a blue moon, so that colors my perspective rather dramatically. What I want is something that will hold up well. Coming at it from that angle, even if ray tracing takes off, someone like me may be better off just ignoring the 20-series' ray tracing capabilities altogether due to the high likelihood that they'll have a short useful lifespan before becoming irrelevant. At which point, all I really have left to consider is raster performance.

Of course, I wouldn't advise this line of thinking as any sort of general guidance, nor am I trying to convince anyone else of anything. The 20-series makes a lot of sense for plenty of people. I'm just working through whether I'm one of those people.
 

Biowarejak

Maker of Awesome | User 1615
Platinum Supporter
Mar 6, 2017
1,744
2,262
Right now I'm curious how the RT stuff plays with G-SYNC, seeing as such swings in FPS would be very rough without it. If they were anticipating this launch, and of course they were, maybe it's why the G-SYNC modules seemed so overbuilt.

Anyway, fair points everyone. Back to the shroud: I'm far more a fan of this look compared to the previous one. It reminds me of the 900-series stuff that EVGA put out and has since moved away from. Definitely a bummer we'll have to rely on AIB partners to create worthwhile blower designs, though.

Back to performance: as far as I'm aware, the thing to remember about process nodes is that die area scales with the square of the shrink, so even a little leap in feature size can be a big one in transistor budget. It's tripped Intel up for years now.
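A quick idealized calculation (marketing node names no longer map cleanly onto real feature sizes, so treat it as illustrative):

```python
# Idealized scaling: shrink linear features by s and area shrinks by s^2, so the
# same die area holds roughly (old/new)^2 as many transistors.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

for old, new in [(28, 16), (16, 12), (16, 7)]:
    print(f"{old}nm -> {new}nm: ~{ideal_density_gain(old, new):.1f}x ideal density")
# 28 -> 16nm: ~3.1x; 16 -> 12nm: ~1.8x ideal (far less in practice, since 12FFN is
# basically a tweaked 16FF); 16 -> 7nm: ~5.2x -- hence even a "small" jump is big.
```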

Having the highest end of this lineup be so CUDA-heavy is also a really good selling point to anyone who is sitting on less than a thousand of them and likes to render stuff. So there's that too :p
 
  • Like
Reactions: loader963

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
This is a bit random, but I don't get these 2.75-slot cards. If you're gonna make it 3-slot, make it 3-slot. It's not like I can use the remaining 0.25 slot. And they should all have 3 actual PCIe brackets to help keep those massive cards from sagging. A proper 3-slot card would work great in the NCase M1, almost like having its own Accelero. As long as it's not too long, of course.
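For what it's worth, the slot math backs that up -- a quick sketch assuming the standard ~20.32 mm expansion-slot pitch:

```python
import math

SLOT_PITCH_MM = 20.32  # standard expansion-slot spacing (0.8")

def slots_blocked(cooler_slots: float) -> int:
    # Anything that spills past a slot boundary makes the whole next slot unusable.
    return math.ceil(cooler_slots)

for width in (2.0, 2.5, 2.75, 3.0):
    print(f"{width}-slot cooler = {width * SLOT_PITCH_MM:.1f} mm, "
          f"blocks {slots_blocked(width)} slots")
# A 2.75-slot cooler (~55.9 mm) still blocks three slots, same as a full 3-slot card.
```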

I also wonder if the FE will have 0 RPM mode. I'll be disappointed if it doesn't, I hate fans spinning uselessly (unless it's for dust control).
 
Last edited:

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
https://www.anandtech.com/show/13261/hands-on-with-the-geforce-rtx-2080-ti-realtime-raytracing

It's pretty much a shit show. They ran BFV at 1080p on a 2080 Ti to show off RT. RT is a huge resource drain.

Yeah... The more I read the more I lean toward canceling my 2080 Ti preorder. It just doesn't seem worth it all things considered. At least Nvidia lets you cancel before the item is shipped, so I have until Sept 20th to decide. Hopefully we'll get more benchmarks by then.
 

Thehack

Spatial Philosopher
Creator
Mar 6, 2016
2,813
3,670
J-hackcompany.com
Yeah... The more I read the more I lean toward canceling my 2080 Ti preorder. It just doesn't seem worth it all things considered. At least Nvidia lets you cancel before the item is shipped, so I have until Sept 20th to decide. Hopefully we'll get more benchmarks by then.

RT is the future but there's a reason why game engines "cheat."

I would say a better use of all this power is voxel-based illumination, which computes lighting by voxelizing the scene and giving each voxel a "light" value. It's easier to implement and much more scalable.

https://developer.nvidia.com/vxgi
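If it helps, here's a heavily simplified sketch of the idea -- voxelize the lit surfaces, then gather from nearby voxels for indirect light. Toy Python, nothing like a production VXGI implementation (which cone-traces a mipmapped voxel grid), and all the names are made up for illustration:

```python
import numpy as np

GRID = 32  # voxel grid resolution; real engines use clipmaps centered on the camera

def inject_direct_light(surfels, bounds_min, bounds_max):
    """Scatter directly lit surface samples ('surfels') into a grid of per-voxel radiance."""
    radiance = np.zeros((GRID, GRID, GRID, 3), dtype=np.float32)
    weight = np.zeros((GRID, GRID, GRID, 1), dtype=np.float32)
    scale = GRID / (bounds_max - bounds_min)
    for pos, rgb in surfels:                      # pos: xyz array, rgb: lit color
        idx = np.clip(((pos - bounds_min) * scale).astype(int), 0, GRID - 1)
        radiance[tuple(idx)] += rgb
        weight[tuple(idx)] += 1.0
    return radiance / np.maximum(weight, 1.0)     # average radiance per voxel

def gather_indirect(radiance, voxel_idx, radius=2):
    """Crude one-bounce gather: average the radiance of nearby voxels.
    Real VXGI cone-traces a mipmapped grid instead of box-filtering like this."""
    x, y, z = voxel_idx
    sl = lambda v: slice(max(v - radius, 0), min(v + radius + 1, GRID))
    return radiance[sl(x), sl(y), sl(z)].mean(axis=(0, 1, 2))
```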

Other things I'd like to see are more geometry, volumetric foliage, better AI, and destruction. Those provide a much bigger benefit at a lower resource cost.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
The textures are still blatantly CGI; I'd be much more impressed by improvements in texture quality than lighting.
Improved lighting is what is going to improve 'texture quality'.
PBR was a start, but it requires a precipitous stack of effects passes (point lights, area lights, global lights, contact-hardened shadows, screen-space ambient occlusion, per-light specular passes, cubemaps, etc.) to approximate what raytracing achieves directly. 'Add more hacks' isn't going to be of much value, as the remaining cases where the current techniques are insufficient (transparency, reflections, etc.) require massive increases in performance. Accurate real-time reflections, for example, require dynamic cubemaps, which means adding an extra 6 viewports per scene (to render the 6 cube faces, due to the limitations of rectilinear rendering), which for a geometry-limited scene would require a 7x speedup. Even a single flat planar reflection (a mirror) requires an additional viewport and a doubling of performance, which is why vanishingly few games have added real-time mirror reflections over the past decade or so (and the few that have do so with very specific hacks in specific circumstances, like 'reflecting' just the player model by making the mirror a hole and adding a scene of the room with the player model behind it).
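To put that in a toy cost model (purely illustrative; it assumes geometry work scales linearly with the number of viewports rendered):

```python
# Toy cost model: each extra viewport re-runs the scene's vertex/geometry work.
BASE_VIEWPORTS = 1  # the main camera

def relative_geometry_cost(extra_viewports: int) -> float:
    return (BASE_VIEWPORTS + extra_viewports) / BASE_VIEWPORTS

print("one planar mirror:  ", relative_geometry_cost(1), "x")   # 2x -> need double the perf
print("one dynamic cubemap:", relative_geometry_cost(6), "x")   # 7x -> the '7x speedup' above
print("two cubemap probes: ", relative_geometry_cost(12), "x")  # and it compounds per reflector
```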
 

Thehack

Spatial Philosopher
Creator
Mar 6, 2016
2,813
3,670
J-hackcompany.com
Improved lighting is what is going to improve 'texture quality'.
PBR was a start, but it requires a precipitous stack of effects passes (point lights, area lights, global lights, contact-hardened shadows, screen-space ambient occlusion, per-light specular passes, cubemaps, etc.) to approximate what raytracing achieves directly. 'Add more hacks' isn't going to be of much value, as the remaining cases where the current techniques are insufficient (transparency, reflections, etc.) require massive increases in performance. Accurate real-time reflections, for example, require dynamic cubemaps, which means adding an extra 6 viewports per scene (to render the 6 cube faces, due to the limitations of rectilinear rendering), which for a geometry-limited scene would require a 7x speedup. Even a single flat planar reflection (a mirror) requires an additional viewport and a doubling of performance, which is why vanishingly few games have added real-time mirror reflections over the past decade or so (and the few that have do so with very specific hacks in specific circumstances, like 'reflecting' just the player model by making the mirror a hole and adding a scene of the room with the player model behind it).

Voxel global illumination is a better method of handling GI without causing a massive performance hit. A 2080 Ti running at 1080p/144Hz is outrageous just for some shadows and reflections. And even then, it is mostly useful for reflections, which aren't relevant to every game. You won't see a whole lot of reflections in Tomb Raider or something like Uncharted.

You can improve texture quality by using higher-resolution textures with actual bump mapping. Cheaper than RT or tessellation or modeling.

Here are some ways to get more fidelity without completely killing your frame rate:

1. Foliage that isn't just a 2D texture on a flat polygon. Actual volumetric geometry.

2. Procedurally generated assets, so that not everything is copy-and-paste. Repeating textures, repeating polygons -- none of this is representative of real life. Everything is imperfect. Walls aren't always 90 degrees, and our minds can definitely tell.

3. Particles and physics. See PhysX's failure due to being proprietary tech and hard to implement.

4. Voxel illumination: cheaper ambient occlusion and global illumination without killing your fps.

5. Hardware-accelerated sound stage and physics, using modeled HRTF.

6. Better AI.

7. High-res textures.

8. Dynamic resolution.

Seeing the demos and how much RT tanks fps, I feel like it's just a gimmick. I hope to be proven wrong, though. I'm sure at some point RT will take over raster, but for now the performance hit and cost are too much for the average buyer.
 

Duality92

Airflow Optimizer
Apr 12, 2018
307
330
Voxel global illumination is a better method of handling GI without causing a massive performance hit. A 2080 Ti running at 1080p/144Hz is outrageous just for some shadows and reflections. And even then, it is mostly useful for reflections, which aren't relevant to every game. You won't see a whole lot of reflections in Tomb Raider or something like Uncharted.

You can improve texture quality by using higher-resolution textures with actual bump mapping. Cheaper than RT or tessellation or modeling.

Here are some ways to get more fidelity without completely killing your frame rate:

1. Foliage that isn't just a 2D texture on a flat polygon. Actual volumetric geometry.

2. Procedurally generated assets, so that not everything is copy-and-paste. Repeating textures, repeating polygons -- none of this is representative of real life. Everything is imperfect. Walls aren't always 90 degrees, and our minds can definitely tell.

3. Particles and physics. See PhysX's failure due to being proprietary tech and hard to implement.

4. Voxel illumination: cheaper ambient occlusion and global illumination without killing your fps.

5. Hardware-accelerated sound stage and physics, using modeled HRTF.

6. Better AI.

7. High-res textures.

8. Dynamic resolution.

Seeing the demos and how much RT tanks fps, I feel like it's just a gimmick. I hope to be proven wrong, though. I'm sure at some point RT will take over raster, but for now the performance hit and cost are too much for the average buyer.

This. No one is gonna buy a 2080 Ti and then pair it with a 1080/60 monitor; it's just not there yet.
 
  • Like
Reactions: Phuncz

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Voxel global illumination is a better method of handling GI without causing a massive performance hit.
VXGI is an extra member of the stack of raster lighting techniques, i.e. something you use in addition to the current assemblage of hacks.
You can improve texture quality by using higher-resolution textures with actual bump mapping. Cheaper than RT or tessellation or modeling.
Texture res remains in a tradeoff with texture uniqueness (i.e. if you make texture resolution higher, you can have fewer unique textures). Bump mapping (plus normal mapping, plus specular mapping, plus a whole bunch of other pass-specific maps) has been standard for a long time.
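To put rough numbers on that tradeoff (a sketch assuming uncompressed RGBA8 and a notional 4 GiB texture budget; real engines use block compression and streaming, so the counts are illustrative only):

```python
def unique_textures_in_budget(res: int, budget_gib: float = 4.0,
                              bytes_per_texel: int = 4) -> int:
    """How many distinct square textures (with full MIP chains) fit in a VRAM budget."""
    per_texture = res * res * bytes_per_texel * 4 // 3   # ~4/3 overhead for the MIP chain
    return int(budget_gib * 2**30) // per_texture

for res in (1024, 2048, 4096):
    print(f"{res}x{res}: ~{unique_textures_in_budget(res)} unique textures in 4 GiB")
# Each doubling of resolution cuts the number of unique textures you can keep
# resident by roughly 4x -- hence the tradeoff with uniqueness.
```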
1. Foliage that isn't just a 2D texture on a flat polygon. Actual volumetric geometry.
Adding more polys to a scene is another point in favour of raytracing. For rasterisation, adding more polygons increases total scene cost, while RT scales with rendered pixel count and the poly limit comes down to memory space and bandwidth.
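As a toy illustration of the scaling difference (arbitrary units, not measured data; the log factor assumes a BVH-style acceleration structure):

```python
import math

PIXELS = 1920 * 1080

def raster_cost(triangles: int) -> float:
    # Toy model: rasterisation touches every triangle, plus per-pixel shading.
    return triangles + PIXELS

def rt_cost(triangles: int, rays_per_pixel: int = 1) -> float:
    # Toy model: each ray traverses a BVH, roughly log2(triangles) steps.
    return PIXELS * rays_per_pixel * math.log2(max(triangles, 2))

for tris in (1_000_000, 10_000_000, 100_000_000):
    print(f"{tris:>11,} tris   raster {raster_cost(tris):>12,.0f}   rt {rt_cost(tris):>12,.0f}")
# Units are arbitrary and the two columns aren't comparable to each other; the point
# is that raster cost grows linearly with triangle count while the RT column barely moves.
```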
2. Procedurally generated assets, so that not everything is copy-and-paste. Repeating textures, repeating polygons -- none of this is representative of real life. Everything is imperfect. Walls aren't always 90 degrees, and our minds can definitely tell.
Procedural generation still requires memory space (you can't display a texture you can't load), so it trades off disc space requirements more than GPU vRAM requirements. Not having your walls meet at 90° is unrelated to this.
4. Voxel illumination: cheaper ambient occlusion and global illumination without killing your fps.
This is the state we're in today, but refining these effects is only going to get more computationally expensive, not less. The low-hanging fruit has been plucked; all that's left are the harder cases (reflective objects, lighting interreflectance).
5. Hardware-accelerated sound stage and physics, using modeled HRTF.
VR is pushing a resurgence in sound modelling engines. HRTFs are already pretty common, though often unused (if your motherboard has built-in audio, it very likely has an HRTF model), but that last-stage filtering is the easy part.
6. Better AI.
Everyone wants this, but it falls into the same category as "make the image look better" in terms of vagueness.
8. Dynamic resolution.
Already a standard technique, used for decades (e.g. the original WipEout on PS1), with the rendered resolution changing from frame to frame based on performance. Maxwell brought in Multi-Res Shading, and Pascal brought Lens Matched Shading (doing the same but without explicit multiple draw calls); both split the display into chunks to be rendered at different resolutions, specifically for VR, where the presence of optics in front of the panel changes pixel distribution. Turing has added dynamic resolution maps to vary this across the frame without needing to split into different viewports, another benefit of raytracing (and for VR specifically, it allows for non-rectilinear rendering, as nobody uses rectilinear optics).


Pretty much all the "we don't need raytracing, just add X!" techniques have already been added, which is why raytracing is being contemplated in the first place.
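And since dynamic resolution keeps coming up in this thread, here's a minimal sketch of the per-frame flavour described above -- a made-up controller assuming a simple frame-time target; real implementations filter over several frames and clamp more carefully:

```python
TARGET_MS = 1000.0 / 60.0          # aim for 60 fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def next_render_scale(scale: float, last_frame_ms: float) -> float:
    """Nudge the render-resolution scale so GPU frame time converges on the target.
    Pixel cost is roughly proportional to scale**2, hence the square root."""
    headroom = TARGET_MS / last_frame_ms
    return max(MIN_SCALE, min(MAX_SCALE, scale * headroom ** 0.5))

scale = 1.0
for frame_ms in (14.0, 22.0, 25.0, 19.0, 16.0):   # pretend GPU timings
    scale = next_render_scale(scale, frame_ms)
    print(f"frame {frame_ms:4.1f} ms -> render at {scale:.0%} of output resolution")
```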
 

Thehack

Spatial Philosopher
Creator
Mar 6, 2016
2,813
3,670
J-hackcompany.com
VXGI is an extra member of the stack of raster lighting techniques, i.e. something you use in addition to the current assemblage of hacks.

Texture res remains in a tradeoff with texture uniqueness (i.e. if you make texture resolution higher, you can have fewer unique textures). Bump mapping (plus normal mapping, plus specular mapping, plus a whole bunch of other pass-specific maps) has been standard for a long time.

Adding more polys to a scene is another point in favour of raytracing. For rasterisation, adding more polygons increases total scene cost, while RT scales with rendered pixel count and the poly limit comes down to memory space and bandwidth.

Procedural generation still requires memory space (you can't display a texture you can't load), so it trades off disc space requirements more than GPU vRAM requirements. Not having your walls meet at 90° is unrelated to this.

This is the state we're in today, but refining these effects is only going to get more computationally expensive, not less. The low-hanging fruit has been plucked; all that's left are the harder cases (reflective objects, lighting interreflectance).

VR is pushing a resurgence in sound modelling engines. HRTFs are already pretty common, though often unused (if your motherboard has built-in audio, it very likely has an HRTF model), but that last-stage filtering is the easy part.

Everyone wants this, but it falls into the same category as "make the image look better" in terms of vagueness.
Already a standard technique, used for decades (e.g. the original WipEout on PS1), with the rendered resolution changing from frame to frame based on performance. Maxwell brought in Multi-Res Shading, and Pascal brought Lens Matched Shading (doing the same but without explicit multiple draw calls); both split the display into chunks to be rendered at different resolutions, specifically for VR, where the presence of optics in front of the panel changes pixel distribution. Turing has added dynamic resolution maps to vary this across the frame without needing to split into different viewports, another benefit of raytracing (and for VR specifically, it allows for non-rectilinear rendering, as nobody uses rectilinear optics).


Pretty much all the "we don't need raytracing, just add X!" techniques have already been added, which is why raytracing is being contemplated in the first place.

They've been added, but there's been little to no implementation.

For example, I only know of one game that tried voxel illumination. No EA game I've seen has engine-based HRTF; the motherboard stuff is fake, as is the headphone-based solution. It takes the 5.1 signal, puts the player in the center, and generates a fake HRTF signal.

For PC, dynamic resolution is not implemented in a major game, at least not one that I've seen advertised on PC. Xbox and PS4 games do this regularly.

For texture- and asset-heavy stuff, it is cheaper to double up the RAM than to charge us $1200 for a chip that runs RT on a 1080/144 monitor.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
engine-based HRTF
HRTF is applied to the downmixed stream; the engine is what needs to handle spatial audio (e.g. environmental absorption/reflectance modelling).
the motherboard stuff is fake, as is the headphone-based solution
It is no more 'fake' than any other HRTF. HRTF is literally just a transfer function (Head-Related Transfer Function) applied to a signal. Don't mix it up with environmental audio simulation (which can be done completely separately from HRTF, e.g. if you want to output to discrete speakers).
For PC, dynamic resolution is not implemented in a major game, at least not one that I've seen advertised on PC. Xbox and PS4 games do this regularly.
Titanfall 2, Gears 4, Path of Exile, Forza 6, Dishonored 2, Redout, and one of the Assassin's Creed games use it, whole piles of VR games do it, Unreal Engine has it as an available option, etc. HiAlgo can retrofit it to existing games, but that only works kinda-OK at best. It's also a technique that only benefits pixel-limited scenes, not ones where the bottleneck is somewhere else (e.g. geometry, texture loading, on-GPU physics, etc.).
For texture- and asset-heavy stuff, it is cheaper to double up the RAM than to charge us $1200 for a chip that runs RT on a 1080/144 monitor.
RAM is extremely expensive at the moment due to massive demand and long lead times to set up new fabs*. This is why prices across the board remain high; demand is outstripping supply. It also hits the Catch-22 of only being of benefit when massive textures are used, and nobody would ship massive textures without GPUs able to load them. On top of that, increasing texture resolution only benefits fidelity at extremely short distances: as soon as you get any further away, you hit the next MIPmap level, the texture resolution used drops (out of necessity, to avoid sampling artefacts), and you're back down to the same visual quality as everyone else. Or in other words: increasing texture size can be thought of as adding an extra level 'on top of' an existing MIPmap.

* Consider the common conspiracy theories that manufacturers are sitting on piles of unsold cards (often just attributed to the 10xx series, despite all cards seeing the same price trends), and that prices are being kept massively higher to inflate margins. The first manufacturer to drop prices down to 'normal' margins would see a massive sales increase compared to everyone else and would quickly be able to sell off their supposed 'excess' stock.
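The 'extra level on top of an existing MIPmap' point is easy to see with a quick memory calculation (a sketch assuming uncompressed RGBA8 square textures and no block compression):

```python
def mip_chain_bytes(top_res: int, bytes_per_texel: int = 4) -> int:
    """Total memory for a square texture plus its full MIP chain (uncompressed)."""
    total, res = 0, top_res
    while res >= 1:
        total += res * res * bytes_per_texel
        res //= 2
    return total

for res in (2048, 4096, 8192):
    print(f"{res}x{res}: {mip_chain_bytes(res) / 2**20:.1f} MiB with full MIP chain")
# Each doubling of resolution adds one level on top of the chain and roughly quadruples
# the memory, yet that top level is only ever sampled when the camera is very close.
```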
 

VegetableStu

Shrink Ray Wielder
Original poster
Aug 18, 2016
1,949
2,619

holy crap, if the next generation also had founders edition cards like the 20-series, I'm going to buy straight from Nvidia O_O
 

tinyitx

Shrink Ray Wielder
Jan 25, 2018
2,279
2,338
holy crap, if the next generation also had founders edition cards like the 20-series, I'm going to buy straight from Nvidia O_O
Yes, this is their vapour chamber heatsink. And the FE is factory overclocked by 90MHz too. Looks like Nvidia no longer wants to position the FE as a plain vanilla version.
 
  • Like
Reactions: VegetableStu

Thehack

Spatial Philosopher
Creator
Mar 6, 2016
2,813
3,670
J-hackcompany.com
HRTF is applied to the downmixed stream; the engine is what needs to handle spatial audio (e.g. environmental absorption/reflectance modelling).

It is no more 'fake' than any other HRTF. HRTF is literally just a transfer function (Head-Related Transfer Function) applied to a signal. Don't mix it up with environmental audio simulation (which can be done completely separately from HRTF, e.g. if you want to output to discrete speakers).

Titanfall 2, Gears 4, Path of Exile, Forza 6, Dishonored 2, Redout, and one of the Assassin's Creed games use it, whole piles of VR games do it, Unreal Engine has it as an available option, etc. HiAlgo can retrofit it to existing games, but that only works kinda-OK at best. It's also a technique that only benefits pixel-limited scenes, not ones where the bottleneck is somewhere else (e.g. geometry, texture loading, on-GPU physics, etc.).

RAM is extremely expensive at the moment due to massive demand and long lead times to set up new fabs*. This is why prices across the board remain high; demand is outstripping supply. It also hits the Catch-22 of only being of benefit when massive textures are used, and nobody would ship massive textures without GPUs able to load them. On top of that, increasing texture resolution only benefits fidelity at extremely short distances: as soon as you get any further away, you hit the next MIPmap level, the texture resolution used drops (out of necessity, to avoid sampling artefacts), and you're back down to the same visual quality as everyone else. Or in other words: increasing texture size can be thought of as adding an extra level 'on top of' an existing MIPmap.

* Consider the common conspiracy theories that manufacturers are sitting on piles of unsold cards (often just attributed to the 10xx series, despite all cards seeing the same price trends), and that prices are being kept massively higher to inflate margins. The first manufacturer to drop prices down to 'normal' margins would see a massive sales increase compared to everyone else and would quickly be able to sell off their supposed 'excess' stock.

The motherboard HRTF is fake because it works off the crappy 5.1-channel mix instead of being native to the engine: audio engine -> 5.1-channel 3D mix -> HRTF, versus the audio engine handling the HRTF itself. I also lump environmental modeling in with HRTF, but you're right that people consider that separate.

Real HRTF uses the player's position inside the game engine to generate a two-channel signal directly, instead of mixing 5.1 back down to two channels. It should also model head shape to improve spatial awareness. I only know of CS2 being a major game with this built into the engine.
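To make the "engine generates two channels directly" point concrete, here's a bare-bones sketch of per-source HRTF rendering. It assumes you already have a measured HRIR pair for the source's direction (the arrays below are made up for illustration); this is toy code, not a shipping audio path:

```python
import numpy as np

def render_binaural(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray,
                    distance_m: float = 1.0) -> np.ndarray:
    """Spatialize one mono source by convolving it with the HRIR pair for its direction
    relative to the listener's head, plus a simple distance attenuation. In-engine HRTF
    does this per source, instead of folding a 5.1 mix down to stereo afterwards."""
    gain = 1.0 / max(distance_m, 1.0)
    left = np.convolve(mono, hrir_left)[: len(mono)] * gain
    right = np.convolve(mono, hrir_right)[: len(mono)] * gain
    return np.stack([left, right], axis=-1)        # (samples, 2) binaural output

# Toy usage with made-up impulse responses (a real HRIR comes from measurements):
sr = 48_000
mono = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)
fake_left = np.array([0.9, 0.1], dtype=np.float32)               # closer ear: louder, earlier
fake_right = np.array([0.0, 0.0, 0.6, 0.2], dtype=np.float32)    # far ear: delayed, quieter
stereo = render_binaural(mono, fake_left, fake_right, distance_m=2.0)
```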

They are charging us $1200 for a GPU with some RT cores that tank your fps, so you're playing at 1080/144Hz on essentially a Titan V. I would say that while RAM is expensive, it would still be cheaper than the increased cost and yield hit of adding more RT cores.

There is still much to be done with textures. We don't even need more RAM chips; onboard SSD texture swapping could also work. Every game I've played, even with textures maxed out, still looks fuzzy and not crisp near my character. Increasing texture quality also doesn't kill your fps.
 

VegetableStu

Shrink Ray Wielder
Original poster
Aug 18, 2016
1,949
2,619
Yes, this is their vapour chamber heatsink. And the FE is factory overclocked by 90MHz too. Looks like Nvidia no longer wants to position the FE as a plain vanilla version.
not only that! I'm planning to deshroud it and fit it in the last slot, right over two active fans against the case! YAY!