Adoption may be quite a bit faster than that. Hardware T&L took off extremely rapidly, as did the introduction of pixel & vertex shaders, and the switch to unified shaders.
I'm not expecting 7nm (or 10nm or any similarly scaled process) to offer dramatic performance gains - if any at all - same as with the last few nodes, nor any real price gains, due to dramatically increased process complexity (both SAQP and EUV have yet to be used in production, and everyone is having problems getting yields up on large dies). Future gains will come from process maturity and design improvements; gains from pure "fab the same thing but smaller" shrinks vanished the better part of a decade ago.
The ray tracing tech seems really promising, but for me it's hard to justify spending that kind of money just for that, considering the 1080 I have is still serving me well. I'm interested in the performance gains in non-RTX scenarios, as they weren't really addressed in the presentation.
> Adoption may be quite a bit faster than that. Hardware T&L took off extremely rapidly, as did the introduction of pixel & vertex shaders, and the switch to unified shaders.
Oh, yeah, that's entirely possible. Even so, let me move the goalposts a bit by asking how well the cards that debuted each of those technologies held up compared to the cards that followed them. If ray tracing adoption takes off like a wildfire (which is something I earnestly hope for!), how will the 20-series' ray tracing performance stack up in a few years? How long will it remain relevant?
https://www.anandtech.com/show/13261/hands-on-with-the-geforce-rtx-2080-ti-realtime-raytracing
It's pretty much a shit show. They ran BFV at 1080p on a 2080 Ti to show off RT. RT is a huge resource drain.
Yeah... The more I read the more I lean toward canceling my 2080 Ti preorder. It just doesn't seem worth it all things considered. At least Nvidia lets you cancel before the item is shipped, so I have until Sept 20th to decide. Hopefully we'll get more benchmarks by then.
> The textures are still blatantly CGI, I'd be much more impressed by improvement in texture quality than lighting.
Improved lighting is what is going to improve 'texture quality'.
PBR was a start, but requires a precipitous stack of effects passes to get decent looking results out of it (point lights, area lights, global lights, contact hardened shadows, screen-space ambient occlusion, specular passes (per light), cubemaps, etc...) to approximate the results of what raytracing achieves directly. 'Add more hacks' isn't going to be of much value, as the remaining cases where the current techniques are insufficient (transparency, reflections, etc) require massive increases in performance. Real-time accurate reflections for example require dynamic cubemaps, which would mean adding an extra 6 viewports per scene (to render the 6 cube faces, due to the limitations of rectilinear rendering) which for a geometry-limited scene would require a 7x speedup. Even a single flat planar reflection (mirror) requires an additional viewport and doubling of performance, which is why vanishingly few games have added real-time reflections for mirrors for the past decade or so (and the few that have do so with very specific hacks in specific circumstances, like 'reflecting' just the player model by making the mirror a hole and adding a scene of the room with the player in behind it).
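To make the viewport arithmetic above concrete, here is a rough sketch (toy numbers, not any real engine's renderer): one main view plus six cube faces gives the 7x figure, a single planar mirror gives the 2x, while traced reflections scale with the reflective pixels shaded rather than with extra scene-wide geometry passes.

```python
# Toy cost model for the viewport counts mentioned above (illustrative only;
# real engines cull per-face and reuse work, so treat these as upper bounds).

def raster_passes(planar_mirrors: int = 0, dynamic_cubemaps: int = 0) -> int:
    """Geometry passes per frame for a rasterised scene:
    1 pass for the main view, +1 per planar mirror (mirrored camera),
    +6 per dynamic cubemap (one pass per cube face)."""
    return 1 + planar_mirrors + 6 * dynamic_cubemaps

def raytraced_reflection_rays(reflective_pixels: int, rays_per_pixel: int = 1) -> int:
    """Ray-traced reflections scale with shaded reflective pixels instead."""
    return reflective_pixels * rays_per_pixel

# No reflections: 1 pass. One mirror: 2 passes (2x). One dynamic cubemap: 7 passes (7x).
for mirrors, cubes in [(0, 0), (1, 0), (0, 1), (2, 3)]:
    print(f"mirrors={mirrors} cubemaps={cubes} -> {raster_passes(mirrors, cubes)} geometry passes")

print(raytraced_reflection_rays(300_000))  # e.g. ~300k reflective pixels on screen
```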
Voxel global illumination is a better method of handling GI without causing a massive performance hit. A 2080 Ti running at 1080p/144Hz is outrageous just for some shadows and reflections. And even then, it is mostly useful for reflections, which aren't relevant in every game. You won't see a whole lot of reflections in Tomb Raider or something like Uncharted.
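For illustration only, here is a tiny 2D toy of the voxel GI idea (inject direct light into a coarse grid, then gather indirect light by marching a few directions through it). Real VXGI works on a 3D clipmap with cone tracing, so treat this purely as a sketch of the concept.

```python
# Rough 2D stand-in for voxel GI: direct light lives in a coarse grid, indirect
# light at a point is approximated by gathering from lit voxels in a few directions.
import math

N = 16                                  # grid resolution
grid = [[0.0] * N for _ in range(N)]    # direct radiance per voxel

# "Inject" some direct light: a lit wall along the top row of the grid.
for x in range(N):
    grid[0][x] = 1.0

def gather_indirect(px: float, py: float, directions: int = 8, steps: int = 12) -> float:
    """Approximate bounce light at (px, py) by sampling the lit voxel grid
    along a handful of directions (a crude stand-in for cone tracing)."""
    total = 0.0
    for d in range(directions):
        ang = 2.0 * math.pi * d / directions
        dx, dy = math.cos(ang), math.sin(ang)
        for s in range(1, steps + 1):
            x, y = int(px + dx * s), int(py + dy * s)
            if 0 <= x < N and 0 <= y < N and grid[y][x] > 0.0:
                total += grid[y][x] / s   # crude distance falloff
                break                     # stop at the first lit voxel in this direction
    return total / directions

print(gather_indirect(8.0, 8.0))   # a point mid-room picks up bounce from the lit wall
```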
You can improve texture quality by using higher-resolution textures with actual bump mapping. Cheaper than RT, tessellation, or modeling.
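As a minimal sketch of what bump mapping actually does (toy height map and light, nothing engine-specific): perturb the surface normal from the height map and let that change the diffuse shading.

```python
# Bump/normal mapping in miniature: derive a normal from a height map via finite
# differences, then shade with a simple Lambert term. Toy data only.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def bump_normal(height, x, y, strength=1.0):
    """Normal for a flat +Z surface, perturbed by the height map's local slope."""
    dhdx = (height[y][x + 1] - height[y][x - 1]) * 0.5
    dhdy = (height[y + 1][x] - height[y - 1][x]) * 0.5
    return normalize((-strength * dhdx, -strength * dhdy, 1.0))

def lambert(normal, light_dir):
    l = normalize(light_dir)
    return max(0.0, sum(n * c for n, c in zip(normal, l)))

# A tiny 4x4 height map with a raised patch in the middle.
H = [[0.0, 0.0, 0.0, 0.0],
     [0.0, 0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.0, 0.0]]

light = (0.5, 0.0, 1.0)
print(lambert((0.0, 0.0, 1.0), light))       # flat surface
print(lambert(bump_normal(H, 1, 1), light))  # bumped texel catches the light differently
```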
Here are some ways to get more fidelity without completely killing your frame rate:
1. Foliage that isn't just a 2D texture on a flat polygon. Actual volumetric geometry.
2. Procedurally generated assets so that not everything is copy-and-paste. Repeating textures, repeated geometry - none of this exists in real life. Everything is imperfect. Walls aren't always at 90 degrees, and our minds can definitely tell.
3. Particles and physics. See PhysX's failure due to being proprietary tech and hard to implement.
4. Voxel illumination, cheaper ambient occlusion and global illumination without killing your fps.
5. Hardware-accelerated sound stage and physics, using modeled HRTF.
6. Better AI.
7. High-res textures.
8. Dynamic resolution.
Seeing the demos and how much it tanks FPS, I feel like RT is just a gimmick. I hope to be proven wrong, though. I'm sure at some point RT will take over raster, but for now the performance hit and cost are too much for the average buyer.
> Voxel global illumination is a better method of handling GI without causing a massive performance hit.
VXGI is an extra member of the stack of raster lighting techniques, i.e. something you use in addition to the current assemblage of hacks.
> You can improve texture quality by using higher-resolution textures with actual bump mapping. Cheaper than RT, tessellation, or modeling.
Texture res remains in a tradeoff with texture uniqueness (i.e. if you make texture resolution higher, you can have fewer unique textures). Bump mapping (plus normal mapping, plus specular mapping, plus a whole bunch of other pass-specific maps) has been standard for a long time.
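Back-of-the-envelope version of that tradeoff, using a made-up 4 GiB texture budget and uncompressed RGBA purely to show the scaling: each doubling of resolution quarters the number of unique textures that fit.

```python
# Fixed VRAM budget vs texture resolution: doubling the side length quarters
# how many unique textures fit. Budget and format below are invented round numbers.

def unique_textures(budget_bytes: int, side: int, bytes_per_texel: int = 4) -> int:
    """How many side x side textures fit in the budget (ignoring mip chains and
    compression, which shift the constants but not the scaling)."""
    return budget_bytes // (side * side * bytes_per_texel)

BUDGET = 4 * 1024**3  # pretend 4 GiB is left over for textures
for side in (1024, 2048, 4096, 8192):
    print(f"{side}x{side}: {unique_textures(BUDGET, side)} unique textures")
```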
> 1. Foliage that isn't just a 2D texture on a flat polygon. Actual volumetric geometry.
Adding more polys to a scene is another point in favour of raytracing. For rasterisation, adding more polygons increases total scene complexity, while RT scales with rendered pixel count and the poly limit comes down to memory space and bandwidth.
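Purely illustrative scaling sketch (arbitrary units, ignoring culling, LOD, BVH build cost and ray incoherence): raster work grows with the triangles submitted per view, while traced work grows with rays times roughly the log of the triangle count through an acceleration structure.

```python
# The two cost functions are not in comparable units; only their growth with
# triangle count matters here.
import math

def raster_cost(triangles: int, views: int = 1) -> float:
    return triangles * views            # every extra view re-submits the geometry

def raytrace_cost(pixels: int, triangles: int, rays_per_pixel: int = 1) -> float:
    return pixels * rays_per_pixel * math.log2(max(triangles, 2))  # ~BVH depth

PIXELS = 1920 * 1080
for tris in (1_000_000, 10_000_000, 100_000_000):
    print(f"{tris:>11} tris: raster={raster_cost(tris):>13.0f}  rt={raytrace_cost(PIXELS, tris):>13.0f}")
```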
> 2. Procedurally generated assets so that not everything is copy-and-paste. Repeating textures, repeated geometry - none of this exists in real life. Everything is imperfect. Walls aren't always at 90 degrees, and our minds can definitely tell.
Procedural generation still requires memory space (you can't display a texture you can't load), so it trades off disc space requirements more than GPU VRAM requirements. Not having your walls meet at 90° is unrelated to this.
> 4. Voxel illumination, cheaper ambient occlusion and global illumination without killing your fps.
This is the state we're in today, but refining these effects is only going to get more computationally expensive, not less. The low-hanging fruit has been plucked; all that's left are the harder cases (reflective objects, lighting interreflectance).
> 5. Hardware-accelerated sound stage and physics, using modeled HRTF.
VR is pushing a resurgence in sound modelling engines. HRTFs are already pretty common though often unused (if your motherboard has built-in audio, it very likely has an HRTF model), but that last-stage filtering is the easy part.
> 6. Better AI.
Everyone wants this, but it's about as vague as "make the image look better".
> 8. Dynamic resolution.
Already a standard technique, used for easily two decades (e.g. the original WipEout on PS1), with the rendered resolution changing per-frame based on performance. Maxwell brought in Multi-Res Shading, and Pascal Lens Matched Shading (doing the same but without explicit multiple draw calls), which split the display into chunks to be rendered at different resolutions, specifically for VR where the presence of optics in front of the panel changes pixel distribution. Turing has added dynamic resolution maps to vary this across the frame without needing to split into different viewports, another benefit of raytracing (and for VR specifically, it allows for non-rectilinear rendering, as nobody uses rectilinear optics).
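A toy dynamic-resolution loop, with the GPU cost faked as proportional to pixel count, just to show the basic feedback: scale the render resolution each frame toward a frame-time target.

```python
# Minimal dynamic-resolution controller sketch. The "GPU" here is a made-up
# function whose cost is proportional to the number of pixels rendered.
TARGET_MS = 1000.0 / 60.0          # aim for 60 fps
NATIVE = (2560, 1440)
scale = 1.0                        # fraction of native resolution per axis

def fake_frame_ms(width: int, height: int) -> float:
    return width * height / 200_000.0   # pretend GPU: ~18 ms at native 1440p

for frame in range(8):
    w, h = int(NATIVE[0] * scale), int(NATIVE[1] * scale)
    ms = fake_frame_ms(w, h)
    # Nudge the scale toward the target; clamp so it never collapses or exceeds native.
    scale = max(0.5, min(1.0, scale * (TARGET_MS / ms) ** 0.5))
    print(f"frame {frame}: {w}x{h} took {ms:.1f} ms, next scale {scale:.2f}")
```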
Pretty much all the "we don't need raytracing, just add X!" techniques have already been added, which is why raytracing is being contemplated in the first place.
> engine based HRTF
HRTF is applied to the downmixed stream; the engine is what needs to handle spatial audio (e.g. environmental absorption/reflectance modelling).
> the motherboard is fake stuff, as the headphone based solution
It is no more 'fake' than any other HRTF. HRTF is literally just a transfer function (Head-Related Transfer Function) applied to a signal. Don't mix it up with environmental audio simulation (which can be done completely separately from HRTF, e.g. if you want to output to discrete speakers).
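To illustrate "just a transfer function": convolve a mono source with a made-up left/right impulse-response pair standing in for a measured HRTF. Real HRTFs are measured per direction; environmental effects (occlusion, reverb) would be simulated separately, before this step.

```python
# HRTF as a per-ear filter: these toy impulse responses only mimic a source off to
# the left (louder and earlier in the left ear). Invented numbers for illustration.

def convolve(signal, ir):
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

mono = [0.0, 1.0, 0.5, 0.25, 0.0]          # some mono source samples
hrtf_left  = [0.9, 0.2]                    # near ear: strong, immediate
hrtf_right = [0.0, 0.0, 0.4, 0.1]          # far ear: delayed and attenuated

left_ear  = convolve(mono, hrtf_left)
right_ear = convolve(mono, hrtf_right)
print(left_ear)
print(right_ear)
```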
> For pc, the dynamic resolution is not implemented in a major game, at least not one that I've seen advertised on PC. Xbox and Ps4 regularly does this.
Titanfall 2, Gears 4, Path of Exile, Forza 6, Dishonored 2, Redout, and one of the Assassin's Creed games use it; whole piles of VR games do it; Unreal Engine has it as an available option; etc. HiAlgo can retrofit it to existing games, but only works kinda-OK at best. It's also a technique that benefits pixel-limited scenes, but not if the bottleneck is something else (e.g. geometry, texture loading, on-GPU physics, etc.).
> For texture and asset heavy stuff, it is cheaper to double up the RAM than charging us $1200 for a chip that runs RT on a 1080/144 monitor.
RAM is extremely expensive at the moment due to massive demand and long lead times to set up new fabs*. This is why prices across the board remain high: demand is outstripping supply. It also hits the Catch-22 of only being of benefit when massive textures are used, and nobody would ship massive textures without GPUs able to load them. On top of that, increasing texture resolution only benefits fidelity at extremely short distances: as soon as you get any further away, you hit the next MIPmap level, the texture resolution used drops (out of necessity, to avoid sampling artefacts), and you're back down to the same visual quality as everyone else. Or in other words: increasing texture size can be thought of as adding an extra level 'on top of' an existing MIPmap.
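Sketch of that MIPmap point, with invented numbers: once the textured surface covers fewer screen pixels than the base texture has texels, a 4K and an 8K texture resolve to the same mip level, so the extra resolution only shows up at very short range.

```python
# Which mip level gets used depends on how many screen pixels the surface spans,
# not on how big the base texture is. Numbers below are purely illustrative.
import math

def mip_level(texture_size: int, screen_pixels: float) -> int:
    """Mip level picked when the textured surface spans `screen_pixels` pixels
    across on screen (0 = full-resolution base level)."""
    if screen_pixels >= texture_size:
        return 0
    return int(math.log2(texture_size / screen_pixels))

def effective_texels(texture_size: int, screen_pixels: float) -> int:
    return texture_size >> mip_level(texture_size, screen_pixels)

for screen_px in (8192, 4096, 1024, 256):
    print(f"surface {screen_px:>4}px wide on screen: "
          f"4K texture -> {effective_texels(4096, screen_px)} texels, "
          f"8K texture -> {effective_texels(8192, screen_px)} texels")
```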
> holy crap, if the next generation also had founders edition cards like the 20-series, I'm going to buy straight from Nvidia O_O
Yes, this is their vapour chamber heatsink. And the FE is factory overclocked by 90MHz too. Looks like Nvidia no longer wants to position the FE as a plain vanilla version.
* Consider the common conspiracy theories that manufacturers are sitting on piles of unsold cards (often just attributed to the 10xx series, despite all cards seeing the same price trends), and that prices are being kept massively higher to inflate margins. The first manufacturer to drop prices down to 'normal' margins would see a massive sales increase compared to everyone else and would quickly be able to sell off their supposed 'excess' stock.
Not only that! I'm planning to deshroud it and fit it in the last slot, right over two active fans against the case! YAY!