
GPU GeForce 20 series (RTX) discussion thread (E: 2070 Review embargo lifted!)

TheHig

King of Cable Management
Oct 13, 2016
951
1,171
Yeah, smart play really. This is first gen, and historically the next iteration is where it gets good. In a way, the lack of competition allows Nvidia to take this risk right now and possibly make a leap forward while keeping market share, even if it comes up short now. Or it's HairWorks all over again? Lol

Interesting times ahead.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
The motherboard HRTF is fake because it uses the 5.1 channel mix instead of being native to the engine: audio engine -> 5.1 channel 3D mix -> HRTF, versus the audio engine handling the HRTF itself. It uses the crappy 5.1 channel mix instead. I also include environmental modeling in HRTF, but you're right that people consider that separate.

Real HRTF uses the player's space inside the game engine to generate a 2-channel mix directly, instead of mixing 5.1 back down to 2 channels. It should also model head shape to improve spatial awareness. CS2 is the only major game I know of that has this built into the engine.
Again, you're getting mixed up between two different things:
The HRTF is a static transform that takes into account things that don't change dynamically (i.e. your head remains your head) and applies it to an audio stream. This can be applied to a stereo or a 5.1 (or 7.1, or however many discrete channels you want to downmix to). The modelling of sound propagation around the head is done once offline to produce the transform, and this transform is used for all real-time processing. Because it's a fixed transform it is very simple and efficient to apply, but it also only applies to the head-related effects of sound (i.e. those that allow easier discrimination of source direction).
The sound engine is what deals with the dynamic effects of the (relative to the head) moving environment on sound sources within the environment. Reflections, scattering, frequency-dependent attenuation, etc. It's what generates the stereo (or multichannel) mix that then gets fed to the HRTF. You could in theory model the head in this engine and use that in lieu of an HRTF, but you'd just be wasting processing time for no benefit (and depending on the fidelity your real-time engine is capable of, possibly even get worse results).
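To put the "static transform" half of that in concrete terms, here's a rough sketch (my own toy example, not any engine's actual code): applying an HRTF is just convolving each channel with a precomputed head-related impulse response (HRIR) per ear. The HRIR taps below are made-up placeholders; real ones come from measured datasets and are selected per source direction.

```python
# Minimal HRTF-application sketch. Assumptions: numpy is available, and the
# HRIR taps are made-up placeholders standing in for measured data.
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Convolve a mono stream with a fixed HRIR pair -> binaural (2-channel) output."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

# 1 kHz test tone at 48 kHz; toy HRIRs for a source off to the left
fs = 48_000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
hrir_l = np.array([0.90, 0.05, 0.02])  # near ear: louder, earlier
hrir_r = np.array([0.00, 0.40, 0.10])  # far ear: quieter, delayed
binaural = apply_hrtf(tone, hrir_l, hrir_r)
```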
There is still much to be done with textures. We don't even need RAM chips; onboard SSD texture swapping could also work.
Putting an SSD on the card has no benefit for gaming use (and is functionally equivalent to DMA access over the PCIe bus that has been used for years). Getting textures from backing storage over the PCIe bus to the card is not a bottleneck on performance. If you were to try and keep textures out of vRAM and only load them on the fly (as opposed to the current practice of caching every level texture into vRAM until you run out of vRAM or run out of textures, and aggressively flushing those cached textures if that space is needed for active tasks), then you would only see a performance regression.
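As a rough back-of-envelope illustration of why the bus isn't the bottleneck (the bandwidth and texture-size figures below are my own assumptions, not measurements from anywhere in this thread): even PCIe 3.0 x16 has room for a couple of hundred megabytes of texture traffic per frame at 60 fps.

```python
# Back-of-envelope sketch; assumed figures, illustrative only.
pcie3_x16_bytes_per_s = 15.75e9          # ~15.75 GB/s usable on PCIe 3.0 x16
fps = 60
budget_per_frame = pcie3_x16_bytes_per_s / fps

# a 4096x4096 texture in a 1-byte-per-texel block format, plus ~1/3 for the mip chain
texture_bytes = 4096 * 4096 * 1 * 4 / 3

print(f"~{budget_per_frame / 1e6:.0f} MB/frame of transfer budget, "
      f"or ~{budget_per_frame / texture_bytes:.0f} such textures every frame")
```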
Any game I've played, even with textures maxed out, still looks fuzzy and not crisp near my character. Increasing the texture quality also doesn't kill your FPS.
That's down to texture filtering; increasing texture resolution would only make the problem worse (namely by introducing aliasing). MIPmaps change level based on the pixel-to-texel ratio (how many texels a pixel samples), and the selected level corresponds to an absolute MIPmap size, not a relative one. If the optimum MIPmap is 64x64 for a given draw distance, the 64x64 MIPmap will be used regardless of what the MIP level 0 ('native' texture) resolution is. Thus, increasing texture resolution enables higher fidelity for closer and closer objects, but does not affect fidelity once you start stepping down MIP levels. See this image for example: anything at the 512x512 MIP or below would be unaffected by any increase in texture size.
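A toy example of the absolute-vs-relative point (my own illustrative math, not a real sampler): at the same draw distance, a 1024px and a 4096px base texture end up sampling the same 64x64 MIP, because the level is chosen from how many texels a pixel covers; only up close does the larger base texture buy you more detail.

```python
# Toy MIP-level selection; 'pixel_world_width' stands in for draw distance
# (how much surface one screen pixel covers). Assumed numbers, illustrative only.
import math

def selected_mip_size(base_size, pixel_world_width):
    # linear texels covered by one pixel at MIP 0, for a surface 1 world unit wide
    texels_per_pixel = base_size * pixel_world_width
    level = max(0, round(math.log2(max(texels_per_pixel, 1.0))))
    return max(1, base_size >> level)

for base in (1024, 4096):
    far = selected_mip_size(base, pixel_world_width=1 / 64)     # distant surface
    near = selected_mip_size(base, pixel_world_width=1 / 4096)  # right up close
    print(f"{base}px base -> {far}x{far} MIP far away, {near}x{near} MIP up close")
```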
 

Thehack

Spatial Philosopher
Creator
Mar 6, 2016
2,813
3,670
J-hackcompany.com
Again, you're getting mixed up between two different things:
The HRTF is a static transform that takes into account things that don't change dynamically (i.e. your head remains your head) and applies it to an audio stream. This can be applied to a stereo or a 5.1 (or 7.1, or however many discrete channels you want to downmix to). The modelling of sound propagation around the head is done once offline to produce the transform, and this transform is used for all real-time processing. Because it's a fixed transform it is very simple and efficient to apply, but it also only applies to the head-related effects of sound (i.e. those that allow easier discrimination of source direction).
The sound engine is what deals with the dynamic effects of the (relative to the head) moving environment on sound sources within the environment. Reflections, scattering, frequency-dependent attenuation, etc. It's what generates the stereo (or multichannel) mix that then gets fed to the HRTF. You could in theory model the head in this engine and use that in lieu of an HRTF, but you'd just be wasting processing time for no benefit (and depending on the fidelity your real-time engine is capable of, possibly even get worse results).
Putting an SSD on the card has no benefit for gaming use (and is functionally equivalent to DMA access over the PCIe bus that has been used for years). Getting textures from backing storage over the PCIe bus to the card is not a bottleneck on performance. If you were to try and keep textures out of vRAM and only load them on the fly (as opposed to the current practice of caching every level texture into vRAM until you run out of vRAM or run out of textures, and aggressively flushing those cached textures if that space is needed for active tasks), then you would only see a performance regression.
That's down to texture filtering; increasing texture resolution would only make the problem worse (namely by introducing aliasing). MIPmaps change level based on the pixel-to-texel ratio (how many texels a pixel samples), and the selected level corresponds to an absolute MIPmap size, not a relative one. If the optimum MIPmap is 64x64 for a given draw distance, the 64x64 MIPmap will be used regardless of what the MIP level 0 ('native' texture) resolution is. Thus, increasing texture resolution enables higher fidelity for closer and closer objects, but does not affect fidelity once you start stepping down MIP levels. See this image for example: anything at the 512x512 MIP or below would be unaffected by any increase in texture size.

Concerning HRTF, I see what you mean now; by its technical definition it is just a fixed function. Would the correct term be 3D spatial audio? Then yes, I absolutely would love that. (CSGO implements 3D audio and calls it HRTF mode.)

Mixing 5.1 down to HRTF is surround sound, not 3D sound. Unless I'm missing something, 5.1 does not account for sounds on the Z axis, so there are losses in fidelity there. I would say audio adds a huge immersion benefit; our response to audio stimulus is much faster than to visual. I realize marketing says visuals sell more, but I'd love a game with 3D spatial audio and environmental processing. Head tracking for audio hasn't been implemented outside of VR, and it's also something I'd like to see in our games.

So if we were to add an enormous amount of texture swapping, the PCIe bus would be able to keep up? And yes, I'm talking about increasing the texture resolution of up-close objects. Obviously objects more than 20 feet away look fine, but crawling in grass or walking in corridors, the textures look ass. Of course the weapon texture model looks great, but then you look at the other characters and they look much lower resolution. And then there's the fact that it's flat; fauna has volume.

Then there's the fact that textures are unnatural and repeat... Our eyes are very good at recognizing patterns, and the game world is, by design, very inorganic. We notice the repeating grass polygons and textures, and that the walls are all the same. If you've ever seen an ancient building, you wouldn't expect all its stones to be perfectly in place, or the walls to be at perfect 90-degree angles. Perhaps use those AI cores to insert imperfections.
 
Last edited:

Biowarejak

Maker of Awesome | User 1615
Platinum Supporter
Mar 6, 2017
1,744
2,262
Concerning HRTF, I see what you mean now; by its technical definition it is just a fixed function. Would the correct term be 3D spatial audio? Then yes, I absolutely would love that. (CSGO implements 3D audio and calls it HRTF mode.)

Mixing 5.1 down to HRTF is surround sound, not 3D sound. Unless I'm missing something, 5.1 does not account for sounds on the Z axis, so there are losses in fidelity there. I would say audio adds a huge immersion benefit; our response to audio stimulus is much faster than to visual. I realize marketing says visuals sell more, but I'd love a game with 3D spatial audio and environmental processing. Head tracking for audio hasn't been implemented outside of VR, and it's also something I'd like to see in our games.

So if we were to add an enormous amount of texture swapping, the PCIe bus would be able to keep up? And yes, I'm talking about increasing the texture resolution of up-close objects. Obviously objects more than 20 feet away look fine, but crawling in grass or walking in corridors, the textures look ass. Of course the weapon texture model looks great, but then you look at the other characters and they look much lower resolution. And then there's the fact that it's flat; fauna has volume.

Then there's the fact that textures are unnatural and repeat... Our eyes are very good at recognizing patterns, and the game world is, by design, very inorganic. We notice the repeating grass polygons and textures, and that the walls are all the same. If you've ever seen an ancient building, you wouldn't expect all its stones to be perfectly in place, or the walls to be at perfect 90-degree angles. Perhaps use those AI cores to insert imperfections.
You're probably much better served by discussing this in the Unity forum and not cluttering this thread. I love game design, don't get me wrong, but it feels tangential at best to the discussion at hand.
 

Biowarejak

Maker of Awesome | User 1615
Platinum Supporter
Mar 6, 2017
1,744
2,262
Definitely. The tech is incredible, there's just certain aspects that don't market well.
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
On Jay's video -- if it's true that Tomb Raider only gets 30 FPS with RT on at 1080p... Wow. So your choice is a 20% improvement with RT off, or 30-40 FPS at 1080p with RT on. For $1200... Who spends $1200 on a graphics card and then plays at 1080p?

And I disagree on the 2080 Ti being the new Titan equivalent -- the Titan was an early-adopter product for rich people to brag about. They're now making that the mainstream. There won't be a 2080 Ti Ti that offers the same performance for $700-800. The other reasons for the simultaneous launch are 1) the 2080 won't outperform the 1080 Ti in non-RT tasks, and 2) this generation will be short-lived, so they wanna cash in on 10 years of R&D while they can. And I get it. But from a consumer point of view, if you care about value you should wait for 7nm, RT maturity, and competition from AMD/Intel.

Edit: I don't fully buy that slide; I'm waiting for 3rd-party leaks. Even if it's true, you're seeing a 30-40% improvement in most titles (not counting DLSS), which is still underwhelming. The 1080 was ~70% faster than the 980.
 
Last edited:

VegetableStu

Shrink Ray Wielder
Original poster
Aug 18, 2016
1,949
2,619


The darker bar is a bit more than what I was expecting for traditional performance ._. although I'd like to see comparison images for upsampled and DXR/RTX,
along with more review data points as well.
 

TheHig

King of Cable Management
Oct 13, 2016
951
1,171
4K.
Global high settings.
Pick a title.
1080ti vs 2080ti.
What are the FPS?

RTX is new. Bleeding edge and not worth the cost of entry if the above comparisons aren’t extremely favorable for the 2080ti.

Don’t preorder a 60+ dollar game.
Don’t preorder a 1200 dollar gpu with zero real world performance data.

My 2c
 

ChinStrap

Cable-Tie Ninja
Sep 13, 2017
203
179
Mr. Huang is a smart guy; he knows people think FPS is king. Technology or not, showing Tomb Raider @ 30 FPS will draw attention. That's exactly what he wants. He knows that if you're talking about NV, good or bad, you're not talking about AMD/Intel. The amount of press/videos/clicks this garbage is generating is pretty impressive. I normally don't read much tech news on my phone, and I have 'RTX 20XX' stories popping up in my Google Now news feed.

That’s exactly what Mr. Huang wants.


Personally, the bang-for-the-buck deals you can get right now are pretty nice: 1080s for $439 and 1070s for $296. I think (and as others have said here) the 20X0 series might not be very ITX friendly.
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
By the way, I think we can say goodbye to the dream of an ITX-sized 2080, given the TDP. Even a 2070 is unlikely, though Gigabyte could probably adapt their current design. And obviously the 2080 Ti is out of the question.
 

QuantumBraced

Master of Cramming
Mar 9, 2017
507
358
I don't really consider Zotac's Mini cards ITX. I appreciate the effort, but they don't fit in any case that is made for short cards. Still, kudos to them for advancing SFF.
 

CC Ricers

Shrink Ray Wielder
Bronze Supporter
Nov 1, 2015
2,234
2,557
These RTX cards may not be the panacea for gamers at this moment, and the Founders Edition cards are priced higher as usual, but you know what, I really do like the design of these Founders cards. The wrap-around aluminum shell looks very nice, and if the fans are as quiet as they claim to be during gaming (not sure how with a higher TDP, must be the different fan design?), I'd be okay with more silence for the performance.

The 2070 FE is also 9 inches long, which, while not really ITX-sized, is 1.5 inches shorter than the reference design and could still fit some more compact spaces. Realistically, if I were to get an RTX card it would be the 2070, for its lower cost and potential for smaller size.
 

tinyitx

Shrink Ray Wielder
Jan 25, 2018
2,279
2,338
I think the 2070 has high potential for ITX-size variants. But I notice one thing: the 8-pin power connector is positioned not on the top of the card but at the end, while the 2080 and 2080 Ti have their power connectors in the more traditional top position.
I wonder if this FE design will be carried over to AIB cards.
 
  • Like
Reactions: loader963

loader963

King of Cable Management
Jan 21, 2017
664
569
I think the 2070 has high potential for ITX-size variants. But I notice one thing: the 8-pin power connector is positioned not on the top of the card but at the end, while the 2080 and 2080 Ti have their power connectors in the more traditional top position.
I wonder if this FE design will be carried over to AIB cards.

I totally overlooked that. In a lot of larger cases, I would actually love that. EVGA even made a connector that would do that with reference cards.

It would utterly ruin my current build, however. I think most of our SFF cases, and even a few mainstream ITX cases, would have a problem with it.
 

CC Ricers

Shrink Ray Wielder
Bronze Supporter
Nov 1, 2015
2,234
2,557
My guess is that for the 2070, the 8-pin connector is at the end of the card because, being shorter, it has an extra inch and a half of empty room at the end that a normal reference-length card would take up.
 

rfarmer

Spatial Philosopher
Jul 7, 2017
2,669
2,793
4K.
Global high settings.
Pick a title.
1080ti vs 2080ti.
What are the FPS?

RTX is new. Bleeding edge and not worth the cost of entry if the above comparisons aren’t extremely favorable for the 2080ti.

Don’t preorder a 60+ dollar game.
Don’t preorder a 1200 dollar gpu with zero real world performance data.

My 2c

This is what I am waiting for too: show me performance for games I actually play, at settings I am willing to dish out $1200 for. I can't imagine anyone purchasing a 2080 Ti is gaming at less than 1440p; most will be at 4K.
 
  • Like
Reactions: TheHig