GPU NVIDIA GeForce RTX 2080 Ti review, SFF style.

3lfk1ng

King of Cable Management
Original poster
SFFn Staff
Bronze Supporter
Jun 3, 2016
906
1,713
www.reihengaming.com
Another long but thorough read: we review the NVIDIA GeForce RTX 2080 Ti.

Note: This is going to be an ongoing review, a first for SFFn, and it will see several updates throughout the following months. Updates will be appended to the article and also in this thread.

Price aside, the GeForce RTX 2080 Ti is definitely a terrific performer in today's titles; there is no denying that. However, we're nearly two months past its launch and still left wondering what the card is actually capable of.

Read the review here.
 

3lfk1ng

King of Cable Management
Original poster
SFFn Staff
Bronze Supporter
Jun 3, 2016
906
1,713
www.reihengaming.com
They are RGB indeed but the internal controller has the color set to green and green ONLY. I've not found a way to change them yet. I think JayzTwoCents discovered that as well.

Corsair, ASUS, and EVGA tools do not recognize them either and I didn't find any visible way to access any header that could connect to an RGB controller.
 
  • Like
Reactions: Windfall

el01

King of Cable Management
Jun 4, 2018
770
588
I think the 001 in the fan name refers to...erm... revision 001??

The card looks subjectively ugly to me though. I'm more of a hard-edged, low-poly person, if you know what I mean.
 

VegetableStu

Shrink Ray Wielder
Aug 18, 2016
1,949
2,619
They are RGB indeed but the internal controller has the color set to green and green ONLY. I've not found a way to change them yet. I think JayzTwoCents discovered that as well.
Jay brought the FE shroud over to an EVGA reference PCB and the RGB works o_o
 

rfarmer

Spatial Philosopher
Jul 7, 2017
2,588
2,702
Nice review, wish the price was more reasonable.

You state that the Intel CPUs only have 16 PCIe lanes, so when used with an M.2 PCIe 3.0 x4 SSD you only have x8 available for the GPU. While it is true the CPU only has 16 lanes, the Z370 and Z390 motherboards have an additional 24 PCIe lanes.

I have an M.2 PCIe 3.0 x4 SSD and still have the full x16 for my GPU.

 

TeutonJon78

Average Stuffer
Jun 7, 2018
87
37
How can Nvidia make its top-of-the-line GPU in a perfectly good-sized frame, but all the OEMs need to make these giant monstrosities?
 

3lfk1ng

King of Cable Management
Original poster
SFFn Staff
Bronze Supporter
Jun 3, 2016
906
1,713
www.reihengaming.com
You state that the Intel CPUs only have 16 PCIe lanes, so when used with an M.2 PCIe 3.0 x4 SSD you only have x8 available for the GPU. While it is true the CPU only has 16 lanes, the Z370 and Z390 motherboards have an additional 24 PCIe lanes.

We removed that bit to look further into the matter. It was more in-depth when the review went live.

However, @||| has informed us that PCIe lane allocation is not as cut-and-dried as it seems, since the circuitry can vary from one motherboard manufacturer to another, regardless of the chipset. Our intention was to exclude the screwiness that comes with PCIe switches on some ATX boards, but it turns out that the issue is far more complicated than I could have imagined. While the issue hasn't been as prevalent since the Z97 and Z170 era, the board that I am on, a 2018 model, has a second M.2 slot that shares lanes with the only PCIe x16 slot, dropping it to x8 when the M.2 is in use.

We're discussing the possibility of crowd-sourcing data to see which other motherboards might have this 'issue', and we would like to cover it in future motherboard reviews alongside PCIe bifurcation.

Either way, there is enough hardware out there that can drop down to x8, and thankfully it would only impact those with a 2080 Ti.

How can Nvidia make its top-of-the-line GPU in a perfectly good-sized frame, but all the OEMs need to make these giant monstrosities?

I wasn't willing to take mine apart, but the FE is much smaller due to the use of a vapor chamber HSF. The technology is nothing new, but it's still more difficult and costly to manufacture. It's not exclusive to the FE either; some of the other OEMs, like GALAX, have chosen to employ it as well.

Excellent review, thank you.

Thank you, we will be improving on it as we go so stay tuned :)
 

rfarmer

Spatial Philosopher
Jul 7, 2017
2,588
2,702
Thanks for the response @3lfk1ng. I can well believe different boards behave differently. My Gigabyte Z170 was only giving me x8 for the GPU even though the chipset has additional lanes; I am just glad it is working properly on my Z370.
 

annasoh323

Master of Cramming
Apr 4, 2018
424
314
Excellently thorough review that captures (IMO) the right balance of data and interpretation/analysis. It seems like everyone is in the same boat as far as DLSS and RT go, so not holding my breath for seeing those parts of the article filled in anytime soon. Sounds like it's a good time for Club Pascal to lay low and see what the future may hold.

<mumbling> well, if I shouldn't upgrade, maybe I can find a different excuse to build a new machine altogether... SFF is a terrifying drug.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
A few nit-picks:
This black shield in the center of the card is the false cover that needs to be popped off should you ever wish to take the GPU apart.
The adhesive-attached front plate only needs to be removed to disassemble the cooler into its individual components. To remove the cooler entirely from the card (e.g. for watercooling) you only need to remove the screws in the backplate and the whole thing comes off, same as with the previous FE and reference cards.
overclocking
From Derbauer's testing, the RTX cards respond better to overclocked memory and stock GPU core clocks than to a GPU core overclock and stock memory: slightly better performance than a max-power-target, 100%-fan OC, but with no noticeable increase in power consumption, so likely a good choice for SFF.
Instead, most everyone prefers the use of M.2 SSDs as they create no clutter and consume no space. As a result of this decision, many users are running their PCIe 3.0 x16 lanes at x8 with the M.2 drive(s) running at x4 (you can confirm this using GPU-Z and looking at the Bus Interface). This causes no measurable impact on performance for previous generation cards like the GeForce GTX 1080 Ti, but with the GeForce RTX 2080 Ti there are some small but measurable losses.
On Intel:
As far as I am aware, no consumer socket (Hx series, AKA LGA 115x) boards feed the m.2 slots with CPU lanes. ALL feed them with lanes from the PCH, which has its own dedicated DMI 3.0 link (similar to PCIe 3.0 x4) to the CPU that is not shared by the CPU PCIe lanes, leaving the x16 PCIe slot unencumbered. This is because Intel's PCIe RAID works with chipset lanes, but not CPU lanes.
Things are a bit different on the 'enthusiast'/HEDT Xx99 platforms (which practically means the two ASRock ITX boards). There, there are plenty of CPU PCIe lanes to feed multiple m.2 slots along with an x16 slot for a GPU, and the CPU supports VROC (RAID on CPU PCIe lanes). However, the ASRock X299 ITX/AC has only two of its m.2 ports connected to CPU lanes. The third is connected to the PCH, so it can be used for an Optane transparent cache.
On AMD:
With Ryzen, a single m.2 slot can be fed by an extra x4 PCIe 3.0 link from the CPU in addition to the x16 PCIe 3.0 link used for a GPU (or other card). Any additional m.2 slots are instead PCIe 2.0 (not 3.0!) and fed from the chipset. The single theoretical exception is the A300/X300 'un-chipsets', where there is no chipset and everything is connected to the CPU directly (as it has a handful of SATA and USB links on board). This in theory leaves the PCIe 3.0 x4 link usually occupied by the chipset free for an m.2 slot, but I have not seen a board do this in practice.
With Threadripper/Epyc, there are plenty of CPU PCIe 3.0 lanes available, but these are a pretty poor choice for gaming in the first place, so rather a moot point.
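If you want to check what your own board is actually doing (beyond GPU-Z on Windows, which the review mentions), the negotiated link width can also be read straight out of sysfs on Linux. A minimal sketch, assuming a standard sysfs layout and a GPU that reports itself as a display-class PCI device; purely illustrative, not a supported tool:

```python
# Minimal sketch (Linux): print negotiated vs. maximum PCIe link width/speed
# for every display-class device, using standard sysfs attributes.
from pathlib import Path

def pcie_link_report():
    for dev in Path("/sys/bus/pci/devices").iterdir():
        if not (dev / "class").read_text().strip().startswith("0x03"):
            continue  # 0x03xxxx = display controller (GPUs)
        if not (dev / "current_link_width").exists():
            continue  # skip devices without PCIe link attributes
        cur = (dev / "current_link_width").read_text().strip()
        mx = (dev / "max_link_width").read_text().strip()
        spd = (dev / "current_link_speed").read_text().strip()
        print(f"{dev.name}: running x{cur} of x{mx} @ {spd}")

if __name__ == "__main__":
    pcie_link_report()
```

Cards drop the link speed (though normally not the width) when idle, so run it with the GPU under load if the reported speed looks suspiciously low.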
Worse yet, if 12nm is replaced by 7nm(Ampere) by the start of 2020, this generation may never truly get a chance to shine.
The chance of 7nm being able to produce large dies (like the Pascal and Turing cards) anytime soon seems pretty low. The physics issues that TSMC face are identical to those Intel are facing with 10nm, and TSMC have likewise only been able to produce similarly small dies at acceptable volumes. 7nm may produce efficient small-die GPUs, but high-end GPUs are going to be sticking to larger processes or commanding very high prices (e.g. the Radeon Instinct MI60, at 330mm^2 around the size of the GTX 1070's GP104, but with a pricing target in the GV100 realm). On top of process scale issues, demand will also provide upward pricing pressure: Apple are eating all of TSMC's limited output, and next in line are AMD for low-volume-high-margin parts (and the departure from the shared-die approach used on Zeppelin, with Epyc getting a separate die from 'Zen 3', likely means those parts will remain on 14nm or move to Samsung's 10nm/8nm, which are of the same feature size as 14nm).
tl;dr a 7nm Ampere may pick up the 'lower end' (2060 on down) but is unlikely to occupy the higher-end range the 2070/2080 do. Nvidia are likely making thin margins (remember their sales are mostly die-to-AIB rather than FE cards) on TU102 unless yields are truly exceptional*; a similar-performance-class GPU (even an unchanged die-shrink) on 7nm would cost more, not less, as cost/transistor has been rising since 28nm.
DLSS is being advertised as a way to make supported games run approximately 40% faster.
How? From what I understand, DLSS appears to be rendering the scene at a lower resolution, upscaling it, and using AI to make the picture closely match the resolution that it’s set to emulate.
In most cases, the quality difference won’t be noticeable, but the supporting title will run at higher framerates than it would at that higher resolution.
In theory, it sounds very compelling: an absolute dream for improved performance in 4K/VR, but there isn’t much information to go off of. Nobody quite knows how this technology works, so we’re left scratching our heads and guessing at this point.
How it works is (relatively) simple: Nvidia uses their big SaturnV Turing-powered supercomputer to render demo runs of a game to two resolutions simultaneously: a render target resolution (e.g. 2560x1440, AKA 'low res') and a 'final' resolution (e.g. 3840x2160, AKA 'high res') with 64x SSAA (the SSAA here is to produce nice alias-free training images for the NN, because aliasing is high-frequency noise that the NN could mistakenly view as a desired output). A NN is then trained on that mountain of frames to go "if you see feature X in 'low res', it should look like the same location in 'high res'". With sufficient variation in training frames (i.e. the demo loop traverses all levels, views every entity from every angle in as many lighting conditions as possible, etc.) you have a NN that takes a locally rendered 'low res' frame and will spit out a 'high res' frame. In short, it's a 'smart' upscaler tuned to a specific game, but tuned by brute force rather than manual tweaking. The massive amount of work done to train the NN (and produce its training datasets) is all done offline only once, to produce a relatively lightweight 'inference' NN that can be run by everyone in real-time.
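To make the 'brute-force-tuned upscaler' idea concrete, here is a deliberately tiny PyTorch sketch of that training pattern. To be clear, nothing here is Nvidia's actual network, data, or scale factor; the model shape, the 2x upscale, and the random tensors standing in for captured frame pairs are all placeholder assumptions, just to show the offline-train / online-infer split:

```python
# Toy illustration of the DLSS-style workflow described above.
# Placeholder model and random data, not Nvidia's pipeline.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Maps a low-res frame to a 2x larger frame (stand-in for the inference NN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Offline phase (done once, on the supercomputer): pairs of (low-res render,
# high-res 64x SSAA render) from the same camera positions. Random tensors here.
for step in range(100):
    low_res = torch.rand(8, 3, 180, 320)   # stand-in crops of the 'low res' frames
    high_res = torch.rand(8, 3, 360, 640)  # matching crops of the SSAA'd 'high res' frames
    loss = loss_fn(model(low_res), high_res)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Online phase (what ships to gamers): one cheap forward pass per rendered frame.
with torch.no_grad():
    upscaled = model(torch.rand(1, 3, 720, 1280))
```

The expensive part (rendering the SSAA targets and training) happens once, offline; what ships to the player is only the small forward pass at the end, which is why it can run per-frame on the Tensor cores.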
At this point, you might be asking, “Why not go wireless?” and the answer is simple, there doesn’t exist a consumer-grade wireless solution that is safe to mount on your head that uses little power to transmit that next level of information.
It's entirely down to there being no system with an acceptable combination of high bandwidth and low end-to-end latency. Safety has nothing whatsoever to do with it, as a: RF is non-ionising, so it causes only surface heating (as would holding a warm mug to your head), and b: the transmitter is across the room, not on your head.
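To put a rough number on the bandwidth side of that (back-of-the-envelope arithmetic, using the original Vive/Rift panel spec as an assumption; none of this is from the review):

```python
# Rough uncompressed video bandwidth for a first-gen PC VR headset
# (2160x1200 combined panel resolution, 90 Hz, 24-bit colour). Illustrative only;
# real links also carry blanking, audio, tracking data and protocol overhead.
width, height, refresh_hz, bits_per_pixel = 2160, 1200, 90, 24
gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"~{gbps:.1f} Gbit/s uncompressed")  # ~5.6 Gbit/s
```

Even a first-generation headset needs several gigabits per second uncompressed, before any latency budget is spent on compression, which is why the wireless kits that do exist run over dedicated 60GHz (WiGig) links rather than ordinary Wi-Fi.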
Overbuilt
Personally this would be a 'pro' too: it's rigid as a board, no 'GPU sag' possible unless your rear PCIe bracket itself is flimsy enough to bend!

----

* This is into unrelated wild-ass-guess territory, but my suspicion is that abnormally high Turing yields early in the development cycle may well be the reason for the rushed release. Nvidia may have been expecting to have only a handful of viable dies to feed the Quadro RTX cards and to later sell off the lower-binned remnants as a 'Titan RTX', but ended up with a glut of quality dies and popped out the GeForce RTX series at short notice. This would explain why the PCBs are Quadro-grade overspecced (not enough time to design newer consumer-grade PCBs, which is also why the RTX 2070 came out with a delay), why the coolers are overbuilt (not enough time to pare the design down for efficient manufacture), why the lower end of the range is absent (Ampere would have lacked RT and Tensor cores entirely), and why RT and DLSS features are only starting to be implemented in engines (the intention was for the Quadro RTX cards to seed development before a GeForce RTX launch in later generations). As for why they would rush them to market? The combination of high GPU pricing that could support an experimental card launch, and a complete lack of competition in the high end, lets them jump the gun on hybrid rendering. It's also a bet that's worked out well for them in the past, after all (hardware T&L, unified shaders).
 

3lfk1ng

King of Cable Management
Original poster
SFFn Staff
Bronze Supporter
Jun 3, 2016
906
1,713
www.reihengaming.com
You'll dig deep into anything, this is why I love you @EdZ. Thank you for the detailed and elaborate reply, much appreciated!

Yes, an additional 600MHz on the memory is as easy as an additional 500MHz was on Pascal. However, I'm not sure if all OEMs are using the same memory and/or if they will continue to once the price drops, so I chose to just perform a basic overclock of the GPU, as that SKU isn't likely to change. To make matters worse, the voltage slider in EVGA's Precision X1 doesn't currently work (it doesn't actually adjust the voltage); in fact, it's been so buggy that it cannot even save settings/presets.

As far as I am aware, no consumer socket (Hx series, AKA LGA 115x) boards feed the m.2 slots with CPU lanes.

According to W1zzard: "installing the RTX 2080 Ti in the topmost x16 slot of your motherboard while sharing half its PCIe bandwidth with another device in the second slot, such as an M.2 PCIe SSD, will come with performance penalties, even if they're small."

So while it's not likely to affect ITX, it might still be worth mentioning(??). TBPH, I'm not really sure how to illustrate this point other than: it might be a thing now that surely won't be a thing once PCIe 4.0 drops.

On AMD, it gets overly confusing, and my original attempt to simplify it wasn't as clear and in-depth as needed.

I could have researched more into the issue but I think it would warrant an entire article by itself.

The chance of 7nm being able to produce large dies (like the Pascal and Turing cards) anytime soon seems pretty low.

Yea, that's why I mentioned Ampere. According to the rumor mill, it's supposed to be their 7nm flagship. I just hope it provides a greater jump than the small Volta-to-Turing hop. I'm guessing that we will learn more in the first half of next year, but I don't think we will see a consumer launch until early 2020.

Safety has nothing whatsoever to do with it

Ehh, I've used Meraki wireless units that are powerful enough to cause strong headaches, so I would like that little wireless quip to remain. I definitely wouldn't want anything stronger mounted on (or pointed towards) my head.

The article has been updated. Thanks again EdZ.
 
  • Like
Reactions: EdZ

Kandirma

Trash Compacter
Sep 13, 2017
54
40
This not only cuts down on dev time but it makes everything look infinitely more realistic

I think this is an important note - it will cut down on dev time in the far-off future when everyone has it. Until then, it is an additional feature to be implemented.

And, at least in competitive games, 'more realistic' lighting is going to be a CON, not a PRO, in many situations, as it will make shadows darker instead of just applying a base level of light everywhere and then darkening select areas. It might help for seeing the shadows of players, but I suspect that in most competitive [fps] games people will be disabling RTX for as long as they can, to prevent shadows/dark areas from being truly dark.
 

Elerek

Cable-Tie Ninja
Jul 17, 2017
228
165
The chance of 7nm being able to produce large dies (like the Pascal and Turing cards) anytime soon seems pretty low. The physics issues that TSMC face are identical to those Intel are facing with 10nm, and TSMC have likewise only been able to produce similarly small dies at acceptable volumes. 7nm may produce efficient small-die GPUs, but high-end GPUs are going to be sticking to larger processes or commanding very high prices (e.g. the Radeon Instinct MI60, at 330mm^2 around the size of the GTX 1070's GP104, but with a pricing target in the GV100 realm). On top of process scale issues, demand will also provide upward pricing pressure: Apple are eating all of TSMC's limited output, and next in line are AMD for low-volume-high-margin parts (and the departure from the shared-die approach used on Zeppelin, with Epyc getting a separate die from 'Zen 3', likely means those parts will remain on 14nm or move to Samsung's 10nm/8nm, which are of the same feature size as 14nm).
tl;dr a 7nm Ampere may pick up the 'lower end' (2060 on down) but is unlikely to occupy the higher-end range the 2070/2080 do. Nvidia are likely making thin margins (remember their sales are mostly die-to-AIB rather than FE cards) on TU102 unless yields are truly exceptional*; a similar-performance-class GPU (even an unchanged die-shrink) on 7nm would cost more, not less, as cost/transistor has been rising since 28nm.

So what are the odds of GPUs moving to a "glued together" format like Ryzen CPUs and Intel's new HEDT CPUs?
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
So what are the odds of GPUs moving to a "glued together" format like Ryzen CPUs and Intel's new HEDT CPUs?
The big limitation of segmented chips is that it puts a massive constraint on bandwidth and latency between the 'chiplets' (even with EMIB, the problem is far worse with through-substrate links). Thinking of a multi-die package like a multi-socket system crammed into a small space gets you into the right state of mind for thinking about how to optimise for that sort of device, and likewise with GPUs you would need to think about how to optimise for multiple discrete GPUs to get decent results out of a segmented GPU. GPUs are also much more sensitive to bandwidth than CPUs, as most graphical operations revolve around loading a big chunk of data, applying a relatively simple transform to all of it, then storing that big chunk of data ready for the next rendering step.
For HPC, this seems a pretty decent way to go. Dangling a bunch of discrete GPUs off of the PCIe bus, or strung across an NVLink mesh, is the standard way to add GPGPU to a HPC cluster, and it works pretty well. Adapting existing techniques for job distribution to account for segmented GPUs is not a large leap for HPC, so it seems a good route forwards for reticle-bustingly large GPUs targeted at HPC like GV100.
For consumer GPUs, I think it would be a pretty bad choice. There, you are basically only ever running one single job at any one time on any number of GPUs you may have (i.e. you are not going to be playing two games at once), which means all the problems that multi-GPU has today would apply to a segmented GPU. Over a decade of multi-GPU effort has pretty much petered out to "we give up" at this point, so developing a segmented GPU for consumer use has two uphill battles to fight: designing an architecture that doesn't get hobbled by internal bandwidth limitations, and convincing the entire development community to reject the last decade and a half of direct experience with multi-GPU not paying dividends with "this time it's different! (for our new architecture only)".
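To hang some rough numbers on the internal-bandwidth constraint (rounded public specs and back-of-the-envelope arithmetic, purely for illustration): the links that would stitch chiplets together are one to two orders of magnitude slower than the local GDDR6 each die would have, which is why the work has to be partitioned so dies rarely touch each other's memory:

```python
# Rough comparison of on-card memory bandwidth vs. the links that would have to
# connect GPU 'chiplets' or discrete GPUs. Figures are rounded public specs.
gddr6_2080ti = 14 * 352 / 8  # 14 Gbps/pin * 352-bit bus ~= 616 GB/s
nvlink_2080ti = 50           # ~50 GB/s per direction over the 2080 Ti's NVLink bridge
pcie3_x16 = 15.8             # ~15.8 GB/s per direction

for name, bw in [("GDDR6 (2080 Ti)", gddr6_2080ti),
                 ("NVLink (2080 Ti)", nvlink_2080ti),
                 ("PCIe 3.0 x16", pcie3_x16)]:
    print(f"{name:18s} {bw:6.1f} GB/s  ({gddr6_2080ti / bw:4.1f}x vs. local GDDR6)")
```

Until an interconnect closes most of that gap, a chiplet GPU has to be treated much like the multi-GPU setups described above, with all the same software headaches.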
 

VegetableStu

Shrink Ray Wielder
Aug 18, 2016
1,949
2,619
And, at least in competitive games, 'more realistic' lighting is going to be a CON, not a PRO, in many situations, as it will make shadows darker instead of just applying a base level of light everywhere and then darkening select areas.
could the engine do a filter pass on the raytracing pass? o_o i mean in-card colour filters are a thing...
 

Kandirma

Trash Compacter
Sep 13, 2017
54
40
could the engine do a filter pass on the raytracing pass? o_o i mean in-card colour filters are a thing...

I see no reason why it couldn't, but it'd kinda defeat the point of having ray tracing.

The only thing I could see being a 'reasonable' solution is just bringing the black level up to a point where it's more of a dark grey.
 

Elerek

Cable-Tie Ninja
Jul 17, 2017
228
165
The big limitation of segmented chips is that it puts a massive constraint on bandwidth and latency between the 'chiplets' (even with EMIB, the problem is far worse with through-substrate links). Thinking of a multi-die package like a multi-socket system crammed into a small space gets you into the right state of mind for thinking about how to optimise for that sort of device, and likewise with GPUs you would need to think about how to optimise for multiple discrete GPUs to get decent results out of a segmented GPU. GPUs are also much more sensitive to bandwidth than CPUs, as most graphical operations revolve around loading a big chunk of data, applying a relatively simple transform to all of it, then storing that big chunk of data ready for the next rendering step.
For HPC, this seems a pretty decent way to go. Dangling a bunch of discrete GPUs off of the PCIe bus, or strung across an NVLink mesh, is the standard way to add GPGPU to a HPC cluster, and it works pretty well. Adapting existing techniques for job distribution to account for segmented GPUs is not a large leap for HPC, so it seems a good route forwards for reticle-bustingly large GPUs targeted at HPC like GV100.
For consumer GPUs, I think it would be a pretty bad choice. There, you are basically only ever running one single job at any one time on any number of GPUs you may have (i.e. you are not going to be playing two games at once), which means all the problems that multi-GPU has today would apply to a segmented GPU. Over a decade of multi-GPU effort has pretty much petered out to "we give up" at this point, so developing a segmented GPU for consumer use has two uphill battles to fight: designing an architecture that doesn't get hobbled by internal bandwidth limitations, and convincing the entire development community to reject the last decade and a half of direct experience with multi-GPU not paying dividends with "this time it's different! (for our new architecture only)".

On the flip side, Nvidia and AMD are still pushing multi-GPU with renewed efforts. If multi-die becomes necessary in the GPU space, it could finally push us to the place where multi-GPU is actually worth it and scales in every game, because every GPU would effectively already be multi-GPU by design.