SFF.Network NVIDIA Set to Announce Pascal GTX GPUs Today

We've seen dribs and drabs of NVIDIA's upcoming Pascal refresh for their consumer graphics cards - from leaks of reference shrouds, to rumored (if not highly questionable) benchmarks. But the most recent slew of data has enough substance to suggest that it's the real deal, and the timing of the leak just before NVIDIA's announcement of a livestream event later today all but confirms that we're about to see the next generation of consumer flagships.

Eagle-eyed folks at videocardz.com caught some benchmarks published without identifiers indicating their GPU of origin (though driver names did spill the beans), and from that they've been able to construct a semi-complete specifications table that compares the rumored GTX 1080 and 1070 against the recently-released Tesla P100...

Read more here.
 

PlayfulPhoenix

Founder of SFF.N
Original poster
SFFLAB
Chimera Industries
Moderator
Gold Supporter
Feb 22, 2015
1,049
1,960
While Nvidia is adamant about comparing the 1080 vs the 980, price-wise it's going to be the 1080 vs the 980 Ti that matters most to me. Simply because I hope most people don't buy a label or a brand, but a product, and as such take performance and price into consideration. Although I see the merit in comparing a previous-generation card to its successor, that previous card has been available for 25% less for about a year.

But see, that's what makes the comparison imbalanced: you're comparing the price of a year-old thing to the price of a day-old thing. As you elaborate:

The 980 also came out at a much higher price than currently, so for all the people with a GTX 980Ti, it might be more interesting to wait out for that price drop or to wait for a possible GTX 1080Ti.

For the sake of tracking generational improvement, you have to "control" for the age of the part and compare day-one prices and speeds. Prices drop on components over time because their useful life is diminished, and that has to be priced into the cost. But that has nothing to do with the generational improvement; that's just market effects.

Consequently, given that the GTX 980 landed at $549, the 980 Ti at $649, and the 1080 at $599, the 1080 is equidistant price-wise from either card, and thus should be considered equally comparable to either. It's not any more comparable to the 980 than to the 980 Ti.

So, if we control for price, we get this:

980: 4610 GFLOPS / $549 = 8.40
980 Ti: 5630 GFLOPS / $649 = 8.67
980 SLI: 8298 GFLOPS / $1098 = 7.56
1080: 8900 GFLOPS / $599 = 14.86

(Prices are MSRP at launch)

As you can see, the 1080 actually is in striking distance of 2x the 980 in perf-per-dollar terms. It's closer still to 2x when you're comparing to 980 SLI, which I'd say is the fairer comparison in this instance. And the 980 Ti wasn't all that much better than the 980, so the 1080 still stands out.
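These divisions are trivial to reproduce; here's a quick Python sketch using only the GFLOPS and launch-MSRP figures quoted in this post (the SLI compute number already bakes in imperfect scaling, as discussed above):

```python
# Rated FP32 compute (GFLOPS) and launch MSRP (USD), as quoted in this post.
cards = {
    "980":     (4610, 549),
    "980 Ti":  (5630, 649),
    "980 SLI": (8298, 1098),  # SLI GFLOPS figure assumes imperfect scaling
    "1080":    (8900, 599),
}

for name, (gflops, price) in cards.items():
    # Perf-per-dollar: GFLOPS per USD at launch pricing
    print(f"{name}: {gflops} GFLOPS / ${price} = {gflops / price:.2f}")
```

Swap in current street prices for the launch MSRPs and the same loop produces the "value" table further down.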

---

HOWEVER... for perf-per-dollar, or "value", comparisons, the method of analysis changes, because you aren't asking what the improvement is; you're asking about the performance of each individual solution in absolute terms. You can then compare those, of course, to see which is the "value" buy, but that's very different from generational improvement (even if it seems the same), because the variable pricing makes the comparison dynamic.

I'm having fun with this, though, so let's take a look at a straight-up value analysis. If we do that (which is to say, take the raw compute performance and divide by the lowest price available right now), you get:

980: 4610 GFLOPS / $440 = 10.48
980 Ti: 5630 GFLOPS / $530 = 10.62
980 SLI: 8298 GFLOPS / $880 = 9.43
1080: 8900 GFLOPS / $599 = 14.86

(Prices are from Newegg and exclude shipping; I just looked for the best deal for a given card)

Much closer! This tells us that, assuming stock clocks, the GTX 1080 today provides:
  • A 42% improvement in perf-per-dollar over the 980
  • A 40% improvement in perf-per-dollar over the 980 Ti
  • A 58% improvement in perf-per-dollar over 980 SLI
In other words, in perf-per-dollar terms, you ain't doubling the 980. These are still pretty big improvements, in my opinion, but they're a lot less dramatic than "double".

Another fun thing we can do is work backwards and figure out what the cost of a particular solution would have to be, to match the perf-per-dollar of the 1080. That looks like this:

980: 4610 GFLOPS / 14.86 = $310
980 Ti: 5630 GFLOPS / 14.86 = $379
980 SLI: 8298 GFLOPS / 14.86 = $558
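Working backwards like that is a single division per card; sketched in Python, with the target computed from the 1080's own figures rather than the rounded divisor:

```python
# Break-even price: what each Maxwell option would have to cost today
# to match the GTX 1080's GFLOPS-per-dollar at its $599 launch MSRP.
target = 8900 / 599  # GTX 1080 perf-per-dollar

for name, gflops in [("980", 4610), ("980 Ti", 5630), ("980 SLI", 8298)]:
    print(f"{name}: ${gflops / target:.0f}")
```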

One final note: this value analysis ignores all the other improvements that NVIDIA is bringing, and deals in raw compute rather than actual gaming performance, so it needs to be contextualized. It ignores the sole 8-pin connector; it ignores whatever technologies won't be backwards compatible; it ignores the fact that Maxwell is older, and that your ceiling for expansion is much lower, and so forth. It ignores memory performance, which has nothing to do with raw compute but would manifest as performance improvements in some games. And it makes some assumptions about scaling. So none of this is precise - I'll probably repeat the calculations once we have broad comparisons from testers - but it should be pretty close.

---

A GTX 980 vs GTX 1080 with 65% speed increase sounds like an awesome upgrade.
A GTX 980Ti vs GTX 1080 with 20% speed increase sounds much less enticing.
Especially since a second-hand GTX 980 Ti seems to go for about two-thirds of the GTX 1080's price.

The actual improvements will be larger. To put exact figures on this, with respect to raw compute:
  • The GTX 1080 is 93% faster than the GTX 980
  • The GTX 1080 is 58% faster than the GTX 980 Ti
  • The GTX 1080 is 7% faster than GTX 980s in SLI
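Those ratios fall straight out of the same GFLOPS figures used earlier; a one-line-per-card check in Python:

```python
# Raw-compute speedup of the GTX 1080 over each Maxwell option
gtx1080 = 8900  # GFLOPS

for name, gflops in [("GTX 980", 4610), ("GTX 980 Ti", 5630), ("980 SLI", 8298)]:
    print(f"1080 vs {name}: {gtx1080 / gflops - 1:.0%} faster")
```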
The gaming benchmarks will almost certainly give us narrower improvements, at least for the single cards, but we'll have to wait for those.

Some Polaris specs have leaked (unconfirmed). As previously thought, they will not be competing with the 1070 and 1080.

The interesting part, though, is that there is rumoured to be a card with a TDP of 50W offering 2.5 TFLOPS of compute performance. That should make for a nice PCIe-powered card, great for SFF PCs.

That's what I'm hoping for! A lot of people will find that incredibly useful, especially when you're talking ~6L and below :)
 
  • Like
Reactions: K888D

Soul_Est

SFF Guru
Moderator
Silver Supporter
Feb 12, 2016
1,480
1,872
I mean, I get what you're saying, it's just that I don't think you can really blame NVIDIA or AMD for this dynamic. They aren't obligated in any way to disclose what they're working on or when it's released, and I don't think anyone would argue as such. Companies are free to sell what they have for as much as they want, and share information as they choose, so long as it's honest. The same goes for people.

Consequently, if it's anyone's "fault" that someone bought a 970 or 980 (or even a Titan X) a few weeks ago, well, the only other entity in that exchange is the buyer. The buyer made the choice to purchase something, and it's the buyer's responsibility to inform their own purchases. If they didn't know an update was coming even after leaks and rumors and everything - all of which a quick Google search would have shown - then that's on them. Just as if I buy a crappy monitor because I didn't spend five minutes reading professional reviews, then that's on me, as well. The information was freely and readily available, but the buyer didn't perform even basic due diligence before spending hundreds and hundreds of dollars.

The other thing, too, is that it isn't as if buying a card just before a refresh is bad in every way, either - if those folks bought those cards three months ago, then hey, they would have had an upgraded experience for months now. That isn't worth nothing! Even if it isn't as ideal as waiting in the long run, they couldn't even argue that they didn't get something they otherwise wouldn't have. So even if I feel bad for them that they didn't make the best purchasing decision they could, I'm not going to feel like they were cheated, or place the responsibility for that on anyone else.



Oh, for sure. I'm sure I'll get a lot of heat for saying this, but AMD's current lineup is one of the most compromised in recent memory - it's a hodgepodge of re-branded old cards and "flagships" with poorly chosen compromises, from 4GB of VRAM to essentially mandatory use of AIOs. And I'd bet that most folks at AMD would agree with that, too, behind closed doors, because they've been saying all along that the future is Polaris and HBM2 and 14nm and everything. The problem is, people are buying cards right now, and on the AMD side we're in the middle of that ugly transitional period.

AMD has paid dearly for that - 3 out of 4 graphics cards sold today are NVIDIA. That's not because 75% of the market is in the tank for NVIDIA. That's because NVIDIA's products are better for most people currently.



Or they wanted to spread out releases across their multi-year timeline (desirable given how hard generational leaps are becoming), thought they'd have more time to respond to NVIDIA, and realized that they didn't. But even then, they're having to ramp up production in half the time, compared to a few days ago. Assuming that this rumor is true, AMD's got their work cut out for them.
I believe another issue here is the actions NVIDIA has taken, similar to Intel's. GameWorks has been damaging not only to AMD, but to NVIDIA as well. They have also pushed their own proprietary technologies instead of open-sourcing them or embracing and extending open-source alternatives.
 

EdZ

Virtual Realist
Gold Supporter
May 11, 2015
1,578
2,107
Looks like the VR improvements for Pascal are the same case as those for Maxwell: they need to be specifically implemented by the game/engine developer to have any effect (rather than being 'automatic' driver-level improvements). Thus far, VR SLI and Multi-Resolution Shading have had an impact of precisely: dick. I'm not expecting this to change in the near future for Pascal's features, much as I'd like otherwise.
 

PlayfulPhoenix

Founder of SFF.N
Original poster
SFFLAB
Chimera Industries
Moderator
Gold Supporter
Feb 22, 2015
1,049
1,960
Looks like the VR improvements for Pascal are the same case as those for Maxwell: they need to be specifically implemented by the game/engine developer to have any effect (rather than being 'automatic' driver-level improvements). Thus far, VR SLI and Multi-Resolution Shading have had an impact of precisely: dick. I'm not expecting this to change in the near future for Pascal's features, much as I'd like otherwise.

I mean, whatever the general performance improvement is, you'll get for VR at a minimum. Just about matching 980 SLI, and handily beating a 980 Ti and Titan, isn't anything to sneeze at.

NVIDIA claimed something like 3x when using the multi-projection stuff (maybe that was just efficiency, though; I forget), but that's easily discounted. I will say that simultaneous multi-projection seems to me to be a more promising technology, since it solves a lot of problems that happen to include VR... But proprietary tech has to prove itself before you can really consider it impactful. More often than not, it isn't.
 

Phuncz

Lord of the Boards
Editorial Staff
Moderator
Gold Supporter
May 9, 2015
5,183
4,519
It's a mighty impressive card, I'm amazed the performance is actually completely there.
If all AMD rumors are true (no high-end card within 8 months), the GTX 1070 will be another home-run for Nvidia.
 

PlayfulPhoenix

Founder of SFF.N
Original poster
SFFLAB
Chimera Industries
Moderator
Gold Supporter
Feb 22, 2015
1,049
1,960
It's a mighty impressive card, I'm amazed the performance is actually completely there.
If all AMD rumors are true (no high-end card within 8 months), the GTX 1070 will be another home-run for Nvidia.

I'm enthusiastic with some reservations, personally. I'll try to do a write-up alongside some analysis to illustrate either tonight or tomorrow.
 

EdZ

Virtual Realist
Gold Supporter
May 11, 2015
1,578
2,107
I mean, whatever the general performance improvement is, you'll get for VR at a minimum. Just about matching 980 SLI, and handily beating a 980 Ti and Titan, isn't anything to sneeze at.
RoadtoVR have done some VR-specific benchmarking of the 1080. With games targeting a GTX 970 level of performance, you generally only see an improvement if you can ramp settings up well above the expected level. Hopefully, as VR becomes more common, developers will start to implement architecture-specific optimisations, and will have the funding available both to optimise for the recommended performance level and to look at higher performance levels.
 

MarcParis

Spatial Philosopher
Apr 1, 2016
3,380
2,387
I'm a bit reserved on this GTX 1080 launch.
Nvidia claimed a lot, and they fell short on several topics:
  • GTX 1080 is faster than an SLI of GTX 980s? False in almost all cases
  • Dirty announcement of a >2x VR performance improvement vs the Maxwell generation (Pascal is just generating one image and rotating it to adjust to each eye, thanks to async compute; Maxwell was generating 2 images, one for each eye)
  • Completely useless presentation of 200fps in DOOM running Vulkan at... 1080p ultra... where a GTX 970 already reaches 110fps in OpenGL
  • Announcement of overclocks above 2GHz... however, unless you force fan speed to 100% all the time, it will quickly throttle down to 1.6GHz..
    However, for watercooling it's a great option!..:)
  • At 4K in DirectX 12, Nvidia closed the gap and beat the AMD Fury X... by a small margin... not impressive

Clearly the GTX 1080 does not really impress me, because it only represents half the potential of Pascal; the full GPU will use HBM2 (a future Pascal Titan X??..:))

Nvidia is applying the same strategy as with Maxwell 2... the GTX 1080 is the mid-range of what is achievable with Pascal. Nvidia is waiting for AMD's answer to release a GTX 1080 Ti or even a Titan..:)

So I'm eager to see Vega and Pascal with HBM2..:)
 

IntoxicatedPuma

Customizer of Titles
Editorial Staff
Silver Supporter
Feb 26, 2016
984
1,248
Some rumors came out at the beginning of the month about a GTX 1060 Ti being released - a further cut-down GP104 with a 192-bit bus. Since I don't have a 4K monitor, I'd be interested in something like this. Nvidia has a huge gap between the GTX 960 and GTX 1070 right now - in both performance and price (once the 1070 releases). A 1060 Ti would be a nice way to fill it - and if the new R9 480 is going to land somewhere around R9 390X performance, Nvidia will need something to compete, as the GTX 960 can't cut it.
 

EdZ

Virtual Realist
Gold Supporter
May 11, 2015
1,578
2,107
Dirty announcement of a >2x VR performance improvement vs the Maxwell generation (Pascal is just generating one image and rotating it to adjust to each eye, thanks to async compute; Maxwell was generating 2 images, one for each eye)
Wrong. Both generate two unique images, one per eye. What Nvidia is claiming as a speedup is the ability to draw both eyes with a single set of draw calls rather than a pair of draw calls. This is also a speedup that can be applied in software (and has been implemented experimentally in both Unreal and Unity, under the term Instanced Stereo Rendering).
Announcement of overclocks above 2GHz... however, unless you force fan speed to 100% all the time, it will quickly throttle down to 1.6GHz..

However, for watercooling it's a great option!..:)
Requiring higher fan speeds or alternate cooling for overclocking is expected: that's how things normally work. Expecting massive overclocks on the stock cooler with the stock fan profile is having your cake and eating it too. It does show that the 1.6GHz boost clock spec is achieved at steady state with the stock cooling, rather than quoting the short-term cold-card figure as the boost clock (as occurred with the R9 290X prior to a BIOS revision that raised fan speeds so retail cards could achieve the specified clock speeds in practice).
At 4K in DirectX 12, Nvidia closed the gap and beat the AMD Fury X... by a small margin... not impressive
In which game or games? DX12 is implemented as a follow-on to Mantle in Ashes of the Singularity, and in barely-usable preliminary implementations in other games.
 

MarcParis

Spatial Philosopher
Apr 1, 2016
3,380
2,387
For the VR announcement, I'm mainly criticizing the confusion about the performance increase compared to the Maxwell 2 generation (and you are right to correct me on my short explanation..:))

For OC, nope, I'm insisting on the bad communication. During the official GTX 1080 presentation they INSIST that with the Founders Edition you can EASILY OC the GTX 1080 above 2GHz... but they forgot to mention the counterpart (very noisy, or a very short OC)... once again it's mainly the communication that I'm criticizing. The OC potential is clearly there... but not with the Founders Edition.

4K with DirectX 12 correctly implemented... you are right, I meant Ashes of the Singularity, but also the latest Hitman...:) As you mention, DirectX 12 is largely based on AMD's Mantle API. And Nvidia's Maxwell 2 cards (without async compute) struggle in DirectX 12 mode. With the GTX 1080, Nvidia closed this gap and slightly beat the Fury X. In DirectX 11 the GTX 1080 is the true king. In DirectX 12, currently, the GTX 1080 is the king, but the prince is just behind..:)
 

PlayfulPhoenix

Founder of SFF.N
Original poster
SFFLAB
Chimera Industries
Moderator
Gold Supporter
Feb 22, 2015
1,049
1,960
I'm a bit reserved on this GTX 1080 launch.
Nvidia claimed a lot and they failed on several topics :

GTX 1080 is faster than an SLI of GTX 980s? False in almost all cases

Eeeeeeh... when they failed, they "failed" by ~10% or less, with some narrow wins. But when they succeeded, it was usually by 80%+, because the game didn't support SLI.

I do think NVIDIA was disingenuous in how they compared the 1080 to 980 SLI, but at the same time, comparing is hard, because by its nature 980 SLI sometimes simply can't compete: lots of games just scale poorly with SLI, or don't support it at all. And those games matter, because people play them!

In other words, I think NVIDIA's blanket statement that "the GTX 1080 meets or beats 980 SLI" is misleading at best, but a statement along the lines of "failed in almost all cases" is equally misleading in the other direction. Neither sentence captures the nuance. The more grounded statement would be something like: "In games where 980 SLI performs well, the GTX 1080 is within ~10%, though generally under; in games where 980 SLI performs poorly, the GTX 1080 usually beats it by a significant margin."

That's a sentence that's going to help people make the right buying decisions. That's not a loaded statement.


Dirty announcement of a >2x VR performance improvement vs the Maxwell generation (Pascal is just generating one image and rotating it to adjust to each eye, thanks to async compute; Maxwell was generating 2 images, one for each eye)

I haven't really read up much on the simultaneous multi-projection stuff, but NVIDIA consistently qualified their statements on VR performance to indicate that they assume use of their technology. I pointed this out following their event. I wasn't happy that they did it, but I don't think it's "dirty", personally, since they also made unqualified statements about general performance, and since they were honest. It's just annoying, because the assumption that their new tech will be used is more often wrong than right.


Announcement of overclocks above 2GHz... however, unless you force fan speed to 100% all the time, it will quickly throttle down to 1.6GHz..

I mean, you can do that overclock - just look at the chart! 100% fan speed isn't ideal, but it's not dishonest. Telling your GPU to run its fan at 100% is trivial, and some folks won't care about noise.

And by the way, this is on their reference cooler with a single fan, which is perhaps the worst case scenario - cards by other manufacturers will be built to do this with more fans, larger fans, beefier heatsinks, and so forth.

Tom's Hardware themselves said as much in the closing paragraph of that section, and added the context you've removed:

The cooling solution’s not bad, but the GeForce GTX 1080 does take a performance hit when it runs into its thermal limit. We’re looking forward to the partners’ solutions, which we expect to both be quieter and provide better cooling.

Furthermore, you've cherry-picked one result out of many. You have to consider all the results across reviewers in the aggregate. I could just as well point to this image from PCPer and say "well look, this thing overclocks like a champ and noise is a non-issue, just check that fan speed".



You'll notice that I've never looked at one single result and done this in the past. That's because it's a stupid thing to do. Don't do it!

And to be clear, I'm not saying that the result you've shared is bogus - I'm saying that including only it among many results, stripped of the context that Tom's themselves felt was important to add, is not a reasonable thing to do. That's more often than not going to be misleading, and it's a lazy way to make a point at best. Lots of places had lots of better results with respect to overclocking. You've chosen to ignore them.


Clearly the GTX 1080 does not really impress me, because it only represents half the potential of Pascal; the full GPU will use HBM2 (a future Pascal Titan X??..:))

If you want to compare a real, tested product against a theoretical product that has no price point, no specifications, no release date, and doesn't exist yet, be my guest :p

Nvidia is applying the same strategy as with Maxwell 2... the GTX 1080 is the mid-range of what is achievable with Pascal. Nvidia is waiting for AMD's answer to release a GTX 1080 Ti or even a Titan..:)

If you're saying that past performance is indicative of future results, I hope you're ready to wait over half a year for a card that's going to cost almost double for a ~20% improvement, and another three and a half months for that card to come down in price as a "1080 Ti" but be essentially equivalent to the 1080 in perf-per-dollar terms.

Given that AMD did a great job of competing against NVIDIA that generation, and is likely to cede the entire top half of the market to NVIDIA for months now, I don't think that's a reasonable assumption, but I'm basically shooting in the dark as much as you are, so who knows.


So I'm eager to see Vega and Pascal with HBM 2..:)

Yes, this, very much. Even the most ardent NVIDIA fans should be just as excited. NVIDIA and AMD are at their best when the two are neck-and-neck.
 
  • Like
Reactions: Vittra

EdZ

Virtual Realist
Gold Supporter
May 11, 2015
1,578
2,107
During the official GTX 1080 presentation they INSIST that with the Founders Edition you can EASILY OC the GTX 1080 above 2GHz... but they forgot to mention the counterpart (very noisy, or a very short OC)... once again it's mainly the communication that I'm criticizing. The OC potential is clearly there... but not with the Founders Edition.
Merely turning up the fan speed certainly seems to fulfil "easily overclocks above 2GHz". No hardware changes, no BIOS changes, no manual voltage tweaking, no manual part binning, etc. Increasing fan speed is pretty damn easy.
4K with DirectX 12 correctly implemented... you are right, I meant Ashes of the Singularity, but also the latest Hitman...:) As you mention, DirectX 12 is largely based on AMD's Mantle API. And Nvidia's Maxwell 2 cards (without async compute) struggle in DirectX 12 mode. With the GTX 1080, Nvidia closed this gap and slightly beat the Fury X. In DirectX 11 the GTX 1080 is the true king. In DirectX 12, currently, the GTX 1080 is the king, but the prince is just behind..:)
I'm not sure it's fair to say Maxwell 2 cards 'struggled' with DX12; they just didn't suddenly gain anything. The GTX 9xx series cards performed nearly identically in DX11 and DX12, whilst the GCN cards benefited from a dramatic performance increase in DX12 (from a huge performance deficit in DX11).

DX12 - and Vulkan - move the work of architecture-specific optimisation from the driver developer (the GPU vendor) to the game/engine developer. It's not an automatic speedup; it just moves the workload from one group of people to another. The hope is that shouldering work previously handled by someone else will let developers further optimise their engine code, with the caveat that developers who lack deep architecture knowledge will be better off with DX11, will see performance regressions with DX12, or will rely on GameWorks and other pre-optimised code provided by GPU vendors.
 

MarcParis

Spatial Philosopher
Apr 1, 2016
3,380
2,387
Well, apparently sharing my opinion hurts people..:)

The GTX 1080 is just a +20% bonus over the current pinnacle card, the GTX Titan X... not that impressive, again... that's my opinion. And I'm just waiting for the REAL Pascal Titan X that will use HBM2... that's just my comment... don't be so sensitive when I dare to moderate the arrival of the GTX 1080...
 

artimaeus

Apprentice
Apr 13, 2016
30
11
Sharing your opinion isn't hurting people. You have a right to share your opinion and they have a right to criticize it. Simple stuff. No harm no foul.