
GPU VEGA NANO

Mar 6, 2017
501
454
Made a 250mm long Vega Nano mockup.


It's no more than a modified Fury X model, but assuming the Vega Nano has the same dimensions, this should be very doable. Of course, it uses a Vega 56 chip and I put 4x 8-pin connectors on it for a dual Vega 64 chip, because why not :p
 

LukeD

Master of Cramming
Case Designer
Jun 29, 2016
501
1,308
Reminds me of this a little ...

 

Boil

SFF Guru
Nov 11, 2015
1,253
1,094
Maybe my fortitude in relation to conducting extended Google searches is weak, but...

Does anyone have the dimensions of the RX Vega 64 Liquid Cooled card...?

If it is the same shroud as the RX 480 Reference (as has been mentioned in reports on the appearance of the RX Vega 64 Reference model), then am I right that it is an inch shorter than the Nvidia GTX reference cards: 9.5" versus 10.5"...?
 
Mar 6, 2017
501
454
Maybe my fortitude in relation to conducting extended Google searches is weak, but...

Does anyone have the dimensions of the RX Vega 64 Liquid Cooled card...?

If it is the same shroud as the RX 480 Reference (as has been mentioned in reports on the appearance of the RX Vega 64 Reference model), then am I right that it is an inch shorter than the Nvidia GTX reference cards: 9.5" versus 10.5"...?

Liquid is 282mm, air is 279mm. Not sure what that 3mm difference is for.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Perhaps the GDDR5X and GDDR6 memory makers (Micron) don't want to let go, either. Let's just hope HBM becomes mainstream and cheap from now on.
As HBM has been increasing bandwidth per pin, so has GDDR. Currently, you need to be packing 3 or more HBM stacks onto your interposer before you actually have more bandwidth available than a GDDR array. Any fewer, and you're still paying the premium for more expensive die stacks and fabbing an interposer, but not gaining any performance.
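
As a rough sanity check on that crossover claim, here's a minimal Python sketch of the bandwidth arithmetic, assuming ballpark era figures (a Vega-class HBM2 stack at ~1.9 Gb/s per pin on a 1024-bit interface, versus a 1080 Ti-class 352-bit GDDR5X array at 11 Gb/s); the exact rates are assumptions, not product specs:

```python
# Peak bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8 -> GB/s.
def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    return bus_width_bits * pin_rate_gbps / 8

hbm2_stack   = bandwidth_gbs(1024, 1.9)   # one HBM2 stack, ~243 GB/s
gddr5x_array = bandwidth_gbs(352, 11.0)   # wide GDDR5X array, ~484 GB/s

for stacks in (1, 2, 3):
    total = stacks * hbm2_stack
    verdict = "ahead of" if total > gddr5x_array else "behind"
    print(f"{stacks} HBM2 stack(s): {total:.0f} GB/s "
          f"({verdict} a {gddr5x_array:.0f} GB/s GDDR5X array)")
```

With these numbers two stacks only just tie the widest GDDR5X array, which lines up with the "3 or more stacks" rule of thumb above.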
In terms of power consumption we can't really do an apples-to-apples comparison (Vega and Fury are far behind Pascal in perf/watt, but we don't know how much better/worse that situation would be without a non-HBM Vega to test).
In terms of PCB area, the advantage is present but not huge. Compare the Vega Nano mockup to the 1080 Mini:

(Alignment done visually using the PCIe card-edge and retention bracket screw. Don't use it to design a case!)
The height increases by ~7mm (though unlike the Fury Nano, the Vega Nano has lost the end-positioned PEG connector), and the length by ~20mm, but the 1080 Mini is still the same length PCB as an ITX board. The Nano's cooler is smaller, but also rated to dissipate 30W less heat (150W vs. 180W). Depending on how well Vega scales down in performance as it's undervolted and downclocked (and by how much the finalised Vega Nano is cut down on-die), it may be that a more realistic comparison is a 1070, 1060 or even 1050 Ti. The 1050 Ti is available in a half-height form factor, though with only 4GB of RAM (at those lower performance levels it may not make a practical difference), but I don't recall there being any 1070s or 1060s all that much smaller than the 1080 Mini (the stock 1060 is slightly shorter, the PCB being flush with the top of the bracket, but that's about it).
 

darksidecookie

SFF Lingo Aficionado
Feb 1, 2016
115
141
As HBM has been increasing bandwidth per pin, so has GDDR. Currently, you need to be packing 3 or more HBM stacks onto your interposer before you actually have more bandwidth available than a GDDR array. Any fewer, and you're still paying the premium for more expensive die stacks and fabbing an interposer, but not gaining any performance.
In terms of power consumption we can't really do an apples-to-apples comparison (Vega and Fury are far behind Pascal in perf/watt, but we don't know how much better/worse that situation would be without a non-HBM Vega to test).
In terms of PCB area, the advantage is present but not huge. Compare the Vega Nano mockup to the 1080 Mini:

(Alignment done visually using the PCIe card-edge and retention bracket screw. Don't use it to design a case!)
The height increases by ~7mm (though unlike the Fury Nano, the Vega Nano has lost the end-positioned PEG connector), and the length by ~20mm, but the 1080 Mini is still the same length PCB as an ITX board. The Nano's cooler is smaller, but also rated to dissipate 30W less heat (150W vs. 180W). Depending on how well Vega scales down in performance as it's undervolted and downclocked (and by how much the finalised Vega Nano is cut down on-die), it may be that a more realistic comparison is a 1070, 1060 or even 1050 Ti. The 1050 Ti is available in a half-height form factor, though with only 4GB of RAM (at those lower performance levels it may not make a practical difference), but I don't recall there being any 1070s or 1060s all that much smaller than the 1080 Mini (the stock 1060 is slightly shorter, the PCB being flush with the top of the bracket, but that's about it).
For power consumption you could take the Nvidia Quadro GP100 (235W), which uses HBM2 memory, and compare it against the GTX 1080 Ti (250W), which has the same number of CUDA cores BUT only 11GB of GDDR5X memory. There might be other differences that contribute to the increased power draw, though, so it's still no apples-to-apples comparison.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
For power consumption you could take the Nvidia Quadro GP100 (235W), which uses HBM2 memory, and compare it against the GTX 1080 Ti (250W), which has the same number of CUDA cores BUT only 11GB of GDDR5X memory. There might be other differences that contribute to the increased power draw, though, so it's still no apples-to-apples comparison.
The PCIe Quadro GP100 has a boost clock of 1480MHz, while the 1080 Ti has a boost of 1582MHz. 235W:250W is roughly the same ratio as 1480MHz:1582MHz (1.064 and 1.069), so that could cover the whole power delta. It's still not quite a perfect comparison: the 1080 Ti will clock even higher when it has the headroom, and the GP100 has a bunch of extra die area for FP64.
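
For anyone who wants to reproduce that arithmetic, a quick Python sketch using the figures quoted above:

```python
# Board powers and boost clocks as quoted in the post above.
power_ratio = 250 / 235      # GTX 1080 Ti vs Quadro GP100 board power, ~1.064
clock_ratio = 1582 / 1480    # GTX 1080 Ti vs Quadro GP100 boost clock, ~1.069

print(f"power ratio: {power_ratio:.3f}, clock ratio: {clock_ratio:.3f}")
# The two ratios nearly coincide, so the clock difference alone could
# plausibly account for the 15W delta, leaving no clean GDDR5X-vs-HBM2 signal.
```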
 
Mar 6, 2017
501
454
Made a 250mm long Vega Nano mockup.


It's no more than a modified Fury X model, but assuming the Vega Nano has the same dimensions, this should be very doable. Of course, it uses a Vega 56 chip and I put 4x 8-pin connectors on it for a dual Vega 64 chip, because why not :p

I added a waterblock concept, put some USB-C on it and some ventilation on the slot cover.

I'm getting excited for a card that might never exist :/
 

Boil

SFF Guru
Nov 11, 2015
1,253
1,094
There should be six (6) Mini DisplayPorts, like on the Radeon Pro series of workstation cards...

Man, I want a Radeon Pro SSG...! (but lack a 'spare' 7 grand)

Maybe EKWB will make a water block for that at some point...
 

darksidecookie

SFF Lingo Aficionado
Feb 1, 2016
115
141
In a recent LTT video ("AMD let me help build their server", or something like that, on Floatplane) Linus mentioned that unlike Nvidia Quadros, which require NVLink to share VRAM, the new Vega Instinct can share and access VRAM simply over PCIe. So would it be possible that a simple PCIe SSD would be all that is required, as opposed to an on-card M.2 solution?
 

VegetableStu

Shrink Ray Wielder
Aug 18, 2016
1,949
2,619
I believe the SSD portion is an NVMe M.2 device (or two, speculation)...?
I meant: if a full-cover block also cools the M.2 drives, would it (most probably) restrict access to the SSDs because the waterblock is in the way?
So would it be possible that a simple PCIe SSD would be all that is required, as opposed to an on-card M.2 solution?
I remember seeing a pic of a prototype with Samsung Pros in them, but apparently Google lost the image ._.
 

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
In a recent LTT video ("AMD let me help build their server", or something like that, on Floatplane) Linus mentioned that unlike Nvidia Quadros, which require NVLink to share VRAM, the new Vega Instinct can share and access VRAM simply over PCIe. So would it be possible that a simple PCIe SSD would be all that is required, as opposed to an on-card M.2 solution?
AMD is a strong proponent of what they call a Unified Memory Architecture, where the CPUs and GPUs all share memory addresses, so any of them can access the memory attached to other devices (rather than the traditional approach, where blocks are copied from one memory space to another). Basically it's like how multi-socket CPUs can access each other's memory, but extended to other types of processors, so it doesn't make sense for them to restrict that access behind some sort of proprietary protocol.

Anyways. GPUs used for rendering nowadays do draw from the system memory and storage pools. The point of putting the SSDs on the card is that it bypasses the system bus and the CPU. It removes both the latency of requesting access to a remote device, and any bandwidth limitations from trying to get data from the CPU as well as a storage device over the same PCIe lanes at the same time.

Current systems do already have fast storage SSDs, but there's still a bottleneck that isn't there when you directly attach them to the GPU.
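
To put a number on that bottleneck, here's a toy Python model of the two data paths; every latency and bandwidth figure is an illustrative assumption, not a measurement of the SSG:

```python
def transfer_ms(size_gb, bandwidth_gbs, latency_us):
    # Fixed setup latency plus streaming time for the payload.
    return latency_us / 1000 + size_gb / bandwidth_gbs * 1000

size_gb = 4.0  # hypothetical asset working set

# Routed path: NVMe SSD -> system RAM (via the CPU) -> GPU VRAM,
# with the bulk copy sharing the GPU's PCIe 3.0 x16 link (~12 GB/s).
routed = transfer_ms(size_gb, 3.5, 100) + transfer_ms(size_gb, 12.0, 20)

# Direct path: the GPU reads its on-card SSD in one hop, no CPU round trip.
direct = transfer_ms(size_gb, 3.5, 10)

print(f"routed: {routed:.0f} ms, direct-attached: {direct:.0f} ms")
```

In this crude model the win comes from skipping the second copy over the shared PCIe link; for lots of small reads, cutting the per-request latency matters even more.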