Stalled Hutzy XS — Ultra Compact Gaming Case (<4L)

Hahutzy

Airflow Optimizer
Original poster
Sep 9, 2015
252
187
Hey guys, it's been a slow couple of weeks as I'm still waiting for PCIe cables to come in so I can test.

A little design update, mostly done to improve the usability for end-users:

- Removed as many of the 3D-printed parts as possible. They were good, but felt too "bits and pieces". Instead, I'm trying to make the middle board one piece.
- Moved the PCIe slot lower so that taller cards will be compatible in the future. The exact height is to be determined after I test the PCIe ribbons coming in.
- Moved the power button up so it no longer interlocks with the middle board.
- Added more vent holes to the back of the chassis to help push air out the back.

 

Hahutzy

Airflow Optimizer
Original poster
Sep 9, 2015
252
187
My guess would be about 200mm: 170mm for the board and then some for the cable to bend.

Keeping it at 200mm or below is generally a good idea.

I don't think 200mm is enough; I tried to estimate it before.

You need 170mm for the board, 5-10mm for the female side, and then something like 40mm+ for the male side, because you have to account for the extra height needed to raise the male connector over the motherboard before pushing it down into the slot.

The minimum is somewhere between 200 and 250mm AFAIK, but I didn't estimate further because no one makes a cable that's 230mm long, for example.
 

hardcore_gamer

electronbender
Aug 10, 2016
151
125
I don't think 200mm is enough; I tried to estimate it before.

You need 170mm for the board, 5-10mm for the female side, and then something like 40mm+ for the male side, because you have to account for the extra height needed to raise the male connector over the motherboard before pushing it down into the slot.

The minimum is somewhere between 200 and 250mm AFAIK, but I didn't estimate further because no one makes a cable that's 230mm long, for example.

Is it possible to route the cable underneath the GPU rather than the motherboard, so that it doesn't have to go all the way around the motherboard tray? 200mm is typically the maximum length without any performance degradation. Since PCIe 3.0 signals at 8 GT/s, you may see a higher BER beyond 200mm.
 

Hahutzy

Airflow Optimizer
Original poster
Sep 9, 2015
252
187
Is it possible to route the cable underneath the GPU rather than the motherboard, so that it doesn't have to go all the way around the motherboard tray? 200mm is typically the maximum length without any performance degradation. Since PCIe 3.0 signals at 8 GT/s, you may see a higher BER beyond 200mm.

Regardless of how you run it in the current layout, you will still have to run the length of the motherboard, because the GPU connector and the motherboard connector are on opposite ends.

Yes, 200mm and under is what a lot of manufacturers have told me. But unless the GPU is moved up in the layout, 200mm doesn't seem achievable, and moving the GPU up would make a lot of GPUs incompatible.

Li-Heat, Lian Li and 3M seem to be the ones that can run a cable longer than 200mm and still retain PCIe 3.0 x16 speeds, so I'm looking into them. But their cables also come with another set of problems specific to back-to-back layouts.
 

hardcore_gamer

electronbender
Aug 10, 2016
151
125
Regardless of how you run it in the current layout, you will still have to run the length of the motherboard, because the GPU connector and the motherboard connector are on opposite ends.

In that case (no pun intended), try to keep it as short as possible (maybe 250mm?). The Li-Heat cable @iFreilicht evaluated for his build seems to be a good option here, as it can protect the signal lines from the CPU bracket and other passive components on the back side of the motherboard.
 

hardcore_gamer

electronbender
Aug 10, 2016
151
125
Could you elaborate on that? What is BER and why would it only happen above 200mm?

For LVDS (low-voltage differential signalling), it is good practice to keep the maximum cable length to around 10 times the wavelength at the upper cutoff frequency. PCIe gen 3 signals at 8 GT/s using 128b/130b-encoded NRZ, with significant spectral content up to roughly 16 GHz. That gives a wavelength of about 18.75mm at the upper cutoff, and 10x that wavelength is about 187.5mm. That's why most manufacturers keep the maximum cable length around 200mm (slightly higher, but a rounded figure).

BER (bit error rate) is a figure of merit used to describe the quality of signal transmission. Signalling errors can happen at any cable length, but longer cables have a higher (exponentially increasing) BER. That said, a well-designed cable can be longer by reducing signal spreading, inter-symbol interference and external noise.

Unfortunately for us, PCIe extenders are a niche market, and few manufacturers provide data sheets or eye patterns for the cables they make. What we can do is test the complete PC setup with and without the extender, compare the FPS results, and look for sudden spikes in rendering latency.
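To make that last step concrete, here's a rough Python sketch of how you might compare two frame-time logs, one captured with the extender and one without. The CSV file names and the one-frame-time-per-line format are just assumptions on my part; adjust for whatever your capture tool actually writes out.

Code:
# Compare two frame-time logs: baseline (no extender) vs. with extender.
# Assumes each CSV has one frame time per line, in milliseconds.
import csv

def load_frame_times(path):
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f) if row]

def summarize(label, ms):
    ms_sorted = sorted(ms)
    avg_fps = 1000.0 / (sum(ms) / len(ms))
    p99 = ms_sorted[int(0.99 * (len(ms_sorted) - 1))]  # 99th-percentile frame time (latency spikes)
    print(f"{label}: {avg_fps:.1f} avg FPS, 99th percentile {p99:.2f} ms, worst {ms_sorted[-1]:.2f} ms")

baseline = load_frame_times("frametimes_no_extender.csv")    # hypothetical file names
extender = load_frame_times("frametimes_with_extender.csv")
summarize("no extender", baseline)
summarize("with extender", extender)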
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
While beyond my nonexistent coding chops to create, would a CUDA/OpenCL program that repeatedly shuttles a known dataset back and forth between system memory and GPU memory, monitoring for errors at either end, be a reasonable way to watch for bit errors?
 
  • Like
Reactions: hardcore_gamer

hardcore_gamer

electronbender
Aug 10, 2016
151
125
While beyond my nonexistent coding chops to create, would a CUDA/OpenCL program that repeatedly shuttles a known dataset back and forth between system memory and GPU memory, monitoring for errors at either end, be a reasonable way to watch for bit errors?

Bit errors in the PCIe physical layer won't make it to memory. Link-level error detection will simply cause a retransmission if a data packet arrives faulty. However, this detect-and-retransmit cycle wastes bandwidth and may cause performance issues.

Errors in memory are mainly soft errors that have nothing to do with cable length. These are caused by particles from stray radioactive materials in the packaging, or by cosmic rays, striking a DRAM cell and flipping a "1" to a "0" or vice versa. These events are relatively rare and not a big issue in PCs. Servers, however, take this seriously and use ECC for both VRAM and main memory.
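For what it's worth, here's a minimal sketch of the loopback test discussed above, in Python with CuPy rather than raw CUDA/OpenCL (my choice, purely for brevity). On a working link the data should always compare equal, because bad packets get retransmitted; the number to watch is the effective transfer bandwidth, measured with and without the extender.

Code:
# Shuttle a known buffer between system RAM and GPU memory and report
# effective transfer bandwidth. Mismatches should stay at zero on a
# functioning link (bad packets are retransmitted); a marginal riser
# shows up as reduced or unstable bandwidth instead.
import time
import numpy as np
import cupy as cp

SIZE_MB = 256
ROUNDS = 20

host = np.random.randint(0, 256, size=SIZE_MB * 1024 * 1024, dtype=np.uint8)

for i in range(ROUNDS):
    start = time.perf_counter()
    dev = cp.asarray(host)      # host -> device copy
    back = cp.asnumpy(dev)      # device -> host copy (synchronizes)
    elapsed = time.perf_counter() - start
    mismatches = int(np.count_nonzero(back != host))
    gbytes = 2 * host.nbytes / 1e9   # data moved in both directions
    print(f"round {i:2d}: {gbytes / elapsed:5.2f} GB/s, mismatches: {mismatches}")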
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
Ah, I'd forgotten PCIe has per-packet CRC, so logging errors that reach memory would only capture multi-bit errors from egregiously bad links. Logging raw bandwidth and watching for drops would be less reliable (vulnerable to other system processes), but it would give a measure of the practical upshot of poor link quality, which is pretty much what we want to measure anyway. Running the test on a pared-down Linux install would mitigate that somewhat, but is probably unnecessary.

::EDIT:: Actually, doing some digging, it looks like Linux is already set up to log AER events directly!
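For anyone who wants to poke at it, here's a tiny Python sketch along those lines. The aer_dev_* sysfs attribute names are an assumption on my part (newer kernels only); on older kernels the AER events just show up in the kernel log via dmesg.

Code:
# Print per-device AER error counters, if the kernel exposes them in sysfs.
# The aer_dev_* attribute names are assumed; on older kernels, check
# `dmesg` for AER messages instead.
import glob
import os

for path in sorted(glob.glob("/sys/bus/pci/devices/*/aer_dev_*")):
    device = os.path.basename(os.path.dirname(path))  # e.g. 0000:01:00.0
    kind = os.path.basename(path)                     # aer_dev_correctable / _nonfatal / _fatal
    with open(path) as f:
        counters = f.read().strip()
    print(f"{device} {kind}:")
    for line in counters.splitlines():
        print(f"    {line}")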

As you say, bit-flip errors are so rare that they are not worth considering for VRAM or for system RAM (until desktop machines start approaching triple-digit GB capacities, anyway).
 
  • Like
Reactions: hardcore_gamer

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
For LVDS (low-voltage differential signalling), it is good practice to keep the maximum cable length to around 10 times the wavelength at the upper cutoff frequency. PCIe gen 3 signals at 8 GT/s using 128b/130b-encoded NRZ, with significant spectral content up to roughly 16 GHz. That gives a wavelength of about 18.75mm at the upper cutoff, and 10x that wavelength is about 187.5mm. That's why most manufacturers keep the maximum cable length around 200mm (slightly higher, but a rounded figure).

Hm, maybe designing a case that relies on a 320mm-long PCIe riser is a bad idea then... That would also explain why the 16cm-long non-shielded HDPLEX riser worked without an issue, but I could not get two of them to run in series.
 

hardcore_gamer

electronbender
Aug 10, 2016
151
125
Hm, maybe designing a case that relies on a 320mm-long PCIe riser is a bad idea then... That would also explain why the 16cm-long non-shielded HDPLEX riser worked without an issue, but I could not get two of them to run in series.

It's not a bad idea as long as it can provide enough bandwidth for the highest-end GPUs supported by the case (GTX 1070, R9 Nano, etc.). Try different cables, run multiple benchmarks/games with and without the extender, and compare the results. If there are no noticeable performance hits, you're on the right track.

Two extenders in series can cause additional issues because of the abrupt change in channel characteristics where they meet, which can cause signal reflections and comb filtering. It's better to stick with just one good extender cable.
 
  • Like
Reactions: EdZ and iFreilicht

hardcore_gamer

electronbender
Aug 10, 2016
151
125
@iFreilicht @Hahutzy

Saw this on a Chinese vendor's website:



This has an 80mm fan. That's the only good news. Unfortunately, the PSU seems to be of very bad quality:

- "300W" in the name, 250W max power, 220W rated power.
- "The continuous max. DC output power shall not exceed 200W", burn-in at "Full load, 2 Hour" (WTF)
- 70% efficiency and 240mV ripple on the 12V line

Well, at least we know that this kind of mechanical configuration is possible.
 

Hahutzy

Airflow Optimizer
Original poster
Sep 9, 2015
252
187
@iFreilicht @Hahutzy

Saw this on a Chinese vendor's website:



This has an 80mm fan. That's the only good news. Unfortunately, the PSU seems to be of very bad quality:

- "300W" in the name, 250W max power, 220W rated power.
- "The continuous max. DC output power shall not exceed 200W", burn-in at "Full load, 2 Hour" (WTF)
- 70% efficiency and 240mV ripple on the 12V line

Well, at least we know that this kind of mechanical configuration is possible.

Also worth noting is that the C14 connector, mounting holes, and dimensions of this power supply do not conform to the FlexATX standard.