The space inefficiency thread

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
Well, this is not insane for 15 years ago?
I suppose the latency specs and tolerances of PCIe 1.0 were far more lax than what we're used to these days. Oh well. It's not like PCs of that era were stable to begin with, so I can't imagine this making things significantly worse.
 

riba2233

Shrink Ray Wielder
SFF Time
Jan 2, 2019
1,735
2,282
www.sfftime.com
So, what, the latter 8 PCIe lanes on the first x16 slot are routed through this small AIC and then either to the first or second slot depending on which way the thingy is inserted? That is insane. I mean, that would add significant latency to half the PCIe signalling to the first slot. I can't imagine that not causing BSODs left, right and center.

It worked fine, I also had 6600GT SLI ;)
 
  • Like
Reactions: fabio

loader963

King of Cable Management
Jan 21, 2017
662
569
So, what, the latter 8 PCIe lanes on the first x16 slot are routed through this small AIC and then either to the first or second slot depending on which way the thingy is inserted? That is insane. I mean, that would add significant latency to half the PCIe signalling to the first slot. I can't imagine that not causing BSODs left, right and center.


Not really that crazy, and I doubt the latency part. There are and have been motherboards with PLX chips, which are basically automatic switches that can “turn” x16 lanes into x32 lanes using a similar principle. Some people use them to run 4 cards on Intel mainstream rigs with minimal to negligible latency or performance loss. But they are expensive compared to regular motherboards.
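As a rough illustration of the “x16 into x32” trade-off, here is a toy sketch; the per-lane rate and the ~150 ns switch hop below are assumed ballpark figures, not datasheet values:

```python
# Toy model of a PLX-style switch: each downstream slot gets a full-width
# link, but simultaneous traffic shares the single upstream x16 link.
PCIE1_LANE_MBPS = 250    # PCIe 1.x: 2.5 GT/s with 8b/10b encoding -> ~250 MB/s per lane
SWITCH_LATENCY_NS = 150  # assumed per-hop forwarding latency, order of magnitude only

def per_slot_bandwidth_mbps(uplink_lanes: int, active_slots: int) -> float:
    """Aggregate uplink bandwidth split across slots talking at the same time."""
    return uplink_lanes * PCIE1_LANE_MBPS / active_slots

# Two x16 slots behind one x16 uplink: full speed one at a time,
# half each when both cards hit the bus at once.
print(per_slot_bandwidth_mbps(16, 1))  # 4000.0 MB/s
print(per_slot_bandwidth_mbps(16, 2))  # 2000.0 MB/s
```

Each card still trains a full-width link; only aggregate traffic through the uplink is shared, and the extra hop is a few hundred nanoseconds at most, which is why the losses are negligible for GPU workloads.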
 

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
Not really that crazy, and I doubt the latency part. There are and have been motherboards with PLX chips, which are basically automatic switches that can “turn” x16 lanes into x32 lanes using a similar principle. Some people use them to run 4 cards on Intel mainstream rigs with minimal to negligible latency or performance loss. But they are expensive compared to regular motherboards.
That doesn't really apply to what I was saying. PLX chips take a given "bundle" of PCIe lanes from whatever source and use them as an uplink for their own switched lanes, all of which will have the same latency (at least per connected device). My guess from looking at the hardware SLI switch pictured above was that it only switched the latter 8 lanes of the first slot (as those are the only ones that need switching for SLI, and this would drastically simplify routing to the switch), leaving the first 8 directly connected to the first slot. If that was the case, the latter 8 lanes would have significantly longer traces (down past the PCIe slot, down to the switch socket, onto the switch, back off again, up to the PCIe slot) than the former 8 (down to the socket, done) when used as a monolithic x16 block. Of course, the switch might switch all 16 (it certainly looks to have enough pins), but I assumed that would complicate routing too much to be feasible in that position. I might of course be wrong. I don't expect this was a cheap board.
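For scale, a back-of-the-envelope sketch of the skew such a detour would add; both constants are assumptions (typical FR-4 propagation delay, and the commonly cited PCIe 1.x lane-to-lane de-skew budget), not quotes from the spec:

```python
# Rough skew estimate for the "latter 8 lanes take a detour" theory.
FR4_DELAY_PS_PER_INCH = 170  # assumed: microstrip on FR-4 runs roughly 140-180 ps/inch
DESKEW_BUDGET_NS = 20        # assumed: commonly cited PCIe 1.x lane-to-lane tolerance

def detour_skew_ns(extra_trace_inches: float) -> float:
    """Extra flight time for lanes routed down to the switch card and back."""
    return extra_trace_inches * FR4_DELAY_PS_PER_INCH / 1000.0

# Say the round trip to the SLI switch adds ~4 inches of trace:
skew = detour_skew_ns(4.0)
print(f"~{skew:.2f} ns of extra skew, {skew / DESKEW_BUDGET_NS:.1%} of the budget")
# ~0.68 ns: measurable on a scope, but nowhere near enough to break the link.
```

If those figures are anywhere near right, the detour eats only a few percent of the receiver's de-skew allowance, which squares with "it worked fine".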
 

riba2233

Shrink Ray Wielder
SFF Time
Jan 2, 2019
1,735
2,282
www.sfftime.com
That doesn't really apply to what I was saying. PLX chips take a given "bundle" of PCIe lanes from whatever source and use them as an uplink for their own switched lanes, all of which will have the same latency (at least per connected device). My guess from looking at the hardware SLI switch pictured above was that it only switched the latter 8 lanes of the first slot (as those are the only ones that need switching for SLI, and this would drastically simplify routing to the switch), leaving the first 8 directly connected to the first slot. If that was the case, the latter 8 lanes would have significantly longer traces (down past the PCIe slot, down to the switch socket, onto the switch, back off again, up to the PCIe slot) than the former 8 (down to the socket, done) when used as a monolithic x16 block. Of course, the switch might switch all 16 (it certainly looks to have enough pins), but I assumed that would complicate routing too much to be feasible in that position. I might of course be wrong. I don't expect this was a cheap board.

It was a top-of-the-line board :)

The other top-of-the-line board was the DFI LANParty, which had a lot of jumpers for switching PCIe lanes:

 
  • Like
Reactions: Valantar and fabio

Solo

King of Cable Management
Nov 18, 2017
895
1,507
I'm posting this guy's entire PCPartPicker setup because none of it makes any goddamn sense.

 

Stevo_

Master of Cramming
Jul 2, 2015
449
304
I actually think those are the monitor's bezels (look closely at the reflections along the edges). Speakers, maybe?
Yeah, speakers. I have the same HP monitor (there was actually a Beats version and the Harman Kardon one I have; not sure which flavor that one is).
 
  • Like
Reactions: fabio

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225


I hate this so much. I hate everything.
1: Is that a prebuilt? Who puts an ODD in a PC these days?
2: Did they move the GPU to the second x16 slot to make the case look somewhat less ridiculously empty? It didn't work, but I guess it was worth a try.
3: Did someone attack the front fan with a can of neon orange spray paint (the type road workers and similar use), or is that the worst implemented RGB ever?
4: I count 10 PCIe slot covers. So that's a full-tower case just for the heck of it.
5: WHY???????

This just makes me sad.
 
  • Like
Reactions: owliwar