Random Hardware Thoughts

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Hello!

This will be a recurring segment, posted whenever I have an idea, where I discuss random thoughts about PC hardware. Feel free to join in on the discussion! Anything is fair game here: keyboards, monitors, cases, motherboards, and whatever else comes to mind.
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Today's Random Hardware Thought:
Test Benches and why ITX test benches make lots of sense.

I personally think that for non-SLI GPU testing and for storage testing, you really don't need an ATX test bench. ITX can handle a GPU just fine, and M.2/SATA can also be handled. It may be a hassle unplugging SATA/M.2 devices constantly, but the space savings somewhat compensate. Here are my thoughts on various testing scenarios and how ITX could adapt to them:
  • CPU testing: Anything short of X399 works fine, but VRMs are a real concern.
  • Motherboard testing: Anything ITX (duh)
  • Cooler testing: Yeah, you can put a space heater like a 7980XE on ITX, which IS expensive, but it's an option...
  • GPU testing: As long as it's a single card, it works...
  • Storage testing: Just fine, as long as you aren't testing RAID or crazy server setups.
  • PSU testing: Definitely. You can test ATX units as well, by loading them with a power hog such as an overclocked 7980XE and a Vega 64 (rough estimate below).
  • Fan testing: Sure...
  • RAM testing: Yes! Dual channel is fine, but quad-channel will only work with SODIMMs.
However, overclocking testing will suck on ITX boards, due to the somewhat weak VRMs on most of them.
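
To put a rough number on that PSU-testing point, here's a quick back-of-envelope sketch; every wattage in it is an assumed ballpark figure, not a measurement:

```python
# Rough back-of-envelope load estimate for stress-testing an ATX PSU on an ITX bench.
# Every wattage here is an assumed ballpark figure, not a measured value.
components = {
    "i9-7980XE, overclocked": 350,   # stock TDP is 165 W; a heavy OC can roughly double the draw
    "Vega 64, stock power target": 295,
    "Board, RAM, SSD, fans": 60,
}

total_w = sum(components.values())
print(f"Estimated sustained load: ~{total_w} W")
# Add transient spikes and some headroom on top, and this meaningfully exercises an 850 W+ unit.
```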


Just my two cents for today!
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Today's thought:

Why (in my opinion) fully modular power supplies are kinda pointless, and why SFX units should be semi-modular.

Firstly, fully modular ATX. There is really no reason to unplug the 24-pin/8-pin connectors from a power supply (other than for custom cables), so you don't really need the ability to unplug them. Also, fully modular units usually cost a bit more for essentially the same feature set.

Secondly, semi-modular SFX units. Some cases can't fit modular SFX units because of the extra length the modular connectors add. So it would be beneficial to have a semi-modular SFX unit with the attached cables recessed into the chassis of the power supply for slightly better clearance, plus a small modular section with maybe 2 SATA chains and 2 PCIe chains.

Just my two cents.
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
Firstly, fully modular ATX. There is really no reason to unplug the 24-pin/8-pin connectors from a power supply (other than for custom cables), so you don't really need the ability to unplug them. Also, fully modular units usually cost a bit more for essentially the same feature set.

I've had occasion to do this more than once, mostly when replacing a dead PSU with a replacement unit of the same model/pinout in a heavily wired system (one of my servers comes to mind). Also, when doing maintenance it's sometimes convenient to be able to pull the PSU without having to tear down your carefully crafted cable management.

Secondly, semi-modular SFX units. Some cases can't fit modular SFX units because of the extra length the modular connectors add. So it would be beneficial to have a semi-modular SFX unit with the attached cables recessed into the chassis of the power supply for slightly better clearance, plus a small modular section with maybe 2 SATA chains and 2 PCIe chains.

Those cases are becoming fewer and fewer every year.
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
I've had occasion to do this more than once, mostly when replacing a dead PSU with a replacement unit of the same model/pinout in a heavily wired system (one of my servers comes to mind). Also, when doing maintenance it's sometimes convenient to be able to pull the PSU without having to tear down your carefully crafted cable management.



Those cases are becoming fewer and fewer every year.
Thank you for your comments. I will say that in my experience with (admittedly very old) 1U servers, the power supply generally has only one cable, the 20-pin, at least on my Dell PowerEdge 650. I did look at some other servers, and for those I really do agree with your points; especially with the compute card wiring and SATA wiring, I think modular units would be great. I still wish we could return to the days of the easy-to-remove, one-cable PSUs of older models, which help cable management at the cost of breakout boards and power delivery complexity.

The cases I am talking about are the SilverStone SG13 and the Jonsbo C2. A shorter unit with detachable cables would help immensely inside such small cases, and while custom DC-DC solutions could work, some people want a power supply from an established brand in a defined form factor.

New thought, somewhat related:
Nothing against KMPKT, but when beginners in the PC space go onto forums and ask "What brand should I buy PSUs from?", they are generally presented with SeaSonic, EVGA, Corsair, and the like, along with the advice "don't buy from a brand you haven't heard of." Naturally, most beginners will not have researched KMPKT and will likely glance at the name and thumbnail and just skip over it, given how little research consumers tend to do (in my experience) when curious about a product.
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Today, even more of this disaster...

What if there was an X399 ITX board?
Essentially, because of the sheer size of the socket, my vision would have ribbon cables running off the side of the board as breakouts for front-panel USB, PCIe, etc. RAM would also be in a breakout, with SODIMMs in a 2.5"-drive-compatible sled. VRMs would be mounted as daughter cards on the front and rear of the board, like the X299 ITX board from ASRock. Finally, PCIe would also be a breakout.
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
RAM would also be in a breakout, with SODIMMs in a 2.5"-drive-compatible sled.

This won't work. There are limitations on the traces between RAM and CPU: they all have to be equal length, and they have to be very short. Adding additional connectors and cables also introduces reflections and latency, respectively. Whilst there are systems out there with high-speed board-to-board interconnects and the RAM on a daughterboard, they tend to be somewhat niche and cost more than a Tesla Model 3. Also, your RAM might get super toasty shoved into the space of a 2.5" drive bay, even if you use right-angle slots to get around the height mismatch.
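
Rough numbers, to give a sense of scale (the propagation-delay figure is an assumed ballpark for FR-4, not a design value):

```python
# Back-of-envelope on why DDR trace matching is so strict.
# The propagation-delay figure is a rough assumption for FR-4, not a design value.
data_rate = 3200e6                       # DDR4-3200: transfers per second
unit_interval_ps = 1e12 / data_rate      # time per bit on the bus (~313 ps)
prop_delay_ps_per_mm = 6.5               # assumed signal delay on FR-4 (~6-7 ps/mm)

mismatch_mm = 5                          # a 5 mm length difference between two traces
skew_ps = mismatch_mm * prop_delay_ps_per_mm

print(f"Unit interval: {unit_interval_ps:.0f} ps")
print(f"Skew from a {mismatch_mm} mm mismatch: {skew_ps:.1f} ps "
      f"({100 * skew_ps / unit_interval_ps:.0f}% of the bit period)")
```

Every extra connector and run of cable eats into that budget before you even get to reflections.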

Finally, PCIe would also be a breakout.
Whilst this is somewhat doable, all of the above still applies. The best PCIe flexi risers right now use micro-coaxial cable and don't stand up super well to being bent too many times. Also, micro coax is super expensive. You're adding cost to the product and cost to your QA, your return rate goes up, and you then have to stock the part separately so you can sell it to the customers willing to hand over hard-earned cash to replace 'em.

Also, there is the matter of having to do a metric fuckton of user education before anyone will buy your product, which is a lost battle before you even start fighting. It will get panned by reviewers too.
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
This won't work. There are limitations on the traces between RAM and CPU: they all have to be equal length, and they have to be very short. Adding additional connectors and cables also introduces reflections and latency, respectively. Whilst there are systems out there with high-speed board-to-board interconnects and the RAM on a daughterboard, they tend to be somewhat niche and cost more than a Tesla Model 3. Also, your RAM might get super toasty shoved into the space of a 2.5" drive bay, even if you use right-angle slots to get around the height mismatch.


Whilst this is somewhat doable, all of the above still applies. The best PCIe flexi risers right now use micro-coaxial cable and don't stand up super well to being bent too many times. Also, micro coax is super expensive. You're adding cost to the product and cost to your QA, your return rate goes up, and you then have to stock the part separately so you can sell it to the customers willing to hand over hard-earned cash to replace 'em.

Also, there is the matter of having to do a metric fuckton of user education before anyone will buy your product, which is a lost battle before you even start fighting. It will get panned by reviewers too.

I do see the logic in all of these points, but it is just an IDEA, feasible or not. Thank you for explaining how each of these ideas would become an issue in the long run. Sometimes I just get completely lost in whatever fantasies I may have about hardware. I have learned many lessons from your comment, and I will take them into consideration in the future when discussing board concepts.

The RAM idea was doomed from the start, but I feel like PCIe could use a smaller connector (shrunk and positioned vertically) on the side of the board, similar to a GPU's edge connector. My Dell PowerEdge uses a similar mounting system for its PCI riser stack.

Finally, on user education: most people interested in such a niche product would either not bother with it or would educate themselves about the cabling beforehand. For example, the PCIe mount would be foolproof (only one place for it to go), the front-panel breakout ribbon could have a symbol printed on the cable to match a symbol on the motherboard, and other cables could have similar systems.

Reviews would be concerning to me, but I hope that at least some reviewers would see it as an attempt to make an innovative product, practical or not. Someone has got to try it before everyone else, right? Look at the LG Prada. It's a phone not many people know about (it failed miserably), yet it was the first phone with a capacitive touchscreen. Then others like Apple, Samsung, and LG themselves refined the technology until it could be used more widely. Now, I'm not saying my idea is or will be the best thing since sliced bread, but reviewers would most likely see potential in a different way of making a motherboard. As a more PC-related example, look at the X99E-ITX/ac. Some forum posters saw it as a waste and a stupid product, but ASRock learned from their mistakes and debuted the X299E-ITX/ac. They improved cooler compatibility, improved RAM support, and refined the product overall.

I hope that you see that I am not trying to say that you're wrong and I'm right, but that you have some good points and I have some (fewer) good points.
Thanks for the feedback!
-el01
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
Finally, on user education: most people interested in such a niche product would either not bother with it or would educate themselves about the cabling beforehand. For example, the PCIe mount would be foolproof (only one place for it to go), the front-panel breakout ribbon could have a symbol printed on the cable to match a symbol on the motherboard, and other cables could have similar systems.

By user education I mean convincing people the idea is viable at all. Ask any of the case makers here what it takes to educate users. Concepts like "smaller cases mean less air volume to cycle, which means you don't actually need 15 fans". You get looked at like you just discovered fire half the time.

Someone has got to try it before everyone else, right? Look at the LG Prada
The LG Prada was never intended to be a high-volume device; it was intended to be a showcase product. Whilst they were somewhat innovating on the touchscreen (and I use that term super loosely here), by keeping it a lower-volume hero product they were able to skirt around most user-education problems by making it desirable in other ways (the Prada branding).

My point here isn't to shoot you down, it's to explain the realities and limitations of technology as they exist. None of your ideas are new; they have all been done to death in threads over and over again for years. That being said, it wasn't all that long ago that 10Gb Ethernet over copper was an insurmountable problem.
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
By user education I mean convincing people the idea is viable at all. Ask any of the case makers here what it takes to educate users. Concepts like "smaller cases mean less air volume to cycle, which means you don't actually need 15 fans". You get looked at like you just discovered fire half the time.


The LG Prada was never intended to be a high-volume device; it was intended to be a showcase product. Whilst they were somewhat innovating on the touchscreen (and I use that term super loosely here), by keeping it a lower-volume hero product they were able to skirt around most user-education problems by making it desirable in other ways (the Prada branding).

My point here isn't to shoot you down, it's to explain the realities and limitations of technology as they exist. None of your ideas are new; they have all been done to death in threads over and over again for years. That being said, it wasn't all that long ago that 10Gb Ethernet over copper was an insurmountable problem.
Thank you for your words of reason. I will try to consider most of these issues in future discussion. Thanks!
-el01
 
  • Like
Reactions: Biowarejak

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Random motherboard idea of today (two, actually):
Custom ultra-SFF NAS boards, because pre-built NAS solutions are kinda expensive...

Idea #1:
For low-bandwidth users, use something similar to a Raspberry Pi, with around 4 SATA ports on it (wired over USB), plus Ethernet, 1 USB port, HDMI, and DC in.
The SATA could also be a breakout board with the ports soldered on, so you can plug drives into the breakout instead of running SATA power/data cables everywhere.
"Low-bandwidth" comes from the fact that the drives are connected via USB; the rough numbers below show how tight that gets.
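
A rough ballpark (all figures are assumptions, and this assumes a classic Pi where Ethernet also hangs off the single USB 2.0 bus):

```python
# Ballpark throughput for a Pi-style NAS with SATA bridged over USB.
# All figures below are rough assumptions for illustration only.
usb2_real_world = 35      # practical USB 2.0 bulk throughput, MB/s
gigabit_eth = 117         # theoretical GbE payload, MB/s
hdd_sequential = 150      # a typical 3.5" HDD's sequential speed, MB/s
drives = 4

# On a classic Pi, storage AND Ethernet share that one USB 2.0 bus,
# so the whole array is capped well below even a single drive's native speed.
per_drive = usb2_real_world / drives
print(f"Shared USB 2.0 bus: ~{usb2_real_world} MB/s total, "
      f"~{per_drive:.0f} MB/s per drive with all {drives} active")
print(f"Compare: one HDD ~{hdd_sequential} MB/s, gigabit Ethernet ~{gigabit_eth} MB/s")
```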

Idea #2:
An 8x10 cm board with the following:
  • A soldered low-power Intel processor, such as the Atom C3308 (https://www.intel.com/content/www/us/en/products/processors/atom/c-series/c3308.html), chosen specifically for its 6 onboard SATA ports, integrated Ethernet controller, and high memory capacity, OR
  • An ARM processor of some sort
  • A few gigs of soldered-on eMMC for an operating system
  • A micro-SD card slot onboard in case the operating system requires it
  • 2 horizontal SODIMM slots
  • 3 fan headers (really optional)
  • Headers for front panel power button, status LEDs, etc.
  • USB-A (2)
  • VGA+HDMI
  • Audio/Mic
  • Ethernet and DC-In
  • RAID controller of some sort (optional to keep costs down; software RAID can cover this, see the sketch after this list)
  • 6 SATA ports
  • Custom SATA power adapter connectors
  • Edge connector on side of board to an ITX-form-factor adapter with more SATA and potentially more features.
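
On the optional RAID controller: plain software RAID (mdadm, ZFS, and friends) handles this on the OS side, which is why the dedicated chip is skippable for a home NAS. Here's a quick sketch of what six bays buy you under common layouts; the 4TB drive size is purely an assumption for illustration:

```python
# Usable capacity for a 6-bay software-RAID NAS, no hardware RAID card needed.
# The 4 TB drive size is an assumption purely for illustration.
drive_tb = 4
drives = 6

layouts = {
    "RAID 5 (1 drive of parity)":  (drives - 1) * drive_tb,
    "RAID 6 (2 drives of parity)": (drives - 2) * drive_tb,
    "RAID 10 (mirrored pairs)":    (drives // 2) * drive_tb,
}

raw_tb = drives * drive_tb
for name, usable in layouts.items():
    print(f"{name}: {usable} TB usable out of {raw_tb} TB raw")
```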
How to keep costs down (feasibility):
  • HDMI can be removed if necessary
  • MicroSD can be removed if necessary
  • You could arguably remove all ports except for 1 USB and instead add WiFi functionality
  • Remove ITX breakout capability
  • Decrease number of SATA ports
  • Decrease number of RAM slots
  • Cost-down the processor (ARM?)
Consumer education (feasibility):
It works basically the same as an STX board, just without MXM. Arguably, the most difficult thing to do would be plugging in drives.

Reliability (feasibility):
The least reliable thing here would most likely be the soldered eMMC, but if a microSD card slot is available, then hopefully nothing bad will happen...

Is there a market?
I would say yes: most home enthusiast users want a small-form-factor (non-rack-mount) NAS, and this would be more cost-effective than buying a Synology setup. Also, as Gamers Nexus detailed in their video, brands such as Synology use proprietary parts, and it was only through a stroke of luck (the internal power supply was dead only on the 24-pin side) that Steve was able to power the system off a standard PC power supply.
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Today's thought: Minor rant on the new MacBook Pro
Source:
https://www.apple.com/newsroom/2018...faster-performance-and-new-features-for-pros/

Disclaimers:
If you have a MacBook and like it, I have nothing against you; it's just that I kinda dislike some of Apple's products and business practices, along with the naive Mac users who are thoroughly uneducated about technology because Macs are so easy to use.

Paragraph-by-paragraph analysis:
Cupertino, California — Apple today updated MacBook Pro with faster performance and new pro features, making it the most advanced Mac notebook ever. The new MacBook Pro models with Touch Bar feature 8th-generation Intel Core processors, with 6-core on the 15-inch model for up to 70 percent faster performance and quad-core on the 13-inch model for up to two times faster performance — ideal for manipulating large data sets, performing complex simulations, creating multi-track audio projects or doing advanced image processing or film editing.
Already the most popular notebook for developers around the world, the new MacBook Pro can compile code faster and run multiple virtual machines and test environments easier than before. Additional updates include support for up to 32GB of memory, a True Tone display and an improved third-generation keyboard for quieter typing. And with its powerful Radeon Pro graphics, large Force Touch trackpad, revolutionary Touch Bar and Touch ID, dynamic stereo speakers, quiet Apple-designed cooling system and Thunderbolt 3 for data transfer, charging and connecting up to two 5K displays or four external GPUs, it’s the ultimate pro notebook.

Faster performance and new pro features: I see no "pro features" added here... Seriously. The Radeon Pro graphics is just a rebranded version of the RX series to make average users feel more "pro" than they really are. The Force Touch trackpad is kinda cool, but I really see no point in it considering that there are free, ad-free apps for Windows and Android that turn your phone into an excellent trackpad; I use my Xperia X as one all the time. Force Touch has limited applications for me. I really don't see the point of having to press down hard on the trackpad when there's such a thing as right-click. Also, I personally don't find large trackpads enjoyable to use (this is opinion); I prefer a TrackPoint nub.

The Touch Bar is not revolutionary. It's the same one as last year, and it really doesn't help everyday use in a worthwhile way in many apps. The ASUS touchscreen trackpad is revolutionary, on the other hand; professionals can actually multi-task with it, unlike the little screen at the top of the MacBook keyboard. Touch ID has been done with Windows Hello for years, and was on the previous MacBook Pro. The "dynamic" stereo speakers are actually kinda good, but Windows OEMs are catching up, with Xiaomi's Mi Book Pro having very nice speakers as an example. The "quiet" cooling system is most likely last year's heatsink, barely keeping the chips within Intel spec; as usual, the MacBook will most likely thermal throttle. Thunderbolt 3 is also nothing new, and many Windows "pro" PCs have it. Not four ports, sure, but one or two is usually enough. And Windows PCs have actual ports, such as USB-A and HDMI.

Now, the "major" features. The "massive" performance gains are mostly to Intel's credit, and the CPUs will most likely thermal throttle if we are following Apple's current cooling track record. The 32GB of memory is nothing new to Windows users, as is the "multiple VM support." The True Tone display cannot be calibrated by the user without a tool, unlike machines like the ThinkPad P51, which has an excellent screen for professionals. The "third generation" butterfly keyboard is most likely unimproved, and will still be inferior to any other laptop's keyboard (except maybe the Dell Maglev keyboard).

“The latest generation MacBook Pro is the fastest and most powerful notebook we’ve ever made,” said Philip Schiller, Apple’s senior vice president of Worldwide Marketing. “Now with 8th-generation 6-core processors, up to 32GB of system memory, up to 4TB of super fast SSD storage, new True Tone technology in its Retina display and Touch Bar, the Apple T2 chip for enhanced security and a third-generation quieter keyboard packed into its thin and light aluminum design with all-day battery life, it’s the best notebook for pro users.”

He's on the marketing team. Of course he will fluff up the features of the MacBook. More of the same bull pulled above.

The new MacBook Pro is now faster and more powerful, with 8th-generation 6-core Intel Core processors on the 15-inch MacBook Pro for up to 70 percent faster performance and 8th-generation quad-core Intel Core processors on the 13-inch model for performance that’s up to twice as fast.1 With the option to add up to 32GB of memory on the 15-inch MacBook Pro, users can run more apps simultaneously or load larger files into memory. And with up to a 2TB SSD on the 13-inch model and up to a 4TB SSD on the 15-inch, MacBook Pro gives customers the flexibility to work with large asset libraries and projects wherever they go.

Again, the same stuff as above.

With 500 nits of brightness and support for the P3 wide color gamut, the Retina display on MacBook Pro is the best Mac notebook display ever. Now with True Tone technology, the display and Touch Bar deliver a more natural viewing experience for design and editing workflows, as well as everyday tasks like browsing the web and writing email.

Apple is being so repetitive. I bet something like this will be in their marketing language next year too. I've given up on the rest of it, which is essentially a school promotion, macOS Mojave, and "premium" leather sleeves.

If I feel like it tomorrow, I'll compare it to either the ThinkPad P51 or the Xiaomi Mi Book Pro. Why not.
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Thank you thank you!

Hope you enjoy this disaster!
Today's random thought:

Thoughts on motherboard VRMs and their cooling:
I really don't get what some motherboard makers are thinking with their VRMs. Intel boards (specifically H370/B360) are kinda OK if you have common sense and run a non-K processor on them, but some Z370 boards (especially from MSi) are somewhat underwhelming. The Z370 Gaming Plus seems to have a 3+3-phase VRM. Why? Why, on a relatively high-end Z370 board, would you put a 3+3 VRM of all the things you could put there? I understand the need to cut costs, but at least trim down the rest of the board enough to actually cool the damned VRMs! The metal slabs they are using to cool the VRMs appear to be very small and underwhelming. If all else fails, I would just slap a 20mm fan on the heatsinks and call it a day. A quick back-of-envelope below shows how much those few phases have to cope with.
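
Rough numbers only; the power, voltage, and efficiency figures below are assumptions for illustration, not measurements of any particular board:

```python
# Rough per-phase load on a thin "3+3" Vcore VRM under an overclock.
# All numbers below are illustrative assumptions, not measurements.
package_power_w = 150     # assumed draw of an overclocked 6-core under load
vcore = 1.30              # assumed overclocked core voltage
vcore_phases = 3
vrm_efficiency = 0.90     # assumed conversion efficiency

cpu_current_a = package_power_w / vcore
per_phase_a = cpu_current_a / vcore_phases
vrm_heat_w = package_power_w * (1 / vrm_efficiency - 1)

print(f"CPU current: ~{cpu_current_a:.0f} A total, ~{per_phase_a:.0f} A per phase")
print(f"Heat dumped into those little heatsinks: ~{vrm_heat_w:.0f} W")
```

Roughly 40 A per phase and 15-plus watts of heat into tiny metal slabs is not a comfortable place for cheap power stages to live.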

AMD boards are even weirder. Check out Actually Hardcore Overclocking's videos on them:

While I do acknowledge that Buildzoid is an extreme overclocker and will push these boards to their limit, the heat these components can generate, combined with an overclock (made easy by AMD's all-processors-unlocked approach), may result in board failures down the road. In my experience, most PC builders who follow the likes of Linus Tech Tips and JayzTwoCents don't really pay attention to VRMs, and that may be detrimental to the user experience.

So please, MSi, Gigabyte, ASRock (they actually are the least severe "offenders" in this case), and ASUS, give us either decent VRM designs, decent VRM cooling, or both.

Hope this thought was enjoyable to read, and wasn't inflammatory. As always, thanks for tuning in!
 
  • Like
Reactions: Soul_Est

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
Today's thought: (slightly off-topic) Thoughts on my favorite brands :D and brand loyalty

I'm kinda busy packing for my trip, so here's a short bit for your viewing pleasure :)

Favorite power supply brand: SeaSonic: Efficiency, reliability, excellence.
Favorite fan brand: Noctua, for the same reasons as above.
Favorite case brand: Fractal Design. I like their design, their pricing, and their low blunder count; they don't ship failures like the H500P.
Favorite motherboard brand: It's a tie between Gigabyte and ASRock. ASRock goes crazy all the time, but they do have kinda misleading VRMs and a kinda annoying BIOS. I like Gigabyte mostly because of the golden days of Ultra Durable, but even now they have a few (only a few) nice products.

I would list more, but it would get annoying, so here are my thoughts on brand loyalty.
I don't really believe in brand loyalty. I personally just buy what's relatively cheap and known to have good quality. That's the distinction between normal people and brand loyalists. Brand loyalists (Apple fanboys, I'm looking at you) blindly follow a brand for one reason or another and don't look at what it has done wrong. I used to like Intel back when they had clearly superior performance, but now, thanks to how they treat their customers, I've moved on to liking AMD. There are things I don't love about AMD (such as the Vega situation and some other stuff with VRMs), but generally, I enjoy their products.

So yeah, that's it :p The next one of these will be coming from China. :)
 

Choidebu

"Banned"
Aug 16, 2017
1,196
1,204
I'm still wrapping my head around why on earth PC builders nowadays care at all about VRMs. Back in the day it was just about whether the board was stable to OC or not. Like, who cares if the mobo employs pixel pixies to regulate voltage, or a phase-change transistor?

What's next? Trace length? Board layer count?

This is pseudoscience on top of pseudoscience.
 
  • Like
Reactions: el01

Solo

King of Cable Management
Nov 18, 2017
855
1,422
I'm still wrapping my head around why on earth PC builders nowadays care at all about VRMs. Back in the day it was just about whether the board was stable to OC or not. Like, who cares if the mobo employs pixel pixies to regulate voltage, or a phase-change transistor?

What's next? Trace length? Board layer count?

This is pseudoscience on top of pseudoscience.

Lmao yeah my build priorities are as follows:

1. Looks good
2. Doesn't have RGB lighting anywhere
3. Works
 

el01

King of Cable Management
Original poster
Jun 4, 2018
770
588
I'm still wrapping my head around why on earth PC builders nowadays care at all about VRMs. Back in the day it was just about whether the board was stable to OC or not. Like, who cares if the mobo employs pixel pixies to regulate voltage, or a phase-change transistor?

What's next? Trace length? Board layer count?

This is pseudoscience on top of pseudoscience.