
Concept SENTRY 3.0: Development and Suggestions

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
I was wondering if it could somehow be integrated into the Sentry's frame, as opposed to an optional piece.
That's the plan, but that will require stamping, and we are planning to do stamped parts.

Then I see how AAA games are developed and how much different hardware is out there, and I shake my head and come to the conclusion that (true) optimization will never be part of the equation during a game's development. Unfortunately.
The Nanite approach is an engine-level, pipeline-changing one: if it's enforced in the pipeline, games will be able to be optimised automatically with it, to a certain degree of course. I don't know where we are with data bandwidth in the DirectStorage implementation and with pulling data for Nanite. The initial problem may be deciding what quality of assets should go into the game: will SSD performance be the bottleneck/limiting factor, or will a better/higher-end GPU mean better performance when it comes to data bandwidth with DirectStorage? This will determine whether the must-have optimisations work well for every card (if SSD bandwidth is the driving factor) or only for the high-end cards (if the card side is the bottleneck). I haven't investigated this deeply enough yet, but I also think it may be too early for that.
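
To put rough numbers on that bandwidth question, here's a back-of-the-envelope sketch; the drive speeds and frame rate are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sketch of the bandwidth question above.
# All numbers are illustrative assumptions, not measurements.

def asset_mb_per_frame(drive_gb_per_s: float, fps: float) -> float:
    """How much new asset data the drive can deliver per rendered frame."""
    return drive_gb_per_s * 1024.0 / fps

# Compare a fast NVMe drive against a SATA SSD at 60 FPS:
for name, speed in [("PCIe 4.0 NVMe (~7 GB/s)", 7.0),
                    ("SATA SSD (~0.5 GB/s)", 0.5)]:
    print(f"{name}: ~{asset_mb_per_frame(speed, 60):.0f} MB of assets per frame")

# Whether this per-frame budget or the GPU's decompression rate is the
# real limit is exactly the open question above.
```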
 

LeChuck81

SFF Lingo Aficionado
May 6, 2019
129
36
That's the plan, but that will require stamping, and we are planning to do stamped parts.

Every bit of information you release on 3.0 makes me more and more willing to upgrade from 2.0 to it! 😁

The Nanite approach is an engine-level, pipeline-changing one: if it's enforced in the pipeline, games will be able to be optimised automatically with it, to a certain degree of course. I don't know where we are with data bandwidth in the DirectStorage implementation and with pulling data for Nanite. The initial problem may be deciding what quality of assets should go into the game: will SSD performance be the bottleneck/limiting factor, or will a better/higher-end GPU mean better performance when it comes to data bandwidth with DirectStorage? This will determine whether the must-have optimisations work well for every card (if SSD bandwidth is the driving factor) or only for the high-end cards (if the card side is the bottleneck). I haven't investigated this deeply enough yet, but I also think it may be too early for that.

As I see it, DirectStorage (and all its equivalents), once it becomes a prerequisite, will change how levels are designed (especially in open worlds), without the need for corridors, tunnels, elevators or whatnot to allow asset swaps in GPU memory on the fly, and without loading screens. It will also allow bigger and more diverse assets in any type of game. Suppose that with the traditional SSD > CPU (decompression) > RAM > CPU (copying) > GPU approach you can do a total GPU memory swap every, let's say, 30 seconds, meaning that all the assets needed in all directions for (at least) the next 30 seconds have to reside in GPU memory. With DirectStorage (eventually SSD > GPU, with both decompression and the copy to VGA memory handled on the GPU side) you could do a total asset swap in, let's say, 3 seconds, meaning you can allocate assets of the same quality for a smaller area (the next 3 seconds in all directions), allowing more diversity in the scene, OR fewer, repeated, but better assets for the same next 3 seconds. In other words, if we take Insomniac Games' Spider-Man as an example, with DirectStorage you can either have
A: (diversity) the same quality as today's assets, but different for each and every skyscraper, because you can keep swapping them in GPU memory while you swing across the city seamlessly,
or
B: (quality) the same, repeated amount of skyscrapers as today, but with better, more detailed assets, again because you only need to keep a much smaller portion of the city in GPU memory.

Of course, the more memory the GPU has, the better it will still be. What DirectStorage will make possible, though, is that today's highest-quality assets will require less GPU RAM, because, unlike today, you'll need to keep less of them readily available.
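
To make those swap times concrete, here's a rough sketch of the same argument; the 12 GB card and the throughput figures are just assumed for illustration:

```python
# Illustrative sketch of the "total memory swap" argument above, for a
# hypothetical 12 GB card. Throughput figures are assumptions only.

VRAM_GB = 12.0

# Traditional path: SSD -> CPU (decompress) -> RAM -> CPU (copy) -> GPU.
# Assume ~0.4 GB/s of assets effectively reaching VRAM.
# DirectStorage path: SSD -> GPU, with decompression on the GPU itself.
# Assume ~4 GB/s effective, i.e. roughly 10x.
paths = {"traditional": 0.4, "DirectStorage": 4.0}

for name, gb_per_s in paths.items():
    print(f"{name}: full VRAM swap in ~{VRAM_GB / gb_per_s:.0f} s")

# 30 s vs 3 s: with the faster path, only the assets reachable within the
# next ~3 s of traversal must stay resident, so the same VRAM holds either
# more diverse assets (option A) or higher-quality ones (option B).
```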

Nanite can only add to all that, but, and I'm being pessimistic here, I don't see developer studios using it to lower GPU requirements by capping quality at what can be done with today's top-end graphics cards; they will simply increase the quality of the next games, driven by the newer, more powerful graphics cards. And so on, and so on.
 

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
Nanite can only add to all that, but, and I'm being pessimistic here, I don't see developer studios using it to lower GPU requirements by capping quality at what can be done with today's top-end graphics cards; they will simply increase the quality of the next games, driven by the newer, more powerful graphics cards. And so on, and so on.

I think you're missing the point here. If you have a technology that sets the rendering resolution of assets based on distance from the camera, on whether an asset is in the middle of the screen as the main focus of the player, and so on, does it smoothly between the near plane and the far plane, and it "just works" while the developer just pushes top-quality assets into the game, then we'll start seeing game engines smoothly optimising between quality and frame rate depending on the situation, static versus combat.

If the engine sets the asset rendering resolution so that it assumes a fixed max number of polygons on screen, and more objects on screen means overall lower resolution (lowering more towards the far plane where possible), then the optimisation ends up being something that works out of the box: you just set how many polygons you want rendered on screen at most. If that were all there is to the rendering pipeline, it would be like calculating how many polygons you can render per frame on the current GPU to hit 60 FPS, setting that, and ending up with fairly stable FPS. Fairly stable, because transitions between quality levels should be eased out over time, to prevent the background flickering between high and low res, for example.
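
As a toy model of what I mean (my own sketch, not Nanite's actual algorithm):

```python
# Minimal toy model (my own sketch, not Nanite's actual algorithm) of a
# fixed per-frame polygon budget with eased quality transitions.

FRAME_POLY_BUDGET = 20_000_000  # tuned once per GPU to hold ~60 FPS
EASE = 0.25                     # smoothing factor to avoid visible flicker

detail_scale = 1.0  # global detail multiplier, eased between frames

def update_detail_scale(polys_requested: int) -> float:
    """Shrink per-object detail when the scene requests too many polygons."""
    global detail_scale
    target = min(1.0, FRAME_POLY_BUDGET / max(polys_requested, 1))
    # Ease toward the target instead of snapping, so the background
    # doesn't flicker between high and low res from frame to frame.
    detail_scale += (target - detail_scale) * EASE
    return detail_scale

# Quiet scenery keeps the scale near 1.0; a crowded combat frame pulls it
# down gradually, trading background resolution for stable frame times.
for requested in [15_000_000, 60_000_000, 60_000_000, 18_000_000]:
    print(f"requested {requested:>10,} -> detail scale "
          f"{update_detail_scale(requested):.2f}")
```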

This means that when you're walking around the scenery, everything will look beautifully detailed as you sniff flowers, while when you're in dynamic action, combat, moving around and jumping, you'll get lower resolution in the background. This will happen for the sake of consoles; it must happen at some point.

What I don't know is whether DirectStorage already has some optimisation algorithm that means you don't always need the whole, full-resolution asset in memory. If you do need all of it, then get rekt, lower-end cards, I think. If it does have one, though, something similar to the spatial optimisations used for tracing geometry in game engines, like quadtrees/octrees, but oriented at the way Nanite pulls data between high- and low-res rendering scenarios, then we'll get proper results on lower-end cards as well.

We may end up with lower-end/less powerful cards being able to render better-looking games with the Nanite approach, because rendering with the LOD-switching approach is wasteful: you're rendering far-plane objects with more polygons than needed just so the player doesn't keep seeing them pop between LODs and look out of place next to other geometry. We also can't have an infinite number of LODs, because every extra LOD is extra authoring work, so in game development each object gets just a few of them, something on the order of 3, 5 or 7. And with that you'll still see the pop, because if you have to change a geometry from an octagon to, say, a hexagon or pentagon to reduce the vertex count, that's a visible difference, and it's exactly the kind of difference we're fighting against in games' quality. People push details to max to move out the distance at which LODs drop to lower quality so the pop isn't visible, but that means putting every type of asset in the same bucket just to hide the pop on some of them. So less wasted power on the far plane == more performance where it matters.
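
A toy comparison of the two approaches (my own illustration, not engine code):

```python
# Toy comparison (my own illustration, not engine code) of a handful of
# authored LODs versus detail that degrades continuously with distance.

LODS = [(0, 8), (20, 6), (50, 5)]  # (distance threshold in m, "sides")

def discrete_sides(distance: float) -> int:
    """Classic LOD switching: big jumps at a few fixed thresholds."""
    sides = LODS[0][1]
    for threshold, n in LODS:
        if distance >= threshold:
            sides = n
    return sides

def continuous_sides(distance: float) -> int:
    """Nanite-style idea: detail falls off smoothly, no visible pop."""
    return max(3, round(8 * 30.0 / (30.0 + distance)))

for d in [5, 19, 21, 49, 51, 100]:
    print(f"{d:>3} m: discrete {discrete_sides(d)} sides, "
          f"continuous {continuous_sides(d)} sides")

# The discrete column pops from 8 to 6 sides between 19 m and 21 m: the
# octagon-to-hexagon jump described above, which players notice.
```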
 
  • Like
Reactions: LeChuck81

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
It feels to me like Epic was actually holding the UE5 release for DirectStorage:


The change in the industry is happening here. The stuff we're interested in here, which I've talked about a few times, is Nanite, Lumen and Megascans: this is what will decide the performance requirements of future games.
 

nadeboy

What's an ITX?
New User
Feb 26, 2019
1
0
Just found this thread, and I am very interested in a new version. Still rocking my v1.1 #175 from 2017. I even bought a Ghost S1, but it's still boxed and unused (I really should try to sell it), as I've been so happy with the Sentry.

But I am getting an itch to upgrade, and I love the design, so I'll certainly look to order a v3.
 

LeChuck81

SFF Lingo Aficionado
May 6, 2019
129
36

Hi SaperPL, have you read this article? Do you hope it can eventually be a breakthrough for the Sentry, and for the SFF world in general, especially considering how the GPU world keeps pushing VGA TDPs ever higher?
 

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
Hi SaperPL, have you read this article? Do you hope it can eventually be a breakthrough for the Sentry, and for the SFF world in general, especially considering how the GPU world keeps pushing VGA TDPs ever higher?

I checked the article link, but since it's locked behind a paywall, I read the summary I found on Science Daily instead. Going by that summary, I think it's not a revolution for GPUs, but more likely for some industrial circuit boards. I think it may be hard to apply to BGA components if the coating is electrically conductive, given how dense a BGA grid is.

In essence, the point of this concept (I'm extrapolating here from what I saw in the summary) is that a lot of electrical components come in enclosures that don't transfer heat to the top efficiently, so when they heat up a lot, you end up extracting the heat from the furthest point. Think of the "black plastic" style of enclosure on chips like the AVR ATmega (the Arduino platform, for example).

The awesome efficiency of conducting the heat from the bottom and out all around, instead of only vertically through the plastic-like enclosure, most likely comes from the fact that the required power draw grows exponentially with temperature: the higher the temperature, the more current you have to push to keep the voltage difference between the 0 and 1 states distinguishable. This is why we like our PC chips at low temps, but at the same time PC chips are locked to a specific section of that exponential power-draw curve. In laboratory conditions, though, if you pick a simple chip that can operate across a much wider range of the curve, you can show off that huge exponential factor.
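
As a toy model of that exponential curve (my own illustration, based on the common rule of thumb that leakage power roughly doubles every ~10 C, not anything from the article):

```python
# Toy model of the exponential temperature dependence described above,
# using the common rule of thumb that leakage power roughly doubles
# every ~10 C. My own illustration, not from the article.

def relative_leakage(temp_c: float, ref_c: float = 25.0,
                     doubling_c: float = 10.0) -> float:
    """Leakage power relative to what it is at the reference temperature."""
    return 2.0 ** ((temp_c - ref_c) / doubling_c)

for t in [25, 45, 65, 85, 105]:
    print(f"{t:>3} C: ~{relative_leakage(t):.0f}x the 25 C leakage power")

# A lab chip measured across this whole range shows a dramatic gain from
# better heat extraction, while PC chips already sit pinned to a narrow,
# tightly cooled section of the same curve.
```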

I think it could make sense for memory chips, and maybe for the power delivery on the GPU, but I don't think it would affect the GPU itself, since it has a BGA connection, the die is sliced thin and already directly exposed to the cooler, and the amount of heat such a conductive coating could move around would probably be negligible here.
 

RoSenpai

Minimal Tinkerer
New User
Jan 14, 2022
3
0
Hi again!

I was wondering if it would be possible to add, as an option or something, a front panel with a USB-C header, since managing a single cable is easier and the connector is sturdier than the USB 3.0 one.
I've found this one as an example, but the connector is Key-B:
That idea, but with a Key-A connector, would be perfect.
I don't even know if you have any way to get that kind of cable; I've been looking for it all day and found nothing.
Maybe Key-B would work for newer motherboards, but I don't know how to check, since the official documentation doesn't specify the key.
 

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
Hi again!

I was wondering if it would be possible to add, as an option or something, a front panel with a USB-C header, since managing a single cable is easier and the connector is sturdier than the USB 3.0 one.
I've found this one as an example, but the connector is Key-B:
That idea, but with a Key-A connector, would be perfect.
I don't even know if you have any way to get that kind of cable; I've been looking for it all day and found nothing.
Maybe Key-B would work for newer motherboards, but I don't know how to check, since the official documentation doesn't specify the key.
We want to move to that internal Type-C/Type-E connector because we want to abandon the USB 3.0 20-pin plug, which is terrible. I've checked with the supplier who made our USB headers, and they could do these, but at the point we asked about it, ribbon-style cables for them weren't really an option. We'll push for it once we have other parts of the project figured out. It's also important to see how popular this new connector becomes on ITX boards. Ideally we would see two such headers in place of the 20-pin, which should just be removed and handled by an adapter if an old case requires it.
 

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
Here's another reason why an ultra-high-end card is not always the ideal choice for an SFF system:


And another reason why we're reluctant to move forward with anything until we see what the RTX 40xx actually looks like: we may end up in a situation where we have to cut off the whole high-end stack of GPUs, if it turns out only SFX-L PSUs could safely handle them, and some of the cards may end up 3 slots wide by reference design.
 

Morzone

What's an ITX?
New User
Sep 2, 2021
1
2
Hi,

First, I wonder if we could get mounting points designed into this internal frame piece that sits atop the Sentry.


Secondly: improved support for more storage. I understand that there are some considerations when it comes to 2.5-slot GPUs. Is it possible to have a design for the GPU chamber (when using a non-blower card) that accommodates either a (2-slot GPU + 2x 2.5" HDD) or a (2.5-slot GPU) configuration?

I don't really know any other way to improve this case in a meaningful fashion. I'm personally happy with the Sentry 1.1, and it seems like compatibility was increased with the 2.0 version.
 
  • Like
Reactions: sos and SaperPL

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
First, I wonder if we could get mounting points designed into this internal frame piece that sits atop the Sentry.

We are planning to make the support for this, but in a slightly different way :)

Secondly: improved support for more storage. I understand that there are some considerations when it comes to 2.5-slot GPUs. Is it possible to have a design for the GPU chamber (when using a non-blower card) that accommodates either a (2-slot GPU + 2x 2.5" HDD) or a (2.5-slot GPU) configuration?

This is something a bit tricky. We can go two ways:
  1. The supported GPU gets thicker than before, up to 2.5 slots. The consequence is, once again, a custom PCB riser, and once again the size of the drives is significantly limited.
  2. The motherboard standoffs are shortened by 1 mm and the GPU goes down by 2 mm. The consequence is that the potential oversize thickness of the card shrinks by 2 mm, and the distance between the fans and the perforation becomes ~8 mm instead of ~10 mm. CPU cooler support grows by 2 mm (1 mm from the lower motherboard plus the 1 mm of thickness we are planning to add to the case), and we can support either 4 x 7 mm SSDs or 2 x 15 mm HDDs. Since the motherboard standoffs are 1 mm shorter, they are 7 mm in total and stand out only 6 mm, which is 0.35 mm short of the ATX spec; a board and cooler retention system that adheres to the ATX spec will still be okay, but it's not technically fully ATX compliant anymore. This approach also allows using thinner thermal pads to connect the bottom SSD to the case (see the quick math sketch below).
So it's a bit tricky. Also, keep in mind that at this point we want this to be a case manufactured in bigger quantities with better tooling, so it's important to think these decisions through properly.
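
Here's the quick clearance math for option 2 in one place (the usable bay height is my own assumption for illustration):

```python
# Quick sanity check of the clearance arithmetic in option 2 above.
# The 30 mm usable bay height is my own assumption for illustration.

ATX_STANDOFF_MM = 6.35     # standard ATX standoff height
exposed_standoff_mm = 6.0  # shortened standoff standing out 6 mm
print(f"Shortfall vs ATX spec: {ATX_STANDOFF_MM - exposed_standoff_mm:.2f} mm")

# Cooler clearance: 1 mm from the lower motherboard + 1 mm of added
# case thickness.
print(f"CPU cooler clearance gain: +{1 + 1} mm")

bay_mm = 30                # assumed usable drive bay height
print(f"Bay fits {bay_mm // 7} x 7 mm SSDs or {bay_mm // 15} x 15 mm HDDs")
```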

And we're also still waiting to see what happens with next-gen GPU power consumption. If they come out at 420 W TDP, then we may just ignore them (the high-end ones), as they make no sense for a case like this, but if they're around 300 W TDP, then it's something we need to reconsider.
 
Last edited:

LeChuck81

SFF Lingo Aficionado
May 6, 2019
129
36
So, now that our worst nightmares are real and AD104 is a whopping 285 W GPU, how's this gonna affect Sentry 3.0's design?
Will you consider expanding the case for a 3-slot design? Or, considering the continuous increase in power required by newer-gen GPUs, will the Sentry target entry-level GPUs only?
 
  • Like
Reactions: sos and Octagoncow

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
So, now that our worst nightmares are real and AD104 is a whopping 285 W GPU, how's this gonna affect Sentry 3.0's design?
Will you consider expanding the case for a 3-slot design? Or, considering the continuous increase in power required by newer-gen GPUs, will the Sentry target entry-level GPUs only?

It seems to me like the COVID times, when they could sell GPUs at astronomical prices to a select lucky few, showed NVIDIA that they can sell cards at such high prices, and because of that they decided to double down on performance by selling those wealthy people two GPUs' worth of performance and power draw instead of one.

The GTX 980/GTX 1080/RTX 2070 are ~180 W cards. The RTX 4080 12GB, which would sit in this power segment if they had followed pre-RTX 3000 segmentation at launch, is roughly double that at 366 W max TDP, and you know we can't tell people to just count on the base TDP; those cards will boost up to the max anyway. Pricing is almost double the RTX 2070's MSRP, so in essence there's no RTX 4000 card segment YET that makes sense for console-form-factor systems like this.

Cube/sandwich layouts or thicker desktop cases may have enough airflow, but there's literally no sense in squeezing a 360 W GPU into our case. Anything above 250 W will have to be ignored in the design of Sentry 3.0, and anything that cannot run on a 600 W SFX PSU will not be supported or taken into consideration. We want to make a feasible implementation of the original Steam Machine prototype's mechanical concept, not a money sink for people to show off "look how much powah I squeezed in here" and then discard the case after a few months because, with all the noise, it makes no sense for them.
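
For reference, the rough budget math behind that 600 W SFX rule (the CPU and platform figures are assumed for illustration):

```python
# Rough power-budget sketch behind the 600 W SFX rule above. The CPU
# and platform figures are illustrative assumptions.

PSU_W = 600
SUSTAINED_TARGET = 0.8   # keep sustained load near 80% of the rating

cpu_w = 150              # e.g. a typical gaming CPU under load
platform_w = 50          # motherboard, fans, drives, conversion losses

gpu_budget_w = PSU_W * SUSTAINED_TARGET - cpu_w - platform_w
print(f"Sustained GPU budget: ~{gpu_budget_w:.0f} W")   # -> ~280 W

# Modern cards also spike well above rated TDP for milliseconds, so a
# ~250 W ceiling leaves margin; a 360 W card leaves none at all.
for tdp in [250, 285, 360]:
    verdict = "within" if tdp <= gpu_budget_w else "over"
    print(f"{tdp:>3} W TDP -> {verdict} budget")
```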
 

Jarvis Babbit

Cable Smoosher
Feb 9, 2019
9
12
I do not understand where you see the problem. The RTX 4070 (yes, I know what I'm writing) should, after undervolting, perform without any problem in the Sentry. The problem will rather be finding a card with sufficiently compact cooling. I haven't looked closely, but I think only the Ventus X3 OC with low-profile power connectors could fit.
 

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
478
899
The RTX 4070 (yes, I know what I'm writing) should, after undervolting, perform without any problem in the Sentry.
Two problems with this statement. One is that support should not be about "go and tinker with the card to ensure it's not pulling too much power". Components that are stated to be compatible and performant shouldn't work properly only in certain scenarios. Yes, we know that so far we've had a pretty niche audience, with a limited number of pricey cases sold, but the point is to make a case that doesn't force people to undervolt and limit the performance of their components, because that's beside the point. Also, stability when undervolting can be hit or miss (that's problem number two).
 

Apache

Case Bender
New User
Mar 3, 2018
2
4
Two problems with this statement. One is that support should not be about "go and tinker with the card to ensure it's not pulling too much power". Components that are stated to be compatible and performant shouldn't work properly only in certain scenarios. Yes, we know that so far we've had a pretty niche audience, with a limited number of pricey cases sold, but the point is to make a case that doesn't force people to undervolt and limit the performance of their components, because that's beside the point. Also, stability when undervolting can be hit or miss (that's problem number two).
While I largely tend to agree with this statement, I would also like to point out that you are already catering to the SFF community, which is in and of itself a niche community full of people that tinker with their builds far more than any other PC building group that I know of.

We aren't ever satisfied until we have cables that are the perfect length for basically every single component, and try to best utilize every available cubic millimeter of volume.

I personally am one of those people that has a Sentry 2.0 with a 12600K (looking to upgrade to a 13700K when Raptor Lake launches) and a 3090 XC3 in my build. Is it overkill? Sure. Is it noisy? No louder than a PS4 Pro, that's for sure. But I wanted a basically no compromises, air cooled build that I can throw in a backpack and take anywhere I want, and I think this case has managed to do that for me.

That's not to say that I disagree with your opinion about designing a case that anyone can just build in vs someone needing a vast amount of experience in SFF builds, but I think the hardcore SFF guys that aren't afraid to tweak system settings and have ducts and such 3D printed are your main audience.

Either way, super excited to see where this goes. Would love to get another Sentry to do another build in. Keep up the good work!
 
  • Like
Reactions: DrHudacris