Concept SENTRY 3.0: Development and Suggestions

Mosskovskaia

Trash Compacter
Mar 15, 2019
53
22
It may seem silly, but Sentry's aesthetics (among other things) play a considerable part in my willingness to spend the amount of money a case like Sentry would set me back, as opposed to some cheaper run-of-the-mill alternative.
There is a Chinese rip-off from Hzmod on Taobao, the xq69. I wanted to test one, but shipping to Germany was about twice the price of the case, so that was a big no.
 

Fuzzylogik

Cable Smoosher
Jan 5, 2020
8
3
Lots of great replies here from eager fans! Personally, I am a great fan of the design and construction of the Sentry 2. It's small, robust, and versatile enough internally for multiple configurations. It's perfect for travel, as you can put some pretty decent components inside and know they will be safe inside that steel case. My suggested modifications for v3 would be:
1) Change the position of the power button, as it clashes with power cables. I know the case will suffer aesthetically if you move it from the center of the front panel, but even a 25-35 mm shift upwards, in line with the center rail, would be enough to alleviate the cable conflict.
2) The rubber feet are great, but they dislodge pretty easily, especially if you travel with the Sentry. Perhaps make the rubber feet longer (greater surface area) and glue them to the stand? P.S. I do love the aesthetics of the stand.
3) Would it be at all possible to have holes at the top of the case similar to the ones the stand uses at the bottom? That would allow flipping the case, and perhaps also let more industrious members design a carrying handle that slots into these extra holes. Or perhaps this could lead to a stand design that doubles as a carrying handle? I'm a big fan of multi-purpose, creative design, which is exactly why I like the Sentry! I'll keep an eye out for future updates!
 
  • Like
Reactions: sos

Jarvis Babbit

Cable Smoosher
Feb 9, 2019
9
12
Hi, I must finally add a few words of my own, as I plan to buy version 3.0 as soon as possible. This will be my first SFF case, even though I've been interested in the topic for several years. I apologize in advance for any mistakes; I'm using a translator.


I will simply write down my ideas and opinions in points (in no particular order):

1. I beg you not to change the overall dimensions of the case, at least not significantly. In my opinion, its proportions are perfect. I would like the volume to stay below 7 liters.
2. In my opinion, you should mainly focus on redesigning the middle reinforcement and creating the option to install 2.5 slot GPUs. I think it is definitely worth sacrificing mounting space for SSDs, considering how cheap and capacious NVMe drives have become.
3. A Gen4 riser is a must, even if it makes the price higher.
4. What do you guys think about moving the center brace even lower, to make more room in the GPU area? I know there is a power connector between the board and the card, but maybe some clever cutout in the brace would allow it.
5. Since you have dropped support for SFX-L PSUs, why not move the PSU down to allow more room for cable management? The space at the bottom is wasted; the USB cable doesn't need that much. This would require a flatter power connector, however.
6. Add mounting space for 40 mm fans at the top of the case. This does not seem complicated but would allow for more possibilities.
7. The USB ports on the front panel should definitely stay.
8. I don't know what changes you had in mind for the vertical stand, but I think it shouldn't drastically change the look. Right now it looks elegant and unique.
9. Flow-through cooler support: I know a lot of people are asking for this, but I agree with your decision not to add ventilation to the back of the GPU; the case wouldn't look as good then. Maybe a better idea would be to add a cutout for the back of the motherboard, like the one from the initial design of version 2.0, and add two cover plugs to the case: one solid, one perforated.
10. Don't go overboard with making the case lighter. It's great that it's so solid. People who ask for it to be thinner and lighter don't realize that it is mostly the components that account for the weight.

I think that's it for now. If I remember something, I will add it 🙂
 

Jarvis Babbit

Cable Smoosher
Feb 9, 2019
9
12
All in all, I've been browsing around, and it turns out that potential support for 2.5 slot GPUs wouldn't change much. The only 2.5 slot RTX 3080/3090 GPU is the Zotac Trinity, but it's too long. On the AMD side it's not much better. It looks like the industry is moving heavily towards 2.7/2.75/3 slot GPUs for the high end.

So, to realistically increase compatibility with current and future GPUs, Sentry 3.0 would have to support 2.7 slot cards. And to achieve this, not only would SSD support have to be dropped, but the GPU bracket mount would also have to be moved. I'm curious how far it could be moved without it sticking out of the side panel...
 

Jarvis Babbit

Cable Smoosher
Feb 9, 2019
9
12
Okay, I'm a little dumb. I didn't notice that the side panel is bent at the back. To move the GPU bracket mount, you would have to make a notch in that bend, similar to the one for the power plug.
 

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
472
883
All in all, I've been browsing around, and it turns out that potential support for 2.5 slot GPUs wouldn't change much. The only 2.5 slot RTX 3080/3090 GPU is the Zotac Trinity, but it's too long. On the AMD side it's not much better. It looks like the industry is moving heavily towards 2.7/2.75/3 slot GPUs for the high end.

So, to realistically increase compatibility with current and future GPUs, Sentry 3.0 would have to support 2.7 slot cards. And to achieve this, not only would SSD support have to be dropped, but the GPU bracket mount would also have to be moved. I'm curious how far it could be moved without it sticking out of the side panel...

I don't think it makes sense to support 2.75/3.0 slot GPUs, to be honest. If you want that kind of card, buy a bigger case. If we were to support such big cards, the shape and volume of the case would change significantly. The issue with those huge cards is not that the industry moved towards them because it makes sense; it's that, with the price hikes/gouging, manufacturers streamlined their cooling towards the more expensive options.

Our idea to support 2.5 slot GPUs came about because it wasn't clear whether it's safe to put in a 2.5 slot card whose fan blades are almost touching the perforation. We wanted it safeguarded, even if only by two millimetres or so, and not too loud given how close the card sits to the perforation. But that's it, and it's not necessarily going to happen, as there are other factors we need to take into account.
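
To put rough numbers on that margin: the standard PCIe slot pitch is 20.32 mm, so a 2.5 slot card is about 50.8 mm thick. A back-of-envelope check might look like this (the compartment width is a made-up placeholder, not an actual Sentry dimension):

```python
# Back-of-envelope clearance check. SLOT_PITCH_MM is the standard PCIe
# slot spacing; COMPARTMENT_MM is a made-up placeholder, NOT a real
# Sentry measurement.

SLOT_PITCH_MM = 20.32

def card_thickness_mm(slots: float) -> float:
    """Nominal card thickness for a given slot-width rating."""
    return slots * SLOT_PITCH_MM

def fan_to_panel_gap_mm(compartment_mm: float, slots: float) -> float:
    """Air gap left between the card's fan side and the perforated panel."""
    return compartment_mm - card_thickness_mm(slots)

COMPARTMENT_MM = 53.0  # hypothetical GPU compartment width

for slots in (2.0, 2.5, 2.7):
    gap = fan_to_panel_gap_mm(COMPARTMENT_MM, slots)
    print(f"{slots} slot card -> {gap:+.1f} mm of clearance")
```

With these invented numbers a 2.5 slot card leaves about 2 mm of air, and a 2.7 slot card simply doesn't fit, which is the kind of trade-off being described.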

Most of you expect the case to support ultra-high-end GPUs because of the price point it had in the first two manufacturing runs, but that's not how it should work. It doesn't really make sense to pair a 350W TDP GPU with a case like this; it's a waste of money unless you really have some kind of workload that actually benefits from the increased SIMD core count or memory amount while not really stressing the clocks.
 

Jarvis Babbit

Cable Smoosher
Feb 9, 2019
9
12
I don't think it makes sense to support 2.75/3.0 slot GPUs, to be honest. If you want that kind of card, buy a bigger case. If we were to support such big cards, the shape and volume of the case would change significantly. The issue with those huge cards is not that the industry moved towards them because it makes sense; it's that, with the price hikes/gouging, manufacturers streamlined their cooling towards the more expensive options.

Our idea to support 2.5 slot GPUs came about because it wasn't clear whether it's safe to put in a 2.5 slot card whose fan blades are almost touching the perforation. We wanted it safeguarded, even if only by two millimetres or so, and not too loud given how close the card sits to the perforation. But that's it, and it's not necessarily going to happen, as there are other factors we need to take into account.

Most of you expect the case to support ultra-high-end GPUs because of the price point it had in the first two manufacturing runs, but that's not how it should work. It doesn't really make sense to pair a 350W TDP GPU with a case like this; it's a waste of money unless you really have some kind of workload that actually benefits from the increased SIMD core count or memory amount while not really stressing the clocks.
I think I upset you unnecessarily. I'm sorry. By writing this, I didn't mean to force the direction of Sentry 3.0's development on you. These were just some thoughts I wanted to share.

And as for the third paragraph, I have to disagree. Even if you think putting top GPUs in there doesn't make sense, it won't change the fact that people will continue to do it, because they like doing it. Not only putting in the most powerful hardware out there, but also optimizing it; fighting for every degree less and every MHz more is just fun for enthusiasts 🙂
 
  • Like
Reactions: Talyrius and sos

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
472
883
I think I upset you unnecessarily. I'm sorry. By writing this, I didn't mean to force the direction of Sentry 3.0's development on you. These were just some thoughts I wanted to share.
You didn't upset me. You were exactly right that it doesn't achieve that much if the top-tier cards keep growing in bulk. But we can't be expected to handle ultra-high-end cards in this type of case just because it has so far been expensive to manufacture and, as a result, pricey.

And as for the third paragraph, I have to disagree. Even if you think putting top GPUs in there doesn't make sense, it won't change the fact that people will continue to do it, because they like doing it. Not only putting in the most powerful hardware out there, but also optimizing it; fighting for every degree less and every MHz more is just fun for enthusiasts 🙂

They will continue, but that doesn't mean it makes sense, and it doesn't mean they will be the majority of users. You see, so far most of the cases were sold through crowdfunding, meaning people who could afford to spend money on an expensive case and also wait for it, which means putting that amount of money aside in someone else's hands. So those were people who could afford a "luxury item" (because they didn't need it right at that moment) that would be delivered to them at some point later. Once the case is an off-the-shelf product, more people will buy it not because they are enthusiasts, but because it solves a problem for them, or even just for the looks or the ergonomics on the desk.

BTW, let's see what happens if Nvidia Hopper scales up to 700W TDP at the ultra high end...
 
  • Like
Reactions: garou81

LeChuck81

SFF Lingo Aficionado
May 6, 2019
129
36
They will continue, but that doesn't mean it makes sense, and it doesn't mean they will be the majority of users. You see, so far most of the cases were sold through crowdfunding, meaning people who could afford to spend money on an expensive case and also wait for it, which means putting that amount of money aside in someone else's hands. So those were people who could afford a "luxury item" (because they didn't need it right at that moment) that would be delivered to them at some point later. Once the case is an off-the-shelf product, more people will buy it not because they are enthusiasts, but because it solves a problem for them, or even just for the looks or the ergonomics on the desk.

BTW, let's see what happens if Nvidia Hopper scales up to 700W TDP at the ultra high end...

Going forward, if rumors are to be believed, next-gen Nvidia GPUs will be even more power hungry than the 3000 series, to the point that a middle-tier graphics card (let's say a 4060/4060 Ti) will probably have up to a 250-300W TDP, given that the current 3060/3060 Ti already have TDPs of 170W/200W respectively. Those already sit outside the suggested 150W maximum, limiting (even more) which VGAs could be fitted in the Sentry 3.0 without risk of thermal throttling. Unfortunately, it seems the industry isn't caring at all about VGA TBP (Total/Typical Board Power); note also that this generation of GPUs, from both AMD and Nvidia, doesn't include any 75W (i.e., PCIe-only powered) GPU. So, I think, that's something to consider in designing the 3.0 version.
I don't really know. I love the fact that my Sentry 2.0 has a total volume of 7L, but I also foresee how limiting its design will become, to the point that, when my 3070 Ti becomes obsolete, if current trends continue, it could only be replaced by an entry-level Nvidia x50/AMD x400/x500 VGA. That would partly negate what I think is the original scope of the Sentry: offering a (relatively) powerful PC in as compact a case as possible, competing with consoles in terms of volume.

My two cents.
 
  • Like
Reactions: Talyrius

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
472
883
Well, that's exactly why we are not pushing forward hard with 3.0 and are waiting to see what happens: we don't know what Nvidia will do in the mid-range up through the tiers just below ultra-high-end (the xx70 Ti/xx80 cards).

It seems as if Nvidia either achieved a 50% performance increase in the architecture and decided to double down by making the chip twice as big, OR achieved a 100% architectural performance increase with a 50% bigger chip, so they could boast a 3x performance boost over the previous generation. It's as if they anticipate, or plan, pricing so steep that they need to back it up with the performance increase being a huge "once in a lifetime" generational gap. If that's the case, then we may expect further availability issues.
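
To spell out the arithmetic behind those two readings (naively assuming performance scales linearly with die size, which it doesn't in practice):

```python
# Naive generational-scaling arithmetic for the two scenarios above.
# Real GPUs do not scale linearly with die area, so this is purely
# illustrative of how a "3x" headline number could be composed.

scenarios = {
    "50% arch gain, 2x die": 1.5 * 2.0,
    "100% arch gain, 1.5x die": 2.0 * 1.5,
}
for name, factor in scenarios.items():
    print(f"{name}: {factor:.1f}x over previous generation")
```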

So the problem is: should we completely change what the product is because the industry is pushing towards more power-hungry cards?

EDIT: no, it's not going to be 700W. The 700W part is the dual-die/fused-die Hopper chip for the datacenter; the PCIe-packaged one is still 350W.
 
Last edited:

riba2233

Shrink Ray Wielder
SFF Time
Jan 2, 2019
1,645
2,123
www.sfftime.com
Hopper H100 is an 814 mm² single die, 700W TDP, but only for server boards, yeah; PCIe versions will be lower, of course. Some current AIB GPUs already go up to the 450W TDP range, but yeah, we'll have to see how the trends continue.
 

LeChuck81

SFF Lingo Aficionado
May 6, 2019
129
36
Well, that's exactly why we are not pushing forward hard with 3.0 and are waiting to see what happens: we don't know what Nvidia will do in the mid-range up through the tiers just below ultra-high-end (the xx70 Ti/xx80 cards).

It seems as if Nvidia either achieved a 50% performance increase in the architecture and decided to double down by making the chip twice as big, OR achieved a 100% architectural performance increase with a 50% bigger chip, so they could boast a 3x performance boost over the previous generation. It's as if they anticipate, or plan, pricing so steep that they need to back it up with the performance increase being a huge "once in a lifetime" generational gap. If that's the case, then we may expect further availability issues.

So the problem is: should we completely change what the product is because the industry is pushing towards more power-hungry cards?

EDIT: no, it's not going to be 700W. The 700W part is the dual-die/fused-die Hopper chip for the datacenter; the PCIe-packaged one is still 350W.

Yep, given current rumors, I can totally see why you would want to play the "wait & see" game, and I agree with it.

Once things are settled and AIBs have disclosed their Nvidia 4000 series designs, I'd suggest the Sentry 3.0 aim to support at least Nvidia's x70 cards, even if that means bumping up the total volume a little and/or toying with the internal space for VGAs to accommodate bigger ones. I know some would argue with my point, but I think Nvidia's x70 line of GPUs is the sweet spot for what the Sentry aims to be: a contender to consoles in people's living rooms, in plain view next to the house's big screen.

Btw, are you investigating whether it's even possible to simplify the design (fewer bends, a thinner structure, more screws) to cut some manufacturing costs and apply some of those savings to the retail price? Is something like that even possible without sacrificing too much structural strength?

(Of course, this is all a "pour parler" post. I know I'm not entitled to any decision on the Sentry 3.0's final design; it's your product and it's up to you to decide what you want to do with it. I'm only throwing ideas around here and having a nice talk about it with you guys; it's not every day we get the chance to talk with the makers while a product is actually in its design process 😉)
 
Last edited:
  • Like
Reactions: sos and SaperPL

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
472
883
Btw, are you investigating whether it's even possible to simplify the design (fewer bends, a thinner structure, more screws) to cut some manufacturing costs and apply some of those savings to the retail price? Is something like that even possible without sacrificing too much structural strength?
Yes, we're planning exactly that, both for the manufacturing costs/risk reduction and for maintenance ergonomics.
 
  • Like
Reactions: LeChuck81 and sos

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
472
883
So, 3090 Tis are out. Are you planning to make room for a FOUR SLOT (actually, the EVGA FTW3 is only 3.75 slots) 500W VGA in 3.0?

We can design a 3D-printed bracket that will let you attach a bucket or a wooden crate to handle it on the side. Is that okay with you? :D

In all seriousness though, going above 2.5 slot width doesn't make sense for a case like this; we don't have enough induced airflow in the GPU compartment for such cards. We can joke about it, but in the end it's the mid-range and lower GPUs that we'll be interested in supporting.

Also, let's wait and see what UE5 games do with direct storage/infinite detail or whatever it's called (I don't remember off the top of my head).
 

LeChuck81

SFF Lingo Aficionado
May 6, 2019
129
36
We can design a 3D-printed bracket that will let you attach a bucket or a wooden crate to handle it on the side. Is that okay with you? :D

I would actually be inclined towards a solution like that! 🤣

In all seriousness though, going above 2.5 slot width doesn't make sense for a case like this; we don't have enough induced airflow in the GPU compartment for such cards. We can joke about it, but in the end it's the mid-range and lower GPUs that we'll be interested in supporting.

That bracket supporting 6x 40 mm fans is a good start for induced airflow, imo. If it could somehow be integrated into the 3.0 design, it would be a great addition.
Other than that, unfortunately, at least on the Nvidia side, we haven't seen perf/watt progress in ages; all they kept doing was bumping up the power usage. Let's hope AMD (being the manufacturer of choice for console makers) will invest more in this department. Ryzen is a good example of this, being less energy-intensive than the Intel equivalent (unlocked i5s are 125W CPUs, while R5s are 65W CPUs). The same goes for GPUs, with RDNA2 being more conservative on the energy side than the Nvidia equivalent. Then again, Nvidia GPUs have much more RT- and AI-dedicated hardware per core than AMD, which probably explains the difference in power consumption. I expect that RDNA3, if it even gets close to Nvidia's solutions in terms of RT and AI, will be more energy-intensive. 😥

Also, let's wait and see what UE5 games do with direct storage/infinite detail or whatever it's called (I don't remember off the top of my head).

Yep, it's "DirectStorage", at least in DirectX, but I don't see how that could affect GPU power consumption. It can enhance graphics once it becomes an integrated, prerequisite part of game development (and not an option you can toggle on and off), but I don't see hardware requirements going down because of it, only graphics enhancements.
 

SaperPL

Master of Cramming
DR ZĄBER
Oct 17, 2017
472
883
That bracket supporting 6x 40 mm fans is a good start for induced airflow, imo. If it could somehow be integrated into the 3.0 design, it would be a great addition.
We don't want to make a separate piece that not everyone will use but everyone would pay for, and also a part that could get lost or misplaced.
Ryzen is a good example of this, being less energy-intensive than the Intel equivalent (unlocked i5s are 125W CPUs, while R5s are 65W CPUs)
I would be moderate with such statements, as stock 65W TDP Ryzen CPUs have an 88W power limit. Similarly, I remember that the R9 Nano, which had a TDP of 175W, could pull 450W momentarily, but it was designed with performance and efficiency stages in the pipeline that were somehow supposed to average out to that base TDP.

Right now we're in a situation where TDP, or whatever vendors call it these days, is not a metric of how much heat the cooler needs to handle, but the averaged amount of power consumed, statistically, for the sake of environmental regulations, I think. Anyway, the result is that we have an Intel CPU with a 95W "TDP" that can pull 150W, or let's say 195W, when it needs performance, while the average over an hour or 24 hours of use fits within those 95W. But while you're in that performance state pulling maximum power, if it lasts long enough, a cooler matching the TDP exactly may not be enough. The 5600X should be a good example of this, where putting a good cooler on the chip gives you slightly more CPU performance. I believe that if you made a similar comparison on a "95W TDP" Intel i7/i9 chip and compared the copper-core Intel box cooler against something rated for 200W, you'd see a significant difference.
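
To put rough numbers on it, here's a minimal sketch using the commonly cited stock rules of thumb (AMD AM4's PPT is about 1.35x the rated TDP; Intel boosts at PL2 for roughly tau seconds before settling to PL1); the specific wattages below are illustrative, and exact values vary by board/BIOS:

```python
# Sketch of how vendor "TDP" relates to what the cooler actually sees.
# The 1.35x AMD PPT factor and the Intel PL1/PL2/tau model reflect the
# commonly documented stock behaviour; exact values vary by board/BIOS,
# so treat the numbers as illustrative.

def amd_ppt_watts(tdp_watts: float) -> float:
    """Stock AM4 package power limit is roughly 1.35x the rated TDP."""
    return 1.35 * tdp_watts

def avg_draw_watts(pl1: float, pl2: float, tau_s: float, load_s: float) -> float:
    """Average draw over a load burst: PL2 for up to tau seconds, PL1 after."""
    boosted_s = min(load_s, tau_s)
    return (pl2 * boosted_s + pl1 * (load_s - boosted_s)) / load_s

print(amd_ppt_watts(65))                 # -> 87.75, the "88W limit" above
print(avg_draw_watts(95, 195, 56, 60))   # a 60s burst averages ~188W, not 95W
print(avg_draw_watts(95, 195, 56, 3600)) # an hour of load settles near PL1
```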

Yep, it's "DirectStorage", at least in DirectX, but I don't see how that could affect GPU power consumption. It can enhance graphics once it becomes an integrated, prerequisite part of game development (and not an option you can toggle on and off), but I don't see hardware requirements going down because of it, only graphics enhancements.

DirectStorage was only one part of the equation here; the second part is Nanite, the virtualized micropolygon geometry. You can't see the consequences of this yet, but they will come. If you can control exactly how many polygons you render live in each frame, and where, without objects popping in and out of the scene between their LODs (Level of Detail configurations), it means two things: you're not wasting any performance if you want it to look good, because performance goes where you need it, and more importantly, the game can control this dynamically based on how fast your GPU can cope with it.
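
As a deliberately simplified sketch of that idea (this is not Nanite's actual algorithm; the structure and numbers are invented purely to illustrate detail following the frame budget):

```python
# Toy model of budget-driven geometry detail, in the spirit of the idea
# above. This is NOT Nanite's real algorithm; the names and numbers are
# invented purely to illustrate "detail follows available GPU headroom"
# with small steps between representations instead of visible LOD pops.

from dataclasses import dataclass

@dataclass
class Cluster:
    triangle_counts: list[int]  # representations, finest -> coarsest

def pick_detail(clusters: list[Cluster], tri_budget: int) -> list[int]:
    """Pick a representation per cluster so the frame total fits the budget."""
    choice = [0] * len(clusters)  # start everything at finest detail

    def total() -> int:
        return sum(c.triangle_counts[i] for c, i in zip(clusters, choice))

    while total() > tri_budget:
        # Coarsen the currently heaviest cluster that can still step down.
        candidates = [i for i in range(len(clusters))
                      if choice[i] + 1 < len(clusters[i].triangle_counts)]
        if not candidates:
            break  # everything is already at its coarsest representation
        heaviest = max(candidates,
                       key=lambda i: clusters[i].triangle_counts[choice[i]])
        choice[heaviest] += 1
    return choice

# The budget would come from measured GPU frame times; 6000 is made up.
scene = [Cluster([8000, 2000, 500]), Cluster([12000, 3000, 800])]
print(pick_detail(scene, tri_budget=6000))  # -> [1, 1]
```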

The effect of this on the games industry will be one of two things. Either there are going to be games that need a lot more power, we'll need a generational leap, and after that things will stagnate; or, the scenario I'm more inclined to believe, a lot of games using it will look proper on existing and mid-range cards, because they will no longer waste as much performance to look good up close without things popping between LODs.

The open question for the first outcome is RTX vs Unreal's Lumen: if hardware-accelerated ray tracing through dedicated units wins, we'll end up needing expensive/power-hungry cards to run games at top quality even with a Nanite-like approach.
 

LeChuck81

SFF Lingo Aficionado
May 6, 2019
129
36
We don't want to make a separate piece that not everyone will use but everyone would pay for, and also a part that could get lost or misplaced.

I was wondering if it could somehow be integrated into the Sentry's frame, as opposed to being an optional piece.

I would be moderate with such statements, as stock 65W TDP Ryzen CPUs have an 88W power limit. Similarly, I remember that the R9 Nano, which had a TDP of 175W, could pull 450W momentarily, but it was designed with performance and efficiency stages in the pipeline that were somehow supposed to average out to that base TDP.

Right now we're in a situation where TDP, or whatever vendors call it these days, is not a metric of how much heat the cooler needs to handle, but the averaged amount of power consumed, statistically, for the sake of environmental regulations, I think. Anyway, the result is that we have an Intel CPU with a 95W "TDP" that can pull 150W, or let's say 195W, when it needs performance, while the average over an hour or 24 hours of use fits within those 95W. But while you're in that performance state pulling maximum power, if it lasts long enough, a cooler matching the TDP exactly may not be enough. The 5600X should be a good example of this, where putting a good cooler on the chip gives you slightly more CPU performance. I believe that if you made a similar comparison on a "95W TDP" Intel i7/i9 chip and compared the copper-core Intel box cooler against something rated for 200W, you'd see a significant difference.

Well, yeah, CPU (and GPU) TDPs are indeed more advisory than a real metric, that's for sure. Still, the 88W power limit of a 5600X is lower than the 12600K's 125W TDP. That's the angle from which I'd say the whole argument should be approached. The same goes for VGAs: TDP/TBP/TGP/whatever is not an exact metric of how much a card will consume (I saw Gamers Nexus' review of the EVGA 3090 Ti FTW3 Ultra, for example, and while it's a declared 450W TDP card, it easily goes up to 500W, even 530W if OC'ed), but it gives a rough idea of which VGAs will draw less power than others.

DirectStorage was only one part of the equation here; the second part is Nanite, the virtualized micropolygon geometry. You can't see the consequences of this yet, but they will come. If you can control exactly how many polygons you render live in each frame, and where, without objects popping in and out of the scene between their LODs (Level of Detail configurations), it means two things: you're not wasting any performance if you want it to look good, because performance goes where you need it, and more importantly, the game can control this dynamically based on how fast your GPU can cope with it.

The effect of this on the games industry will be one of two things. Either there are going to be games that need a lot more power, we'll need a generational leap, and after that things will stagnate; or, the scenario I'm more inclined to believe, a lot of games using it will look proper on existing and mid-range cards, because they will no longer waste as much performance to look good up close without things popping between LODs.

The open question for the first outcome is RTX vs Unreal's Lumen: if hardware-accelerated ray tracing through dedicated units wins, we'll end up needing expensive/power-hungry cards to run games at top quality even with a Nanite-like approach.

I see your point, and it's a fascinating point of view on the matter. But then I look at how AAA games are developed and how many different hardware configurations are out there, and I shake my head and conclude that (true) optimization will never be part of the equation during a game's development. Unfortunately.