SATA... Switch?

ChainedHope

Airflow Optimizer
Original poster
Jun 5, 2016
306
459
A random idea popped into my head today, and after some practiced Google-fu I found nothing. Does anyone know of a device where flipping a switch routes a connected SATA device from one SATA connection to another? Pretty much a SATA demultiplexer? If not, I'll have to play with making my own. It's for a silly idea that I just want to see lol.
 

chx

Master of Cramming
May 18, 2016
547
281
I do not get this. SATA is fundamentally an internal connection and ... are you planning to somehow switch from one machine to another? Why not USB then? Or network the ... out of your machines -- take this item from eBay and add this to the other; even Windows 10 supports NIC teaming, so that's 20 Gbit/s, very far above what SATA can do. Use network sharing and be happy...?
 

Kandirma

Trash Compacter
Sep 13, 2017
54
40
I'm not entirely sure what your goal is...would just a simple external drive dock with multiple bays work? I can't think of anything that is more like a 'switch'.

The 4 bay one here does have power switches to turn on/off instead of just docking/undocking, but it's not a 'switch' between one and another.

Something like that + a KVM device...

In general though this sounds like a situation for just a NAS.
 

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
Are we looking for creative new ways of triggering a BSOD?

All jokes aside, I doubt anything like this exists. Is this for some sort of "2 systems in one case" setup?
 

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
I think the OP is looking for the opposite of that - that's for 4 devices and one host, while the OP wants two hosts and one device.

Something like what the OP describes should be possible to make (some quick searches gave me hits for cheap SATA-switching chips), though you'd need one with I/O pins for a button to switch between hosts, and you'd likely need to design a custom PCB (and possibly program a microcontroller yourself), neither of which would be easy.
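For illustration, the control logic such a part would need is tiny; here's a hedged sketch in Python with the hardware I/O stubbed out (the 2:1 mux with a single SEL line, and all the names here, are assumptions for illustration, not a specific chip):

```python
# Sketch of button-driven select logic for a hypothetical 2:1 SATA mux.
# On a real microcontroller, on_tick() would read a GPIO button input and
# drive the mux's SEL pin; here those are plain ints so the logic is testable.
class MuxController:
    def __init__(self):
        self.sel = 0           # 0 = route drive to host A, 1 = host B
        self._last_button = 0  # previous button sample, for edge detection

    def on_tick(self, button_level: int) -> int:
        # Toggle only on the rising edge of the button (debouncing is
        # assumed to happen in hardware or an earlier filter stage).
        if button_level and not self._last_button:
            self.sel ^= 1      # flip the mux select line
        self._last_button = button_level
        return self.sel

mux = MuxController()
print(mux.on_tick(0))  # 0 -- idle, host A
print(mux.on_tick(1))  # 1 -- press: switch to host B
print(mux.on_tick(1))  # 1 -- held: no change
print(mux.on_tick(0))  # 1 -- released: still host B
print(mux.on_tick(1))  # 0 -- next press: back to host A
```

The edge detection matters: without it, holding the button would flap the select line every tick, which a drive mid-transfer would not enjoy.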
 

ChainedHope

Airflow Optimizer
Original poster
Jun 5, 2016
306
459
It was just an idea. The more I think about it the worse it sounds, but it's also pretty cool from a "why not" standpoint.

Basically, a way to have a high-end system work as a travel PC or as a high-performance server. When the high-end system is in "travel mode", all of the devices are connected to a low-power system that only runs the essentials (NAS and home automation services), leaving the main system with an M.2 SSD and a single HDD. When in server mode, the low-power system is gracefully shut down, the OS and mass storage are swapped over to the main system, and it is powered on. Basically giving me the power to run my more demanding tasks (encoding, virtualization) on a better server, while also giving me the option to split my systems into a low-power essentials server and a high-power travel PC.

It's a pretty silly idea, but it sounded neat at the time. I even went as far as sketching out case ideas with a physical connection bus to pass all of the I/O through, and a contact switch that, when the two case halves were connected, would send a control signal to do the shutdown and the device switches automatically.

(I do have the know-how to create this sort of device, but time is a limiting factor with my current work, and this would definitely be a long project if I had to build it myself.)
 

fboost

Efficiency Noob
May 15, 2018
7
1
SuperMicro has (or I should say had) a 2.5" hot-swap storage kit that has two SAS SFF-8484 ports. These can be used to connect in cascade or in a multipath configuration as a failsafe. Maybe it could be used in your scenario? I have no idea, but a SAS backplane or SAS expander *might* be able to do what you want?
 

Kandirma

Trash Compacter
Sep 13, 2017
54
40
So, would you have two full mobos, GPUs, RAM, etc. in the case?

After your description I'm even more confused.

Why not just dual-boot: dedicate your M.2 to a partition for each boot setup, with the in-case storage mounted as appropriate, then get a mobo with a dual BIOS that you can set up with disabled PCIe lanes and an under-volted chipset so you're not pumping unneeded power into components?
 

ChainedHope

Airflow Optimizer
Original poster
Jun 5, 2016
306
459
After your description I'm even more confused.

Why not just dual-boot: dedicate your M.2 to a partition for each boot setup, with the in-case storage mounted as appropriate, then get a mobo with a dual BIOS...

The basic idea is a hybrid system: a dock containing a low-power, low-cost PC running services, but when the main system is docked, the low-power system is turned off and its resources are routed to the main system. I use unRAID for NAS, Docker, and virtualization, and with some tweaking it could support this bizarre setup.

A dual-BIOS setup doesn't work, as I'd still need to bring all of the components when I travel. Maybe it would help if I explained the layout.

The dock: a low-power mATX (think Athlon) system with 8GB of RAM connected to 6x 10TB hard drives. This is the bare NAS that will be running when the main system is disconnected. Vague sketches indicate a 22L size.

The main system: a mITX AM4 build with a GPU, 32+GB of memory, and one M.2 SSD. Vague sketches show 10-14L.

If I made one system, I'm looking at over 30L. The docking approach lets me have a more portable system at 14L while leaving my NAS at home where it's being used, with the ability to drop in the main system to run more demanding tasks using the same storage and unRAID OS (one-time configuration). An example is running multiple virtual machines, video encoding, and video streams at the same time, which happens more often than you'd think.

Of course I could build two systems and keep them separate, but this idea sounded fun when I was thinking about it, and I've never seen anything like it (probably for good reason).
 

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
The basic idea is a hybrid system: a dock containing a low-power, low-cost PC running services, but when the main system is docked, the low-power system is turned off and its resources are routed to the main system. I use unRAID for NAS, Docker, and virtualization, and with some tweaking it could support this bizarre setup.

A dual-BIOS setup doesn't work, as I'd still need to bring all of the components when I travel. Maybe it would help if I explained the layout.

The dock: a low-power mATX (think Athlon) system with 8GB of RAM connected to 6x 10TB hard drives. This is the bare NAS that will be running when the main system is disconnected. Vague sketches indicate a 22L size.

The main system: a mITX AM4 build with a GPU, 32+GB of memory, and one M.2 SSD. Vague sketches show 10-14L.

If I made one system, I'm looking at over 30L. The docking approach lets me have a more portable system at 14L while leaving my NAS at home where it's being used, with the ability to drop in the main system to run more demanding tasks using the same storage and unRAID OS (one-time configuration). An example is running multiple virtual machines, video encoding, and video streams at the same time, which happens more often than you'd think.

Of course I could build two systems and keep them separate, but this idea sounded fun when I was thinking about it, and I've never seen anything like it (probably for good reason).
Wait, so this is going in two separate cases, but ones made to interlock and connect? How are you going to make a connector that allows connecting the main system without also opening it and plugging in a wackton of cables from the second case? Are you planning on making your own docking connector? If I assume the main PC has its own power supply, the docking cable would "only" need to carry a bunch of SATA signals and power for your HDDs, which I guess would be doable, but ... very impractical. Six 3.5" HDDs, accounting for spin-up power draw, is 90W or so, i.e. ~7-8A at 12V. Doable, but it needs thick wire. Not to mention that any external connector hooked up to the HDDs' power plugs would be live at 12V outside of the case whenever the NAS was running.
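For what it's worth, that 90 W figure checks out as back-of-envelope arithmetic (the per-drive wattage is an assumed ballpark with spin-up headroom baked in, not a datasheet value):

```python
# Rough check of the docking-cable power estimate for six 3.5" HDDs.
# 15 W per drive on the 12 V rail is an assumption; real drives vary.
drives = 6
watts_per_drive = 15           # assumed W per HDD, incl. spin-up headroom
rail_voltage = 12              # V

total_watts = drives * watts_per_drive
total_amps = total_watts / rail_voltage

print(f"{total_watts} W, {total_amps:.1f} A")  # -> 90 W, 7.5 A
```

At 7.5 A continuous you're in thick-gauge-wire territory for any custom docking connector, which is the practical objection above.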


IMO, you'd be better off getting 10GbE cards for both systems, using a bifurcation riser for the main system and one of the spare PCIe slots in the NAS. It won't be as fast as local storage, but should be plenty quick, and far easier. Sure, the NAS would be powered at the same time as the main PC, but that would consume ... 20-30W more? Also, how would you get Windows to recognize HDDs formatted for unRAID?
 

Mortis Angelus

Airflow Optimizer
Jun 22, 2017
283
277
It was just an idea. The more I think about it the worse it sounds, but it's also pretty cool from a "why not" standpoint.

Basically, a way to have a high-end system work as a travel PC or as a high-performance server. When the high-end system is in "travel mode", all of the devices are connected to a low-power system that only runs the essentials (NAS and home automation services), leaving the main system with an M.2 SSD and a single HDD. When in server mode, the low-power system is gracefully shut down, the OS and mass storage are swapped over to the main system, and it is powered on. Basically giving me the power to run my more demanding tasks (encoding, virtualization) on a better server, while also giving me the option to split my systems into a low-power essentials server and a high-power travel PC.

It's a pretty silly idea, but it sounded neat at the time. I even went as far as sketching out case ideas with a physical connection bus to pass all of the I/O through, and a contact switch that, when the two case halves were connected, would send a control signal to do the shutdown and the device switches automatically.

(I do have the know-how to create this sort of device, but time is a limiting factor with my current work, and this would definitely be a long project if I had to build it myself.)


Please note that this is input/reflection from a person with absolutely no programming experience:

As I understand it, computers are not very fond of SATA devices being removed or plugged in on the go. You usually have to reboot the PC whenever you change something on the SATA interface. However, there are also fully hot-swappable drive bays, so my guess is that those have some chip that makes the SATA interface think there is always something connected, so that the drive essentially works like an external drive. I would thus imagine you could construct some kind of interface with a dual-chip design making two computers think they each have a SATA device plugged in, and then a switch that selects which PC the data stream from the drive should go to.

The question is, why would you ever need this instead of using e.g. Ethernet connections between the computers, as @Valantar and @chx suggest?

But I digress; these were just reflections from a non-engineer's point of view.
 

Valantar

Shrink Ray Wielder
Jan 20, 2018
2,201
2,225
Please note that this is input/reflection from a person with absolutely no programming experience:

As I understand it, computers are not very fond of SATA devices being removed or plugged in on the go. You usually have to reboot the PC whenever you change something on the SATA interface. However, there are also fully hot-swappable drive bays, so my guess is that those have some chip that makes the SATA interface think there is always something connected, so that the drive essentially works like an external drive. I would thus imagine you could construct some kind of interface with a dual-chip design making two computers think they each have a SATA device plugged in, and then a switch that selects which PC the data stream from the drive should go to.

The question is, why would you ever need this instead of using e.g. Ethernet connections between the computers, as @Valantar and @chx suggest?

But I digress; these were just reflections from a non-engineer's point of view.
Actually, that depends on the SATA controller (AFAIK). I've hot-swapped quite a few SATA drives over the years without ever using hardware specifically made for the task - but I've also tried it in systems where it didn't work. Some BIOSes have settings to allow or disallow SATA hot-swap, but I've seen it work in systems lacking that setting too. YMMV.
 

chx

Master of Cramming
May 18, 2016
547
281
SATA hot swap is part of the standard. There are two separate things involved, hot plug and hot removal, and both are defined in the standard. In fact, the very lengths of the power connector pins are defined to make hot plug possible: first, pins 4 and 12 (both ground) connect; in the second step, pins 5, 6 and 10 (ground), pin 7 (5V) and pin 13 (12V) connect to allow pre-charging; and only then do pins 8, 9, 14 and 15 connect at the end to allow full power draw. It's crafty.

Now, whether the Windows drivers support this is a whole other matter. I'd strongly recommend using http://mt-naka.com/hotswap/index_enu.htm to hot remove, and installing Devcon so you can simply run devcon /rescan after hot plug. Both can be done with different utilities, but these two have the wonderful property of actually working :D

For Linux, eh, read https://unix.stackexchange.com/q/43413/9452 and https://serverfault.com/q/5336/64874
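To make the Linux side concrete: the kernel exposes hot removal and rescan through sysfs. The `sdb`/`host1` names below are placeholders (check `lsblk` and `ls /sys/class/scsi_host` on your own box); this little helper just builds the one-liners, since actually writing those files needs root:

```python
# Sketch: build the sysfs one-liners Linux uses for SATA/SCSI hot-swap.
# Device and host names are examples only -- substitute your own.
def hotswap_commands(disk: str, host: str) -> dict:
    return {
        # Detach the drive cleanly before physically unplugging it:
        "remove": f"echo 1 > /sys/block/{disk}/device/delete",
        # After plugging a drive in, probe all channels/targets/LUNs
        # ('- - -' is the wildcard triple the scan interface expects):
        "rescan": f"echo '- - -' > /sys/class/scsi_host/{host}/scan",
    }

cmds = hotswap_commands("sdb", "host1")
print(cmds["remove"])
print(cmds["rescan"])
```

Run the printed commands as root; the linked Stack Exchange threads cover the same interface in more depth.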