Other M.2 riser card or cable

Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017
Hey guys, so I need some SFF ingenuity help :).

I am working on a new pfSense build, and my board has won the dumb award. It has an SSD slot that is in a horrid location. It will hold a 2242, but they say "The slot on the edge of the board allows for longer drives".

Why they say that, or do that, on a server board designed for a 1U case, where it doesn't even come close to working, is beyond me.


So what I need to do is flip the slot with some kind of card or cable that I can make a housing for.
Here is a pic of the slot:


So I want to go up slightly and backwards, over the SATA/Mini-SAS connectors, so that I can actually fit a 2280 drive. I need the riser/card to be a PCIe version, not mSATA.

Any ideas? Anyone done something similar?
 

Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017

Yep, exactly like those, but no order link? Scares me, lol.



That's it, but not for 40 dollars shipped from China; that guy has got to be crazy. It also states it supports 8 Gbps?? It's going to need to support 32, lol.

If it were a perfect solution, a rigid card setup, then maybe I'd pay 40, but for that? Ha, no way.


Honestly, I don't even need that much PCB or cable; I'd be happy with just a connector flipped the opposite direction from the one it plugs into. That would be the best solution.

Well, one that could be mounted in the 2242 slot and then run the other direction with a top-side M.2 connector, and a PCB to hold it, would be the most ideal, lol.
 

Thestarkiller32

Cable-Tie Ninja
Aug 13, 2017
Yep, exactly like those, but no order link? Scares me, lol.



That's it, but not for 40 dollars shipped from China; that guy has got to be crazy. It also states it supports 8 Gbps?? It's going to need to support 32, lol.

If it were a perfect solution, a rigid card setup, then maybe I'd pay 40, but for that? Ha, no way.


Honestly, I don't even need that much PCB or cable; I'd be happy with just a connector flipped the opposite direction from the one it plugs into. That would be the best solution.

Well, one that could be mounted in the 2242 slot and then run the other direction with a top-side M.2 connector, and a PCB to hold it, would be the most ideal, lol.

In the description, he said that he can vary the length and the style of the connector (hand-made)... and it was only an idea ;)

Have you considered a U.2 drive instead?
 

Bense

Minimal Tinkerer
Jun 28, 2017
Not sure if it helps, but I linked a bunch of wild M.2 'extenders' a while back in my build thread. You might find a cleaner, more elegant solution.

Yikes, those won't work. You'd likely need to find one with the U.2 connector oriented in this direction, but on a shorter card. :S

https://www.gigabyte.com/Motherboard/GC-M2-U2-MiniSAS#ov
 


Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017
In the description, he said that he can vary the length and the style of the connector (hand-made)... and it was only an idea ;)

Have you considered a U.2 drive instead?

Yeah, I saw that, but without even an example of a price, I'm scared, lol :p.

I don't want a U.2 because I don't want the SATA cable, nor do I have room in the case for a 2.5" or 3.5" drive; I need that space for other things. Tiny 1U case :p.

If need be, I would rather go with mSATA than a U.2.
 

Thestarkiller32

Cable-Tie Ninja
Aug 13, 2017
Yeah, I saw that, but without even an example of a price, I'm scared, lol :p.

I don't want a U.2 because I don't want the SATA cable, nor do I have room in the case for a 2.5" or 3.5" drive; I need that space for other things. Tiny 1U case :p.

If need be, I would rather go with mSATA than a U.2.


512 GB
Read (max.) 560 MB/s
Write (max.) 460 MB/s

M.2 2242

SATA 6G
 

Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017


512 GB
Read (max.) 560 MB/s
Write (max.) 460 MB/s

M.2 2242

SATA 6G

I've got no clue what that is; your pic is missing, lol.

I know that M.2 2242s exist; my largest concern is their quality. That is why I wanted to go with industrial grade. This drive is going to be used in my business firewall, with lots of logging from 200+ clients, and I need it not to fail under tons of reads and writes.

It's going to be caching for the employee LAN and logging traffic for the guest LAN; this drive is going to be put through serious torture. So an off-brand MLC NAND drive with crap reliability ain't going to cut it. I don't really need the speed of NVMe; I need the reliability that comes with TLC NAND and enterprise drive features.

I will be working on implementing some kind of nightly drive dump to back everything up; however, I don't want a drive that I know is going to fail quickly. All the 2242s I find die in a few months in tablets and Chromebooks; they will be garbage in this server environment.
 

Thestarkiller32

Cable-Tie Ninja
Aug 13, 2017
I've got no clue what that is; your pic is missing, lol.

I know that M.2 2242s exist; my largest concern is their quality. That is why I wanted to go with industrial grade. This drive is going to be used in my business firewall, with lots of logging from 200+ clients, and I need it not to fail under tons of reads and writes.

It's going to be caching for the employee LAN and logging traffic for the guest LAN; this drive is going to be put through serious torture. So an off-brand MLC NAND drive with crap reliability ain't going to cut it. I don't really need the speed of NVMe; I need the reliability that comes with TLC NAND and enterprise drive features.

I will be working on implementing some kind of nightly drive dump to back everything up; however, I don't want a drive that I know is going to fail quickly. All the 2242s I find die in a few months in tablets and Chromebooks; they will be garbage in this server environment.

You know that SSDs are not meant for those kinds of workloads; they are meant for databases with read access only... if you log data traffic and user information 24/7, you should go with an HDD RAID like RAID 5/10 and, if necessary, an external NAS over 1/10 Gbit.

"...I need the reliability that comes with TLC NAND and enterprise drive features."

TLC is the worst kind of NAND; it has multiple data bits in one gate. That means that if it wants to write to one bit, it has to overwrite all the bits in the transistor instead of one individual cell in each transistor, which obviously burns down the number of write cycles.

NAND types:
SLC = single-level cell = 1 bit per transistor
MLC = multi-level cell = 2 bits per transistor
TLC = triple-level cell = 3 bits per transistor
 

Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017
You know that SSDs are not meant for those kinds of workloads; they are meant for databases with read access only... if you log data traffic and user information 24/7, you should go with an HDD RAID like RAID 5/10 and, if necessary, an external NAS over 1/10 Gbit.

"...I need the reliability that comes with TLC NAND and enterprise drive features."

TLC is the worst kind of NAND; it has multiple data bits in one gate. That means that if it wants to write to one bit, it has to overwrite all the bits in the transistor instead of one individual cell in each transistor, which obviously burns down the number of write cycles.

NAND types:
SLC = single-level cell = 1 bit per transistor
MLC = multi-level cell = 2 bits per transistor
TLC = triple-level cell = 3 bits per transistor

SSDs are 100% used for those kinds of workloads; most data centers have been moving to SSDs for a while, and the only advantage of platters at this point is price. SSDs, even in data center environments, are more reliable and last longer. I feel like you are underestimating SSD lifespans or overestimating my use case. If we look at the Intel 6000p (the best M.2 I have been able to find, and it still misses the mark :(.), its rated lifetime is 144 TBW, and 144 TB of log files would take a very, very long time.
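
To put a rough number on that, here is a quick back-of-the-envelope check; the 5 GB/day of log and cache writes is just an assumed figure for illustration, not a measurement from my setup:

```python
# Back-of-the-envelope endurance math. 144 TBW is the drive's rated limit;
# the 5 GB/day of log + cache writes is an assumed rate, not a measurement.
TBW_LIMIT_TB = 144
DAILY_WRITES_GB = 5

days = TBW_LIMIT_TB * 1000 / DAILY_WRITES_GB
print(f"{days:,.0f} days (~{days / 365:.0f} years) to exhaust the rated TBW")
# -> 28,800 days (~79 years) at 5 GB/day
```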

However, these log files are being constantly written and rewritten, which would keep a spinning platter constantly spinning, and that will cause its failure far sooner than you will hit 144 TBW of log files. RAID would be an option, except that we are talking about a 1U appliance server; RAID is simply not going to fit.

Having a NAS share makes sense, until we shoot back to the fact that it's a firewall, and if you host your firewall OS on a NAS, you might as well just throw the firewall out the window, as that is a serious security issue.

Which then leads us to a few other considerations. RAID does nothing for small files, absolutely zero. RAID increases read and write speeds of large files by striping the data across multiple drives; you cannot really stripe a 50-byte log entry.

By the same token, what actually makes SSDs faster with small files also protects against the endurance concern you mentioned, which is not really a problem anyway. The controller in an enterprise SSD stores the data in volatile memory until enough files have been gathered to write off to a block. This seriously reduces the wear, whereas a traditional drive would have to write out each file one by one as they are created.


All firewalls use SSDs, at least high-end ones, and let's be honest, even commercial routers use flash memory.
I know where your concerns stem from; however, they are based on old drives and old information, and they no longer apply to today's drives. Again, look at newer articles: data centers are moving to solely SSDs, especially for the likes of firewalls and other appliance systems. (Granted, enterprise SSDs, which are different, don't seem to come in the M.2 form factor.)

Now there is a stipulation to all that: I need an enterprise SSD with a DRAM-backed controller that is designed for the purpose. It's not looking like I will be able to find that in M.2 yet, and in that case, my likely best option would be a hybrid of what you said.

So the SSD will hold the cache and the OS (until it goes to RAM); however, the swap file will be disabled, and logs will be exported to my NAS, where they will be more redundant anyway. I agree that on a consumer SSD not built for the purpose, the log files will disintegrate the drive.

Yes, I know that about TLC; I meant SLC, that was a mistype. I had a long weekend; new baby born on Friday. Anyway, given that it may be hard to find an SLC M.2, I will likely have to settle for MLC.
 

EdZ

Virtual Realist
May 11, 2015
RAID does nothing for small files, absolutely zero. RAID increases read and write speeds of large files by striping the data across multiple drives; you cannot really stripe a 50-byte log entry.
If pure small-file write speed is a concern, RAID 0 effectively doubles performance (each operation can be dispatched to a separate drive). If purely read speed is a concern, RAID 1 allows separate read operations per drive.
The controller in an enterprise SSD stores the data in volatile memory until enough files have been gathered to write off to a block. This seriously reduces the wear, whereas a traditional drive would have to write out each file one by one as they are created.
This is simply false: whether writes are cached to DRAM or to a NAND-based cache (e.g. on newer Samsung SSDs a chunk of TLC is dropped down to SLC to act as cache) is 100% down to controller design. It has nothing whatsoever to do with it being an 'enterprise' drive or not.

The point is moot though: you're dramatically overspeccing the drive for a very simple application. An enterprise-grade logging network appliance (e.g. like you'd see from Cisco, Juniper, etc.) won't have an 'enterprise grade' NVMe SSD inside to capture logs. It'll have (on older devices) a CompactFlash card, or an SD card or its internal soldered equivalent (eMMC). The OS will run from RAM, and the 'cold' OS will be stored on slow NAND or NOR flash.

If you want maximum compactness, then what you're looking for is a DOM (Disk On Module) that plugs straight into the SATA port. If you want one with 'Enterprise Grade' pixie dust, Supermicro make them.
 

Thestarkiller32

Cable-Tie Ninja
Aug 13, 2017
SSDs are 100% used for those kinds of workloads; most data centers have been moving to SSDs for a while, and the only advantage of platters at this point is price. SSDs, even in data center environments, are more reliable and last longer. I feel like you are underestimating SSD lifespans or overestimating my use case. If we look at the Intel 6000p (the best M.2 I have been able to find, and it still misses the mark :(.), its rated lifetime is 144 TBW, and 144 TB of log files would take a very, very long time.

However, these log files are being constantly written and rewritten, which would keep a spinning platter constantly spinning, and that will cause its failure far sooner than you will hit 144 TBW of log files. RAID would be an option, except that we are talking about a 1U appliance server; RAID is simply not going to fit.

Having a NAS share makes sense, until we shoot back to the fact that it's a firewall, and if you host your firewall OS on a NAS, you might as well just throw the firewall out the window, as that is a serious security issue.

Which then leads us to a few other considerations. RAID does nothing for small files, absolutely zero. RAID increases read and write speeds of large files by striping the data across multiple drives; you cannot really stripe a 50-byte log entry.

By the same token, what actually makes SSDs faster with small files also protects against the endurance concern you mentioned, which is not really a problem anyway. The controller in an enterprise SSD stores the data in volatile memory until enough files have been gathered to write off to a block. This seriously reduces the wear, whereas a traditional drive would have to write out each file one by one as they are created.


All firewalls use SSDs, at least high-end ones, and let's be honest, even commercial routers use flash memory.
I know where your concerns stem from; however, they are based on old drives and old information, and they no longer apply to today's drives. Again, look at newer articles: data centers are moving to solely SSDs, especially for the likes of firewalls and other appliance systems. (Granted, enterprise SSDs, which are different, don't seem to come in the M.2 form factor.)

Now there is a stipulation to all that: I need an enterprise SSD with a DRAM-backed controller that is designed for the purpose. It's not looking like I will be able to find that in M.2 yet, and in that case, my likely best option would be a hybrid of what you said.

So the SSD will hold the cache and the OS (until it goes to RAM); however, the swap file will be disabled, and logs will be exported to my NAS, where they will be more redundant anyway. I agree that on a consumer SSD not built for the purpose, the log files will disintegrate the drive.

Yes, I know that about TLC; I meant SLC, that was a mistype. I had a long weekend; new baby born on Friday. Anyway, given that it may be hard to find an SLC M.2, I will likely have to settle for MLC.
You could wait for the Intel Optane SSDs with 3D XPoint chips; they will have a far greater lifespan than conventional NAND designs.

Have you also considered a RAM disk as a caching option?
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
Jul 19, 2015
Why not move your logging off your firewall altogether? syslogd and rsyslog both offer robust remote logging functionality, relieving your firewall of the task of having to keep any logs on any form of local media at all.
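
For illustration, here is a minimal sketch of what a remote syslog datagram looks like on the wire (RFC 3164 over UDP). The log-host address and the "filterlog" tag are placeholders; in practice you would just point syslogd/rsyslog (or pfSense's remote logging setting) at the log host in its config:

```python
# Minimal sketch of a remote syslog message (RFC 3164 framing over UDP).
# The host/port and the "filterlog" tag are placeholders, not a real setup.
import socket
from datetime import datetime

def send_syslog(message: str, host: str = "192.0.2.50", port: int = 514,
                facility: int = 16, severity: int = 6) -> None:
    """Send one syslog datagram (facility 16 = local0, severity 6 = info)."""
    pri = facility * 8 + severity          # PRI field = facility*8 + severity
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    packet = f"<{pri}>{timestamp} pfsense filterlog: {message}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet.encode("ascii", errors="replace"), (host, port))

send_syslog("block in on em0: 10.0.0.5 -> 192.0.2.8")
```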
 

Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017
If pure small-file write speed is a concern, RAID 0 effectively doubles performance (each operation can be dispatched to a separate drive). If purely read speed is a concern, RAID 1 allows separate read operations per drive.

This is simply false: whether writes are cached to DRAM or to a NAND-based cache (e.g. on newer Samsung SSDs a chunk of TLC is dropped down to SLC to act as cache) is 100% down to controller design. It has nothing whatsoever to do with it being an 'enterprise' drive or not.

The point is moot though: you're dramatically overspeccing the drive for a very simple application. An enterprise-grade logging network appliance (e.g. like you'd see from Cisco, Juniper, etc.) won't have an 'enterprise grade' NVMe SSD inside to capture logs. It'll have (on older devices) a CompactFlash card, or an SD card or its internal soldered equivalent (eMMC). The OS will run from RAM, and the 'cold' OS will be stored on slow NAND or NOR flash.

If you want maximum compactness, then what you're looking for is a DOM (Disk On Module) that plugs straight into the SATA port. If you want one with 'Enterprise Grade' pixie dust, Supermicro make them.

Oh yeah, I agree that I am overspeccing the drive for just the firewall aspect. However, I am not for the Squid caching; the SSD is for the caching more so than the logs. And there's the fact that I do not have room for a traditional SSD.

That said, I had very much considered jØrd's idea, and may very well go down that route. I ordered a cheap SSD from eBay; it's an older 16 GB SanDisk model, and it's a 2242, so it will fit the board perfectly.

Now there is an issue: pfSense has dropped support for NanoBSD as of 2.4; only full amd64 installs are supported. So that is out (though pfSense still loads into RAM no matter what).

So if I did go the other route, it could be done. I could run pfSense off the 16 GB SSD and use it like a CF card; I could then set the logs to go to RAM as well, and then have them dumped to my NAS nightly for long-term storage if wanted.
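
Something like this sketch is all the nightly dump would need, assuming the NAS is already mounted at /mnt/nas/logs (that path, and scheduling it with the cron package, are assumptions on my part):

```python
# Sketch of a nightly log dump: archive /var/log and drop it on a NAS mount.
# /mnt/nas/logs is an assumed mount point; schedule e.g. with cron at 02:00.
import tarfile
from datetime import date
from pathlib import Path

LOG_DIR = Path("/var/log")
NAS_DIR = Path("/mnt/nas/logs")

def dump_logs() -> Path:
    NAS_DIR.mkdir(parents=True, exist_ok=True)
    archive = NAS_DIR / f"fw-logs-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(LOG_DIR, arcname="log")   # compress the whole log tree
    return archive

if __name__ == "__main__":
    print(f"wrote {dump_logs()}")
```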

I think I can move the Squid cache to my NAS as well, and if I can, that would be much better anyway, as I can cache more, for longer, on faster interfaces (10 Gb ports on the NAS), and the NAS does have a RAID array. (Again, I know about RAID 0, lol; it does nothing for small files, i.e. 4K, only for large sequential data.)

I may actually go that route, even though with the current cost of DDR4 the SSD is cheaper :p.

Also, I think you confused what I meant about the enterprise drives. For one, the logs are smaller than a block; the SSD doesn't care, it would write each log to a block, and then that block's write endurance would go down by 1, even though the full block wasn't even used. The DRAM SSDs made for this would wait until there were enough logs to take up an entire block, or more, and then write them. On top of that, the main enterprise drive feature I was after was power loss protection, to ensure that the data is not lost.
 

EdZ

Virtual Realist
May 11, 2015
it would write each log to a block, and then that block's write endurance would go down by 1, even though the full block wasn't even used.
That's not how SSD endurance works. NAND is not degraded by write operations, but by erase operations. You can perform multiple sub-block writes to a block without needing to erase it, but you need to erase it before you can modify any contents in the block. 'Write amplification' occurs when you need to erase a block to 'overwrite' part of it, which then requires data to be shuffled to another block, which itself needs to be overwritten (incurring another erase operation), etc. If you're not doing any rewriting or deleting, you can write an entire drive's worth of data to an empty SSD without incurring a single erase operation.
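
If it helps, here is a toy model of that accounting; the block and log sizes are illustrative only, not real NAND geometry:

```python
# Toy model: appends into free space cost no erases; modifying data in a
# block forces an erase. Sizes are illustrative, not real NAND geometry.
BLOCK_SIZE = 4096   # bytes
LOG_SIZE = 100      # bytes per log entry

class Block:
    def __init__(self) -> None:
        self.used = 0
        self.erases = 0

    def append(self, nbytes: int) -> bool:
        """Sub-block write into empty space: no erase required."""
        if self.used + nbytes > BLOCK_SIZE:
            return False
        self.used += nbytes
        return True

    def modify(self) -> None:
        """Changing existing contents: the whole block is erased first."""
        self.erases += 1
        self.used = 0

blk = Block()
logs = 0
while blk.append(LOG_SIZE):
    logs += 1
print(f"{logs} logs appended with {blk.erases} erases")   # 40 logs, 0 erases
blk.modify()
print(f"one in-place edit later: {blk.erases} erase")     # 1 erase
```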
 

Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017
That's not how SSD endurance works. NAND is not degraded by write operations, but by erase operations. You can perform multiple sub-block writes to a block without needing to erase it, but you need to erase it before you can modify any contents in the block. 'Write amplification' occurs when you need to erase a block to 'overwrite' part of it, which then requires data to be shuffled to another block, which itself needs to be overwritten (incurring another erase operation), etc. If you're not doing any rewriting or deleting, you can write an entire drive's worth of data to an empty SSD without incurring a single erase operation.

I didn't know it was the erase; good to know. What I said still applies, though, right?

A block is 512 bytes, a log file is 100 bytes, and if it is erasing a block to write a single log, that equals more erasing.

At any rate, I have decided on a drive, and with its 3,880 TBW lifespan, I think all will be well :p. It's a Swissbit SSD made for the purpose; it's SLC, and it's expensive, lol. It is, however, made for enterprise write use.

It again also has the features I need, like not being in my way in the tiny 1U case, power loss protection (which you are not going to find on a 2.5-inch hard drive), etc., etc.

Though they do have a cheaper model that is MLC, with the highest endurance I have seen, 600 TBW for a 60 GB SSD, and all the features of the SLC version for much less money; I may give that a shot.

Both drives are made by Swissbit, and are pretty expensive.
 

EdZ

Virtual Realist
May 11, 2015
A block is 512 bytes, a log file is 100 bytes, and if it is erasing a block to write a single log, that equals more erasing.
Say we're starting with a totally full drive, but one where any block can be overwritten (i.e. you've filled the drive with logs, but have 'deleted' them in the OS and not given the drive time to run TRIM yet). You have a full 4 KB block and a 100-byte log. The block is erased once, and you now have an empty 4 KB block. You can now write about 40 of those 100-byte log files to the block before it is full, and you do not need to erase it again just to write to the empty area.
If, however, you wanted to change a single bit in a single one of those written logs, then you need to erase the block and re-write it all over again with the 'new' version (in reality, drives all implement wear-levelling algorithms and will instead just write all the data to a fresh block, and merely mark the 'old' block for later overwriting).
Having a DRAM cache does nothing to actually affect this write endurance, except for the edge case where the host happens to write a block, then modifies it within the tiny time window (nanoseconds to microseconds) between the data being written to the DRAM and it being offloaded to the NAND. The presence of a DRAM cache is to increase write speed: both to act as a direct write cache (e.g. with 1 GB of DRAM, your first 1 GB of writes will be limited by DRAM speeds rather than NAND speeds) but also to allow intermittent NAND writes to be buffered so they can be spread amongst all of the NAND controller's interface channels rather than only running one channel at a time (which also makes the controller more power efficient by reducing wake time).
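
A rough sketch of that channel-spreading idea, if it helps picture it; four channels and 4 KB pages are made-up numbers, not any real controller:

```python
# Sketch of write buffering: pages accumulated in "DRAM" are dealt out
# round-robin across NAND channels so they can program in parallel.
# Four channels and 4 KB pages are assumed numbers, not a real controller.
from collections import defaultdict

N_CHANNELS = 4
PAGE_SIZE = 4096

def flush(buffered: bytes) -> dict:
    """Split buffered data into pages and spread them across channels."""
    per_channel = defaultdict(list)
    pages = [buffered[i:i + PAGE_SIZE]
             for i in range(0, len(buffered), PAGE_SIZE)]
    for n, page in enumerate(pages):
        per_channel[n % N_CHANNELS].append(page)   # round-robin dispatch
    return per_channel

queued = flush(b"x" * (PAGE_SIZE * 10))            # ten buffered pages
for channel, pages in sorted(queued.items()):
    print(f"channel {channel}: {len(pages)} page(s) written in parallel")
```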
 

Cyber Locc

Caliper Novice
Original poster
Aug 16, 2017
Say we're starting with a totally full drive, but one where any block can be overwritten (i.e. you've filled the drive with logs, but have 'deleted' them in the OS and not given the drive time to run TRIM yet). You have a full 4 KB block and a 100-byte log. The block is erased once, and you now have an empty 4 KB block. You can now write about 40 of those 100-byte log files to the block before it is full, and you do not need to erase it again just to write to the empty area.
If, however, you wanted to change a single bit in a single one of those written logs, then you need to erase the block and re-write it all over again with the 'new' version (in reality, drives all implement wear-levelling algorithms and will instead just write all the data to a fresh block, and merely mark the 'old' block for later overwriting).
Having a DRAM cache does nothing to actually affect this write endurance, except for the edge case where the host happens to write a block, then modifies it within the tiny time window (nanoseconds to microseconds) between the data being written to the DRAM and it being offloaded to the NAND. The presence of a DRAM cache is to increase write speed: both to act as a direct write cache (e.g. with 1 GB of DRAM, your first 1 GB of writes will be limited by DRAM speeds rather than NAND speeds) but also to allow intermittent NAND writes to be buffered so they can be spread amongst all of the NAND controller's interface channels rather than only running one channel at a time (which also makes the controller more power efficient by reducing wake time).

Okay, I get what you're saying; good things to know, thank you very much :).

Since you seem to know a lot about SSDs, do you think an Intel 3500 would be good enough for pfSense writing a ton of logs? It's got 200 TB written endurance for the 120 GB model, is cheaper than the 60 GB Swissbit, and has better power loss protection.

The SLC is probably the best option, but I feel like that is overkill. Suricata will be making a ton of logs, but I don't feel like it will make that many (200 TB) in a few years.

On top of that, I won't be using anywhere near the 120 GB, so that should also help increase the number of writes, as I will use like 20-30 GB tops. That is, if I move my Squid cache to the NAS, which I likely will, as I might as well take advantage of my 10 Gb NAS NICs when accessing the cache.