
Anandtech Samsung SM951 M.2 PCIe SSD

Anandtech posted their review of the NVMe and AHCI editions, although we should still be mindful that the NVMe driver in Windows 8.1 is being blamed for sub-optimal performance. It is expected to perform better with the Windows 10 driver.

Availability:
The SM951-NVMe is an OEM part, meaning that availability is very restricted. The drive is listed by a handful of online retailers, but none of them seem to have it in stock yet. RamCity is expecting stock in mid-July, but told us that even that is uncertain because its distributors are still saying that the NVMe version is in sampling stage with no schedule for high volume availability. We got our 256GB sample directly from Samsung, hence the early access, as it seems that there is no way to buy the SM951-NVMe at this point. I will provide an update when I hear more about the availability.

Part of the conclusion:
In terms of performance, the NVMe version of the SM951 offers an upgrade over its AHCI sibling. The average data rate (i.e. large IO performance) isn't dramatically better compared to the AHCI version, but when it comes to small IO latency the SM951 and NVMe in general show their might. Typically the NVMe version offers about 10-20% improvement in average latency over the AHCI version, which is a healthy boost in performance given that the two utilize identical hardware.


If I had to post one image, it would be the above one. While the drive is still handicapped by the crude Microsoft driver, it has amazingly low latency and high random read/write performance, which is far more relevant to real-world performance than sequential throughput.
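As a back-of-the-envelope illustration of why that latency gap matters (the microsecond figures below are assumed purely for illustration, not measurements from the review): at queue depth 1, random IOPS is simply the reciprocal of per-IO latency, so a ~15% latency cut buys a proportional IOPS gain.

```python
# At queue depth 1, random IOPS = 1 / per-IO latency.
# The latency values below are illustrative assumptions,
# not figures from the AnandTech review.

def qd1_iops(latency_us: float) -> float:
    """Random IOPS achievable at queue depth 1 for a given per-IO latency."""
    return 1_000_000 / latency_us

ahci_latency_us = 100.0                    # assumed AHCI per-IO latency
nvme_latency_us = ahci_latency_us * 0.85   # ~15% lower, per the quoted 10-20% range

print(f"AHCI: {qd1_iops(ahci_latency_us):,.0f} IOPS")
print(f"NVMe: {qd1_iops(nvme_latency_us):,.0f} IOPS")
```

Sequential throughput barely moves in comparison, which is why the latency chart is the one worth posting.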

I can't wait for all these M.2 NVMe drives to show the world that small can also mean blazingly fast.
 

jtd871

SFF Guru
Jun 22, 2015
1,166
851
I'll still want spinners for long-term archival storage - like on my NAS - but I will likely be upgrading to/buying new computers with fast and "smaller" 256GB-512GB SSDs.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
I'm currently running a 1TB SSD as an OS/boot/main data drive, with a 3TB HDD for more rarely used data and a regular backup of the main drive. Tried running without a local HDD and just a NAS, but connecting to the network drives adds a big delay to boot, and the speed is noticeably slower even over gigabit ethernet (mainly due to random access latency). A 1TB M.2 SSD and a 2TB/3TB 2.5" HDD would be the smallest solution without using external devices.
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
I'm actually in the overlap category, however small. I'd still use a 512GB or 1TB M.2 for the OS, but as I don't use cloud storage for my data, the additional storage is a necessity. Not necessarily 4TB though - 2TB could potentially be feasible - the new lineup should drop prices on the current 2TB drives.

Yeah that's where I'm standing as well.

Right now an mSATA SSD + 2TB HDD is enough, but I guess when prices drop to the current level of that combination, I'd go for an ultra-fast M.2 PCIe x4 SSD + 2TB SATA SSD. I guess that will take quite some time, though.

I'm currently running a 1TB SSD as an OS/boot/main data drive, with a 3TB HDD for more rarely used data and a regular backup of the main drive. Tried running without a local HDD and just a NAS, but connecting to the network drives adds a big delay to boot, and the speed is noticeably slower even over gigabit ethernet (mainly due to random access latency). A 1TB M.2 SSD and a 2TB/3TB 2.5" HDD would be the smallest solution without using external devices.

Yeah, that's why I'm reluctant to go the NAS route. I view NAS as an additional layer in the memory hierarchy between regular HDDs and external tape archives: for stuff that is used so rarely that I can't justify wasting space on my internal drives, but that I want to keep around for whatever reason.

My guess is it fits more ultrabooks and tablets due to its decreased Z-height from going single-sided.

Good point, didn't think about that.
 

jtd871

SFF Guru
Jun 22, 2015
1,166
851
NAS is for mass storage and backup/archive. If you can't store the stuff you need locally day-to-day on half a terabyte, then you are truly a power user.
 

Phuncz

Lord of the Boards
Original poster
SFFn Staff
May 9, 2015
5,959
4,957
I have everything "file"-related on my NAS. 80-120 MB/s is plenty for files, unless your job depends on completion times that are bottlenecked by throughput and you need to do that time-critical work on your only SFF PC in the house. Even then, you'd still have 1-2TB 2.5" SSD options.
And if that's not enough, 10Gbit LAN and Thunderbolt (10/20/40 Gbit/s) exist.
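Those throughput figures are easy to sanity-check with a minimal sketch converting link speeds to usable MB/s; the ~6% protocol overhead (TCP/IP plus ethernet framing) is my own rough assumption:

```python
# Convert a network link speed (Gbit/s) into approximate usable MB/s.
# The ~6% protocol-overhead figure is a rough assumption for illustration.

def usable_mb_per_s(link_gbps: float, overhead: float = 0.06) -> float:
    """Approximate usable file-transfer throughput for a given link speed."""
    return link_gbps * 1000 / 8 * (1 - overhead)

for gbps in (1, 10, 40):
    print(f"{gbps:>2} Gbit/s link: ~{usable_mb_per_s(gbps):,.0f} MB/s usable")
```

A 1 Gbit/s link tops out around 117 MB/s in this model, right in line with the 80-120 MB/s seen in practice, while 10 Gbit/s and Thunderbolt-class links leave plenty of headroom.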

For a while now, SFF has not been limited by the number of storage bays, merely by old habits.
 

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
I suspect that the overlap between people tempted by that, and those tempted by these M.2 SSD's, is very small. You either feel that you need gobs of local storage, or you don't.

To wit, my one and only computer currently is a Macbook Pro with a 512GB SSD, and while it can be a bit cramped at times, it is never "not enough" for my uses - which are hardly lightweight, as they include development (both app and web), photography, image and basic video editing, virtualization, dual-booting (Win10), gaming... and on and on and on :p

What makes this feasible is the fact that I store all of my media in the cloud (if I'm not streaming it), and the fact that I simply don't need to access very much data at any one moment. However, my workload is very sensitive to the speed at which that information can be accessed, and I care about the volume and power implications of my storage. So, PCIe flash-based storage is unequivocally ideal.

The transition to universal solid state media is inevitable - the cost and capacity gains made by SSD's have continually outpaced those of HDD's, and SSD's are simply much more practical. However, a side effect of this transition - that I hope sticks around - is the perception that less, faster storage is probably better for most people than more, slower storage. I'd be willing to bet that people practically never touch over half of the data they store locally, and a consequence of this is uniformly decreased performance at a very high cost (on a relative basis). Plus, there's actually a lot of value in having not quite enough space for absolutely everything, since it forces you into the exercise of prioritizing what it is that you actually care about and need.
I personally don't see the need for 4TB or even 2TB of SSD storage. Having that much storage on your computer can still be useful, but only a portion of it gets accessed at any one time, so I'd much rather spend the same money on a much faster but smaller SSD and have a larger but slower hard drive (or even a cheap but huge SATA SSD). It's hard to notice any difference between a fast SSD and a slow HDD when opening My Documents.

Admittedly, though, I'm all over a 1TB M.2 drive since that actually is enough to hold all of my local files and lets me not have to worry about separate drive partitions or SSD caching, and lets me lose the drive bays. A one device solution is of use to me even if I don't need the extra speed for half of what's being stored.
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
I personally don't see the need for 4TB or even 2TB of SSD storage. Having that much storage on your computer can still be useful, but only a portion of it gets accessed at any one time, so I'd much rather spend the same money on a much faster but smaller SSD and have a larger but slower hard drive (or even a cheap but huge SATA SSD). It's hard to notice any difference between a fast SSD and a slow HDD when opening My Documents.

Admittedly, though, I'm all over a 1TB M.2 drive since that actually is enough to hold all of my local files and lets me not have to worry about separate drive partitions or SSD caching, and lets me lose the drive bays. A one device solution is of use to me even if I don't need the extra speed for half of what's being stored.

You're going to have to clarify. You don't see the need for 2TB or 4TB of SSD storage, but you've subsequently mentioned you'd buy a mechanical drive... or a cheap but huge SATA SSD, when the 2TB and 4TB SSDs referred to are themselves SATA SSDs.

Are you against large M.2 drives, or the notion of SSD storage for purposes of speed?

The benefits of SSD storage to me are:

1) No moving parts (mount anywhere)
2) Lower power usage
3) Smaller form factor

All are significant over a traditional 3.5" HDD. Especially where SFF is concerned.
 

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
You're going to have to clarify. You don't see the need for 2TB or 4TB of SSD storage, but you've subsequently mentioned you'd buy a mechanical drive... or a cheap but huge SATA SSD, when the 2TB and 4TB SSDs referred to are themselves SATA SSDs.

Are you against large M.2 drives, or the notion of SSD storage for purposes of speed?
I never said I was against larger-capacity M.2 drives - in fact, I mentioned that would be nice - and the idea of being against the speed of SSDs in general is ridiculous. I was commenting that there isn't actually much point to that much capacity on new, faster tech.

The speed of an SSD has minimal impact on pure storage, only on access, so it's much better to store general files on a cheaper drive and use the money saved to get faster access for the data that does benefit from the speed increase. Also, given that M.2 has a higher performance cap, I'd much rather see them focus on that for performance increases, and in the 2.5" space increase the storage per $ and possibly make it more price-competitive with HDDs for pure storage.

As for the cheap but huge SSD, the cheap part is more important. Again, when you have multiple TB of data, MOST of it isn't frequently accessed, so I see no reason to spend nearly a thousand dollars on it, which is what large-capacity drives seem to cost right now since everyone is so focused on maximum speed and using the latest, most expensive chips. I'd much rather have slower but cheaper drives and, again, move the money to improving the speed where it actually matters. It's much better price for performance.
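The budget-split argument can be made concrete with a quick sketch; the per-GB prices below are made-up placeholders purely to show the shape of the comparison, not real market prices:

```python
# Compare two storage budgets: one big fast SSD vs. a small fast SSD
# plus a large cheap bulk drive. All $/GB prices are made-up
# placeholders for illustration only, not real market prices.

def cost(capacity_gb: float, price_per_gb: float) -> float:
    """Total price of a drive at a given capacity and $/GB."""
    return capacity_gb * price_per_gb

single_big_ssd = cost(2000, 0.50)             # hypothetical fast-SSD $/GB
tiered = cost(500, 0.50) + cost(2000, 0.04)   # fast tier + cheap bulk tier

print(f"2TB fast SSD:        ${single_big_ssd:,.0f}")
print(f"500GB SSD + 2TB HDD: ${tiered:,.0f}")
```

Whatever the exact prices, the structure is the same: the money saved on rarely-touched bulk capacity can go toward speed where it is actually felt.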


The benefits of SSD storage to me are:

1) No moving parts (mount anywhere)
2) Lower power usage
3) Smaller form factor

All are significant over a traditional 3.5" HDD. Especially where SFF is concerned.
I think you are conflating HDD with 3.5", which isn't accurate. Sure, most hard drives are still 3.5" because there's not much reason to go smaller for desktop machines, but smaller form factors predate SSDs, and you can find large-capacity 2.5" hard drives for pretty cheap (tip: shop for laptop/notebook hard drives).

As far as your points are concerned:
1. Having moving parts doesn't affect that much; most hard drives can be mounted in pretty much the same spots and orientations as a 2.5" SSD. The real disadvantages of moving parts are the lack of shock resistance and slower access times.
2. There isn't THAT huge a difference really
3. Again, you can find HDDs in 2.5", so drive-bay SSDs have no advantage there, while M.2 does.

If we want to go as small as possible M.2 has a clear advantage over ANY drive that mounts in a bay.
Given that, larger-capacity M.2 would be nice for SFF where you have no bays, but where you do, I don't see much point to the higher-priced bay-mounted SSDs unless you are in money-no-object territory.
 

iFreilicht

FlexATX Authority
Feb 28, 2015
3,243
2,361
freilite.com
The benefits of SSD storage to me are:

1) No moving parts (mount anywhere)
2) Lower power usage
3) Smaller form factor

All are significant over a traditional 3.5" HDD. Especially where SFF is concerned.

So let's say we were talking about 2.5" HDDs instead of 3.5". How big is the difference really?

1) That's advantageous in a laptop, but if you use HDDs that are turned off, you don't gain anything. And why can you mount SSDs anywhere but HDDs not? Unless we're talking about thermal stress.
2) How big is the difference here in reality?
3) 2.5mm less thickness isn't of much concern for most people, even within SFF. I guess if you count 15mm HDDs, you get double the number of drives with SSDs in the same space, but as 9.5mm HDDs are available, I wouldn't count that. A great benefit of SSDs is that you can remove them from the chassis and have even smaller storage as a result, though.
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
I ignored 2.5" HDDs because they traditionally did not have the reliability of 3.5" drives but carried all of the same other problems - minus size. If that is no longer an issue, then sure, it's a viable option for most - but again, not for me.

Now BirdofPrey's post has me thinking of M.2's in Raid 1, though. :cool:
 

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
Still, I think drive-bay-mounted SSDs are no less a dead end than HDDs, as fast as M.2 seems to be progressing.

Now BirdofPrey's post has me thinking of M.2's in Raid 1, though. :cool:
Can that work? Does NVMe have support for such a thing?
PCIe 3.0 x4 isn't saturated just yet. I wonder if they could do RAID on a single M.2 module. Now THAT's cooking with gas.
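For what it's worth, NVMe itself doesn't define RAID; mirroring happens above the drives, in OS software RAID (e.g. Linux md) or in chipset/UEFI RAID as discussed below. Conceptually, RAID 1 duplicates every write to all members and can serve each read from any single member, which is why random reads can scale with member count while writes stay at single-drive speed. A toy model of those semantics, purely illustrative and nothing like a real driver:

```python
# Toy model of RAID 1 semantics: every write goes to all mirror members,
# each read is served from any one member (round-robin here). Purely
# conceptual -- real implementations (e.g. Linux md) live in the kernel.

class Raid1:
    def __init__(self, n_members: int, n_blocks: int):
        self.members = [[None] * n_blocks for _ in range(n_members)]
        self._next = 0  # round-robin read balancer

    def write(self, block: int, data: bytes) -> None:
        for member in self.members:        # writes hit every member
            member[block] = data

    def read(self, block: int) -> bytes:
        member = self.members[self._next]  # any single member suffices
        self._next = (self._next + 1) % len(self.members)
        return member[block]

array = Raid1(n_members=2, n_blocks=8)
array.write(3, b"hello")
print(array.read(3))  # served from member 0
print(array.read(3))  # served from member 1; identical data
```

Because the two reads above land on different members, two independent random reads can proceed in parallel on real hardware, while each write costs both drives.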
 

Vittra

Airflow Optimizer
May 11, 2015
359
90
Yes, but the chipset plays a role: such support has not been announced for X99 boards with NVMe as it has with Z170. In theory, PCI-E RAID should be possible even between the different connection methods (U.2/M.2/PCI-E). A lot of the people messing about with it now are running into issues, though, so it's going to take some time yet to work out the details - some UEFI fine-tuning to be done, I suspect.
 

|||

King of Cable Management
Sep 26, 2015
775
759
U.2 drives will be kept alive by the enterprise users. They will want serviceable drives on the front of the server that aren't easy to damage.

X99 is going to be out in the cold when it comes to M.2. It is limited to the DMI 2.0 interface and the PCI-e 2.0 High Speed I/O coming from that chipset, without support for Intel Rapid Storage Technology. With the high-end chips, you're going to have to use PCI-e 3.0 lanes directly from the processor, and so far boards aren't configured for multiple M.2 cards yet. But I've heard that BIOSes will be updated to allow them to support multiple drives... it's just a question of how many PCI-e lanes will need to be taken up to support them.
 

BirdofPrey

Standards Guru
Sep 3, 2015
797
493
U.2 drives will be kept alive by the enterprise users. They will want serviceable drives on the front of the server that aren't easy to damage.
Maybe. I'm interested to see how things play out.

Historically, enterprise users have gone with SCSI interfaces (when consumers transitioned to SATA, enterprise was transitioning to SAS) since it was designed with their needs in mind: higher reliability, better commands, and better queuing. It is also designed to be very expandable, for adding gobs more devices, and to carry the same protocol over other interfaces. Though it's newer than NVMe, the SCSI Trade Association has been developing SCSI Express (aka SCSI over PCIe) to accomplish the same goals as SATA Express/U.2 (namely, using a PCIe interface) while retaining the expandability and reliability of SCSI.

I think the issue is going to be with the protocol. NVMe was designed specifically because AHCI (which can also be used with the various PCIe interfacing methods) was designed with spinning hard disks in mind, and it adds too much overhead while limiting random access. The SCSI protocol has similar issues, so I think which protocol ultimately gets chosen might depend on what changes happen in that area, and on whether the overhead and other features are worth any speed loss.

It might be too early to tell.
 

|||

King of Cable Management
Sep 26, 2015
775
759
I'm looking at this pamphlet they have, and it isn't entirely clear what is going on, but they are using the SFF-8639 Mini-SAS connector with PCI-e x4 electrical connectivity, as U.2 drives do today. The confusing part to me is the connection back to the SAS controller... do they have a separate connection for that? The only benefit I see here is the interaction with other SAS drives in a large array.

Once flash gets close to parity with hard drives on the cost per byte basis, SAS and SATA will essentially become irrelevant, despite the respective organizations best efforts to try to prove themselves otherwise. The best way to handle data integrity in an NVMe flash array with PCI-e connectivity is to put the logic in PCI-e switches that manage numerous interfaces with the drive controllers. This minimizes protocol overhead, reducing latency and increasing throughput, helping applications with intensive I/O needs.