> I don't think the wasted CPU cycles is that much of a fair metric anymore. Yes, most people have more memory and CPU than they need, but that's because it's cheap to do so.
My point wasn't that "wasted cycles" are bad for users; it was that making further improvements to CPUs and memory for those users is a bad choice, because you're spending money on performance you know they won't ever notice. It's a waste of resources. Intel could make infinitely better CPUs and start selling them tomorrow, and those users would see almost no perceptible performance improvement from that upgrade, because it's the storage they're waiting on.
Comparatively, if you were able to install mass storage that performed like RAM, it would have a monumental impact on performance, transforming almost any interaction into a practically instantaneous one. In every instance where storage imparted a delay, there would no longer be one.
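Just to make that gap concrete, here's a rough back-of-the-envelope comparison. The latency figures are ballpark, order-of-magnitude assumptions on my part, not benchmarks of any particular device:

```python
# Ballpark access latencies (order-of-magnitude assumptions, not measurements)
latency_ns = {
    "DRAM":          100,          # ~100 ns
    "PCIe/NVMe SSD": 100_000,      # ~100 us random read
    "SATA SSD":      500_000,      # ~0.5 ms
    "HDD":           10_000_000,   # ~10 ms seek + rotation
}

for device, ns in latency_ns.items():
    print(f"{device:>14}: {ns:>12,} ns  (~{ns // latency_ns['DRAM']:,}x DRAM)")
```

Even if those numbers are off by a factor of a few, the shape doesn't change: the storage tier is thousands of times slower than memory, which is why closing that gap matters so much more than another CPU generation.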
I think you misunderstood the point I was trying to make (and perhaps made poorly). This is what you said:
> While I agree more speed is always nice to have, at this point, I can't say it's a priority. Stuff like backups and virus scans could benefit from more performance, but most everything else is bottlenecked by other concerns.
My counterpoint is simply that this is untrue for the majority of users. Most people are bottlenecked by their storage far more substantially than by any other component of their system, and you seem to agree with that when you say that most users aren't saturating their memory or CPU performance. So improving storage speed absolutely is a priority, because it's a massive problem! It's the primary bottleneck for the majority of users and tasks, and improving access speed to stored data is now the only way to meaningfully improve the speed of those things. And even though something like PCIe flash is far superior to HDDs, that performance delta only scratches the surface of the improvements we can realize with storage in general. The opportunity isn't PCIe-like speeds, it's memory-like speeds.
> So, wasted CPU and memory cycles aside, these faster speeds of these newer SSDs don't account for all that many real world seconds saved, especially when they seem to be prioritizing sequential read access rather than random access of which the latter generally accounts for more I/O.
That's why I'm saying people should invest in and demand faster storage.
That's why I said this:
> Only folks who are constantly saturating compute performance, ironically enough, should be less concerned about storage performance relative to compute. Everyone else should be demanding that storage get a lot faster before processors do, and Intel's investment in things like XPoint are a very public recognition by the company of this current dynamic in today's computers.
The response to "random reads are what need the improvements more" isn't to say "well ok, those aren't so big today so let's give up and throw that money at CPU cycles we won't be using or faster memory that won't make a real difference". It's to demand that storage be faster!
> Now I am not arguing that these faster SSDs are a bad thing, and they shouldn't focus on them; I'm arguing that it'd have more utility to consumers if a higher amount of that performance were available in affordable, mass storage
Not necessarily. If you took a single 16GB or 32GB die of XPoint storage (which would provide RAM-like speed), specifically tasked it with caching the data most frequently hit by random reads, and then paired that with the cheapest SATA SSD you could find, you'd probably end up with far superior performance when compared to spending the same amount of money on a PCIe SSD that has 3-5x the performance of that SATA drive.
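To illustrate what I mean (purely a sketch, nothing to do with how Intel would actually expose it, and all the names here are made up), the fast die would just act as a small read cache for hot blocks in front of the big, cheap drive:

```python
from collections import OrderedDict

class HotBlockCache:
    """Tiny read cache: a small, very fast tier (think a 32GB XPoint die)
    in front of a large, cheap backing store (think a SATA SSD).
    Purely illustrative; names and sizes are made up."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # slow, cheap, big
        self.capacity = capacity_blocks       # fast tier is small
        self.cache = OrderedDict()            # LRU order: oldest first

    def read(self, block_id):
        if block_id in self.cache:            # hot block: served at "RAM-like" speed
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing.read(block_id)    # cold block: pay the slow tier's latency once
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:   # evict the least-recently-used block
            self.cache.popitem(last=False)
        return data
```

The hot random reads, which are what you actually feel in day-to-day use, get absorbed by the small fast tier, so the cheap drive's weaker random-read performance matters far less.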
Just to put some numbers to this: if we say that a cheap SATA SSD costs $0.25/GB, a high-performance PCIe drive costs $0.50/GB, and (for the sake of argument) XPoint ends up costing an insane $7.50/GB (fifteen times more than PCIe flash), look at how much a 1TB solution costs you:

32GB XPoint ($240) + 1TB SATA SSD ($250) = $490
1TB PCIe SSD = $500
Even if XPoint is ludicrously expensive, and even if you have large storage needs, it can be a better perf-per-dollar investment when paired with cheaper/slower storage. So the immediate focus shouldn't be on getting it to price parity with current SSDs; it should be on getting it to market ASAP.
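If anyone wants to poke at the assumptions, the arithmetic is trivial to redo with your own $/GB figures; the prices below are the hypothetical ones from above, not real market prices:

```python
# Hypothetical $/GB figures from the comparison above -- swap in your own
SATA_PER_GB   = 0.25
PCIE_PER_GB   = 0.50
XPOINT_PER_GB = 7.50

tiered    = 32 * XPOINT_PER_GB + 1000 * SATA_PER_GB   # 32GB XPoint cache + 1TB SATA
pcie_only = 1000 * PCIE_PER_GB                        # 1TB PCIe SSD

print(f"XPoint cache + SATA SSD: ${tiered:,.0f}")     # $490
print(f"PCIe SSD only:           ${pcie_only:,.0f}")  # $500
```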
EDIT: The other thing, too, is that Intel may very well consider providing a cache of this sort as an option for their board partners to install directly on motherboards, and then provide either dedicated PCIe lanes for that pre-installed die or a whole new interface. That would neatly resolve any logical-interface or other issues/impediments, since they could control the entire solution... as well as boost their platform and provide a pretty serious reason for folks to upgrade to a new generation of silicon.