CPU More info on Qualcomm's Centriq 2400!!

Do you think that we will see Qualcomm CPUs on desktop?

  • Yes: 3 votes (42.9%)
  • No: 4 votes (57.1%)
  • Total voters: 7

AleksandarK

/dev/null
Original poster
May 14, 2017
703
774
Presented at Hot Chips 29, the Qualcomm Centriq 2400 had more details announced yesterday.
It features:
  • 10 nm design
  • 48 cores
  • 60 MB of L3 cache
  • 12 MB of L2 cache
  • Six-channel DDR4 memory running at 2667 MHz
  • Eight SATA III ports
  • Ring-bus core interconnect
  • An LGA socket

I am just wondering: when will we see desktop Qualcomm CPUs? It would be really interesting to see how the market responds to this.

Source: ServeTheHome
 
Last edited:

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
I am just wondering: when will we see desktop Qualcomm CPUs? It would be really interesting to see how the market responds to this.

I would love to see something, anything, move to the desktop to challenge x86's dominance, but I'm not confident we will any time soon. Apple abandoned PowerPC, Microsoft abandoned Windows RT, etc.
 
  • Like
Reactions: AleksandarK

AleksandarK

/dev/null
Original poster
May 14, 2017
703
774
I would love to see something, anything, move to the desktop to challenge x86's dominance, but I'm not confident we will any time soon. Apple abandoned PowerPC, Microsoft abandoned Windows RT, etc.
MIPS is becoming more prominent and is on a good path toward challenging x86. The need for a licence is x86's biggest drawback: Intel simply holds the monopoly there, and companies like ARM, Qualcomm, etc. are trying to find a way around it.

I hope MIPS ends up on top and brings more competition to the market, which would allow every CPU manufacturer to offer at least something. Everyone would benefit, except Intel!
 

jØrd

S̳C̳S̳I̳ ̳f̳o̳r̳ ̳l̳i̳f̳e̳
sudocide.dev
SFFn Staff
Gold Supporter
LOSIAS
Jul 19, 2015
818
1,359
MIPS is a really interesting architecture, but right now I feel like ARM has a better chance of making it onto the desktop, w/ it dominating mobile and starting to creep into servers. Basically, anything making it onto the desktop and competing w/ x86 can only be good for the consumer; more competition is desperately needed there.
 
  • Like
Reactions: AleksandarK

zovc

King of Cable Management
Jan 5, 2017
852
603
So, just to make sure I'm understanding... Qualcomm is trying to bring a CPU to the enterprise market? That's exciting! Do we know anything about what chipsets/motherboards they are using?

I know there's a lot of high-level (or low-level?) issues with "just" running Windows on different processor architectures, but I never understood why these insanely efficient and curious chips in mobile devices never started being used in laptops or even OEM desktops. I definitely think that affordable CPUs and motherboards (~$75-150 for the combo) would have a good share of the market. Many Android devices' CPUs and integrated graphics seem totally suitable for running an HTPC-type environment, and things like the Nvidia Shield TV are a great example of that... if a failed market. Why, just the other day I was looking to see if anyone had gotten Windows 10 working on a Shield TV (no, they haven't, not that I saw), because it seems like a pretty ideal and affordable HTPC.

With hopes this doesn't distract from the intended course of the thread, could someone give me a brief explanation on why x86 is a bad thing--or why manufacturers can't just adopt it? I'm asking as a total layperson here.
 
  • Like
Reactions: AleksandarK

AleksandarK

/dev/null
Original poster
May 14, 2017
703
774
So, just to make sure I'm understanding... Qualcomm is trying to bring a CPU to the enterprise market? That's exciting! Do we know anything about what chipsets/motherboards they are using?

I know there's a lot of high-level (or low-level?) issues with "just" running Windows on different processor architectures, but I never understood why these insanely efficient and curious chips in mobile devices never started being used in laptops or even OEM desktops. I definitely think that affordable CPUs and motherboards (~$75-150 for the combo) would have a good share of the market. Many Android devices' CPUs and integrated graphics seem totally suitable for running an HTPC-type environment, and things like the Nvidia Shield TV are a great example of that... if a failed market. Why, just the other day I was looking to see if anyone had gotten Windows 10 working on a Shield TV (no, they haven't, not that I saw), because it seems like a pretty ideal and affordable HTPC.

With hopes this doesn't distract from the intended course of the thread, could someone give me a brief explanation on why x86 is a bad thing--or why manufacturers can't just adopt it? I'm asking as a total layperson here.
They don't have a chipset; it is a full SoC. This is a motherboard I found.

Now on to x86. It was created and is developed by Intel. Intel licensed x86 to AMD (and keeps it for its own use). AMD created the 64-bit extension (introduced with the Athlon 64) and licensed that back to Intel, so those two have an agreement to use each other's technology. Intel itself won't let anyone besides AMD use an x86 licence, so it holds a monopoly there.
Some other architectures, which mostly hold a very small share of the market, are MIPS, PowerPC, ARM, etc. (most of them RISC designs, whereas x86 is CISC).

That is why we don't see many PCs running on ARM processors, even though they are very power efficient. Plus, Windows only supports a few architectures, so not everything can run on it. That is why this Centriq 2400 runs only on Windows Server and Linux (mostly Ubuntu).
I would very much like to see something like the NVIDIA Shield running Windows, as (I believe) its CPU would be able to run games, etc.
I also found this video, which shows a 24-core DESKTOP ARM developer box.
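To make the porting point concrete (a generic illustration, nothing specific to the Centriq or to Windows internals): a program compiled for one architecture produces machine code that another architecture simply cannot execute, which is why the OS and every application have to be rebuilt, or emulated, for ARM. A minimal C sketch, assuming a GCC/Clang or MSVC toolchain and their standard predefined architecture macros:

```c
/* arch_check.c - illustrative only: the same C source has to be compiled
 * separately for each target architecture, because the resulting binary
 * contains architecture-specific machine code. */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__) || defined(_M_X64)
    puts("Built for x86-64: this binary cannot run natively on an ARM CPU.");
#elif defined(__aarch64__) || defined(_M_ARM64)
    puts("Built for 64-bit ARM (AArch64): this binary cannot run natively on an x86 CPU.");
#else
    puts("Built for some other architecture.");
#endif
    return 0;
}
```

Build the same file twice (e.g. once natively and once with a cross-compiler such as aarch64-linux-gnu-gcc) and you get two incompatible binaries from identical source; that, multiplied across an entire OS and application ecosystem, is the porting cost being described.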

Difficulties in bringing a desktop CPU to market would be:
1. The x86 licence
2. The 64-bit (AMD64) licence
3. Hours of coding for Windows support
4. Today's processors have 128-bit (and wider) SIMD units but run 64-bit code to match the OS, so even more coding
5. By my estimate, you would need to write at least 7,000 LINES(!) OF CODE in an HDL (VHDL or Verilog), so it would take at least a year with around 10 people writing all day
6. Debugging and manufacturing problems with a new design
7. The pure COST of making such a big jump to a new market (marketing, paying the foundry to adapt its machines to your needs, making a socket, making a chipset, designing a standard I/O, calculating power and TDP, finding a compatible memory speed, adding some power-saving features, and working with next-generation process technology, because if you work with the current generation, by the time you finish your CPU the product will be obsolete).
 
Last edited:

Kmpkt

Innovation through Miniaturization
KMPKT
Feb 1, 2016
3,382
5,936
All I know is the more the merrier. I love competition because it means ultimately we get a better product with a lower price. I'd also love to see how badly Intel's PR shits the bed with two competitors instead of one.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
The problem with x86 take-up is less the licensing and more the sheer difficulty of independently implementing the instruction set in a manner that is even vaguely competitive with Intel and AMD, without a truly insane amount of investment and a ridiculous degree of tiptoeing around existing solutions (often meaning the optimum implementation is unavailable). In the early years of x86, IA-32, and even AMD64, there were large numbers of clones and 'compatibles' available. As we see with current Via CPUs (yes, they're still going!), even at very low power levels the perf/watt is behind Intel's (and AMD's newer) CPUs.
Under the hood, the line between RISC and CISC instruction sets blurred into irrelevance long ago. RISC no longer has any performance or cost advantage over CISC thanks to the availability of fast RAM and integral caches, and in terms of perf/watt it's pretty much a wash. A RISC core can be smaller due to the smaller instruction-decode block (with modern RISC and CISC designs the actual execution blocks are mostly the same size; even a large instruction set boils down to mostly the same basic operations outside of SIMD), so it's a good fit for milliwatt-range SoCs, but even going up to the single-digit-watt range x86 is competitive. And when you want to scale performance up, you very quickly hit a clock-speed limit and CISC generally ends up with a slight edge. We see 'big chips' with clusters of small ARM cores, but we don't see ARM cores scaled up to high power and performance levels (the common stories of "Apple A9 as fast as a desktop chip" and the like are rather misleading, based on comparing a single synthetic mobile benchmark program).
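(A purely illustrative aside on that decode point: the assembly in the comments below is approximate and hand-written for flavour, not captured from any particular compiler, but it shows how the same C statement tends to become a single read-modify-write instruction on x86-64 and a load/add/store sequence on AArch64, while the execution units end up doing essentially the same work either way.)

```c
/* cisc_vs_risc.c - toy example: one memory increment, two instruction sets. */
#include <stdio.h>

/* Increment a counter held in memory. */
static void bump(long *counter)
{
    *counter += 1;
    /* Roughly what a compiler might emit (illustrative, unoptimised):
     *   x86-64 (CISC) - one instruction reads, modifies and writes memory:
     *       addq $1, (%rdi)
     *   AArch64 (RISC) - separate load, add and store instructions:
     *       ldr x1, [x0]
     *       add x1, x1, #1
     *       str x1, [x0]
     * The decoder's job differs; the arithmetic and memory traffic do not. */
}

int main(void)
{
    long hits = 0;
    bump(&hits);
    printf("hits = %ld\n", hits);
    return 0;
}
```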

For anyone who is not writing their own OS (or highly customising one) to run their own programs, any architecture that is not x86 is effectively not an option. And even in fields where that IS the case, like HPC, x86 is by far the dominant architecture (though in a lot of those cases the CPU is just a host for the GPUs that do the actual crunching; jobs tend to be either highly serial or embarrassingly parallel, with few 'just a bit parallel' jobs that actually fit well onto multi-core CPUs). The desktop can be written off entirely for adopting a new instruction set: it has been impossible to shift people to a new OS even on existing systems, let alone get them to shift their systems, OS, and all of their programs.