Recently, I've been entertaining myself with the question of what it would take to downsize a current-generation "top-tier desktop" to the smallest practical size achievable with today's technology.
My line of thinking was as follows: for the hardware internals, there are two limiting factors - planar size limits (boards) and volume limits (cooling systems, power supply parts).
ICs take up fairly little volume, and all the ICs in your PC would still amount to almost nothing if stacked, but stacking is not usually practical for thermal, signal, and power reasons. The same is true for planar size: most of a board's surface is taken up by signal routing, and ICs hardly occupy more than 10% of it. Putting everything on a single piece of silicon is only possible with a lot of compromises; you will not get far beyond smartphone-SoC levels of performance even if you soup up its constituent parts, because bottlenecks appear very fast. On thermals, we can't go beyond about 100 W per cm², which is a limitation of the silicon itself; on I/O, the limit is around 300 contacts per cm². Silicon interposers or Intel's EMIB substrate let you increase I/O density tenfold, improve thermals a bit, and reduce chip size (less per-pad area is needed), but they are rather costly.
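To make those two ceilings concrete, here is a minimal sketch of the budget they impose on a single die. The 100 W/cm² and 300 contacts/cm² figures are the ones from the text; the 600 mm² die size is just an assumed, GPU-class example.

```python
def die_budget(area_cm2, w_per_cm2=100, contacts_per_cm2=300):
    """Max power (W) and pad count a die of the given area supports,
    given the thermal and I/O density limits quoted above."""
    return area_cm2 * w_per_cm2, int(area_cm2 * contacts_per_cm2)

# A hypothetical 600 mm^2 (6 cm^2) die:
power_w, pads = die_budget(6.0)
print(power_w, pads)  # 600.0 W thermal ceiling, 1800 contacts
```

Even a big die tops out at under two thousand contacts on a conventional package, which is why the tenfold I/O density of an interposer matters so much.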
What if you put every major IC on an interposer along with HBM2? To provide enough bandwidth for a GPU, a 4096-bit memory bus will be required; that's 4 HBM2 stacks. A CPU's memory bandwidth can easily be saturated by even a single stack, but you will need 2 of them just for capacity. There is a huge incentive to somehow make the CPU and GPU share a single memory controller, but to keep things simple, let's just assume 6 stacks.
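The stack-count arithmetic can be sketched as follows. Each HBM2 stack exposes a 1024-bit interface; the 2 Gb/s per-pin rate below is the nominal spec maximum and is an assumption, since real parts vary.

```python
BITS_PER_STACK = 1024   # HBM2 interface width per stack
GBPS_PER_PIN = 2.0      # assumed per-pin data rate (spec maximum)

def hbm2_bandwidth_gbs(stacks):
    """Aggregate bandwidth in GB/s for the given number of HBM2 stacks."""
    return stacks * BITS_PER_STACK * GBPS_PER_PIN / 8

print(hbm2_bandwidth_gbs(1))  # 256.0 GB/s - one stack, already CPU-class
print(hbm2_bandwidth_gbs(4))  # 1024.0 GB/s - the 4096-bit GPU configuration
```

One stack alone comfortably exceeds what a dual-channel DDR4 setup delivers, which is why the CPU side needs stacks for capacity rather than bandwidth.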
With all the hot, high-frequency parts on a single package, the rest of the motherboard should be easy. No need for 32-layer PCBs to route memory, nor micrometre-level precision. The whole package should not take up much more area than an AMD Threadripper, though you will have to place the decoupling caps very tightly or bury them in the PCB.
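A rough sanity check on the Threadripper-sized-package claim, using assumed, typical die footprints (the CPU and GPU die areas below are illustrative guesses, and a real interposer would need extra margin for routing and keep-out zones):

```python
HBM2_STACK = 92            # mm^2, roughly 7.75 mm x 11.87 mm per stack
CPU_DIE = 200              # mm^2, assumed desktop-class CPU die
GPU_DIE = 500              # mm^2, assumed high-end GPU die
THREADRIPPER_PKG = 58 * 75 # mm^2, TR4 package is roughly 58 mm x 75 mm

silicon = CPU_DIE + GPU_DIE + 6 * HBM2_STACK
print(silicon, THREADRIPPER_PKG)  # 1252 mm^2 of silicon vs 4350 mm^2 of package
```

Even with generous routing overhead, the raw silicon fits in the Threadripper footprint several times over.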
With the routing area for memory reduced dramatically, and with the separate graphics PCB and the SDRAM DIMMs gone, cost should drop a bit; more importantly, we can say goodbye to out-of-plane PCBs and the height they add. The few discrete parts remaining on the motherboard are ICs for I/O, ports, and power.
Power will still be a pain: all the "fast and hot" silicon will be running at around 1 V, can easily consume half a kilowatt, and will need current with very, very low ripple. I'll follow up on how to deal with this in a later post.
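The core of the problem is simple Ohm's-law arithmetic: at a ~1 V core voltage, half a kilowatt means hundreds of amps delivered into one package.

```python
def core_current_a(power_w, vcore=1.0):
    """Current (A) the VRM must deliver at the given core voltage."""
    return power_w / vcore

print(core_current_a(500))  # 500.0 A at 1 V
```

Delivering 500 A with low ripple across a small package is exactly why the power stage deserves its own post.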