SFF Network: End of an era? Intel's tick-tock CPU development ending

As reported by PC Perspective, Intel is ending their “tick-tock” CPU cadence, where they would alternate between a new manufacturing process node (tick) and a new architecture (tock). Intel had kept to that schedule for quite a few years, but recently, as they've moved towards sub-10nm processes, their release schedule has slipped, most notably with Broadwell. Broadwell was the shrink to 14nm, but its desktop chips launched just a few months before its tock successor, Skylake. Under the previous cadence, Skylake would have been followed by a shrink to 10nm; instead we're getting Kaby Lake, a refinement of the Skylake architecture, still at 14nm.

Read more here.
 

Soul_Est

SFF Guru
SFFn Staff
Feb 12, 2016
1,536
1,928
This is a good thing. Intel had been acting much like Sony, bouncing from one thing to another while pushing the boundaries of what's possible (or trying to). They have also been resting on their laurels: they didn't optimize their designs much for efficiency and performance, figuring that the process node shrinks would help quite a bit in that regard. AMD and NVIDIA, meanwhile, have had to wring as much performance as possible out of the 28nm process node, and did a tremendous job of it too. It's about time Intel started doing the same for each viable process node.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
With EUVL still in development and far from production ready, we're probably going to see an extended repeat, in the CPU realm, of GPUs being stuck on 28nm and having to refine their designs on the same node.
 

PlayfulPhoenix

Founder of SFF.N
SFFLAB
Chimera Industries
Gold Supporter
Feb 22, 2015
1,052
1,990
Soul_Est said:
This is a good thing. Intel had been acting much like Sony, bouncing from one thing to another while pushing the boundaries of what's possible (or trying to). They have also been resting on their laurels: they didn't optimize their designs much for efficiency and performance, figuring that the process node shrinks would help quite a bit in that regard. AMD and NVIDIA, meanwhile, have had to wring as much performance as possible out of the 28nm process node, and did a tremendous job of it too. It's about time Intel started doing the same for each viable process node.

I think this is a very important insight to make. Intel hasn't yet been forced to do a lot of the wizardry that nVidia and AMD had to execute for their current-gen parts, because Intel's been able to depend on process node shrinks and new architectures pretty reliably. And, in this industry, reliably means "on a consistent timescale, without having to spend totally ludicrous amounts of money".

Now that it's a ~3 year effort to "reliably" shrink process nodes, instead of 2, Intel has a gap that they have to fill with new parts after they've already refactored the architecture. So the logical thing to do is to optimize the architecture, and make lots of really small changes (rather than larger, higher-level ones, which define a CPU's "architecture").

They haven't done this before mostly because they haven't had to, and because it was more effortful in the past than simply investing time and money elsewhere. But the times, they are a-changin'...


EdZ said:
With EUVL still in development and far from production ready, we're probably going to see an extended repeat, in the CPU realm, of GPUs being stuck on 28nm and having to refine their designs on the same node.

I think 3 years for each process node shrink is going to be the new normal for a while. AMD and nVidia have taken forever because they don't own fabs like Intel does, and the cost for them to move to 16nm has been insane until pretty recently. Intel's issues, comparatively, seemed to have to do with scalability and yield more than anything else.
 

EdZ

Virtual Realist
May 11, 2015
1,578
2,107
PlayfulPhoenix said:
I think 3 years for each process node shrink is going to be the new normal for a while. AMD and nVidia have taken forever because they don't own fabs like Intel does, and the cost for them to move to 16nm has been insane until pretty recently. Intel's issues, comparatively, seemed to have to do with scalability and yield more than anything else.
It wasn't that the cost was insane; it was that nobody could offer them a 16nm fab that could produce large GPUs. Making a compact, low-power SoC and making a large, high-power GPU are surprisingly different tasks, even on a process with the same trade name.
 

Stevo_

Master of Cramming
Jul 2, 2015
449
304
Not surprising; all the fabs are pretty much hung up on lithography capability, and there are no more easy half-node shrinks. On top of that, tech like FinFET makes transistor sizing very muddy, with different implementations at different fabs (reportedly one of Intel's uses a triangular shape), not to mention that it almost doesn't pay, performance-to-cost wise, to make a single node jump anymore (mask set costs are very significant these days). Intel is still chewing on the Altera purchase, which is taking much longer to integrate than planned, but once done it will keep their fabs better utilized, which also helps drive yields. TSMC is just getting to the point where they're letting smaller companies like us move from 28nm onto the 16nm fab, so yields and consistency must be getting pretty good... finally.

Part of Nvidia's and AMD's problem with moving onto smaller nodes more quickly is that they may not have the volumes a fab like TSMC requires for early-adopter status. The FPGA vendors and the Qualcomm/Broadcom-type cell phone and networking customers, with multiple tapeouts a year, are usually first in line. TSMC wouldn't even let us download the design libraries until one of the big-volume players stepped in.