Arm, Intel, and the Innovator's Dilemma


There is a nice, simple narrative about innovation: the small, cheap thing can’t serve as a substitute for the big, expensive one, so the big company making the expensive one ignores the cheap thing. Meanwhile, the small company iterates until suddenly, the “cheap thing” has become better than the big, expensive one, and this destroys the big company.

I’ve heard this narrative applied to Intel, after Apple Silicon was released, and the M1 was actually really good. (Full disclosure: this website may advertise in the footer that it is “artisanally crafted in GVim,” but at the moment, that is technically being realized through MacVim on an M4 system.) Unfortunately, the pieces don’t quite fit together that way for me.


The first problem is Atom.

Intel dusted off the Pentium design, cranked some out on newer fabrication processes, and tried to sell them. The original Atom chips were 32-bit only, but at the time, so were Arm's. Figures I have available suggest that Atom did cram its TDP number into a range competitive with a contemporary Cortex-A9. Performance seems to have been in the same ballpark, maybe. The out-of-context charts I can find show quite a bit of variance (anywhere from a 50 percent disadvantage to a 50 percent advantage for Intel).

Unfortunately, there are two problems with these power figures:

  1. They are invariably intended to be interpreted as a ‘maximum,’ not as idle power draw. In a real device, sitting in someone’s pocket for hours a day, idle/sleep power is what matters (a rough sketch of why follows this list).
  2. The figure on Intel’s side (3 W) is a “TDP,” and they are notoriously loose with those numbers. I am not sure it was as bad in 2009 as it is now, but I would not be surprised if someone told me, “That’s a 12W chip, you just couldn’t run it that way for long.” If that were the case, the ‘advantage’ was only on paper.
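
Here is a rough sketch of the battery math, in Python. Every number in it is a ballpark assumption of mine (battery capacity, idle draw), not a measurement; the 3 W is just the kind of peak figure discussed above.

    # All of these numbers are ballpark assumptions, not measurements.
    battery_wh = 5.0    # a rough 2009-era smartphone battery (~1400 mAh at 3.7 V)
    peak_w     = 3.0    # the sort of "maximum"/TDP figure quoted for Atom
    idle_w     = 0.05   # a guessed sleep/idle draw for a phone-class SoC

    print(f"runtime at peak draw: {battery_wh / peak_w:.1f} hours")   # ~1.7 hours
    print(f"runtime at idle draw: {battery_wh / idle_w:.0f} hours")   # 100 hours

A phone spends almost its entire day near that second number, which is why a quoted ‘maximum’ tells you very little about how long the battery lasts in a pocket.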

Play stupid games with “design” power, win stupid prizes!

I also think that part of Intel’s play was that “it runs unmodified1 x86 code,” but that just wasn’t all that attractive at the time. Software on systems that small was built for the specific device. Nobody was going to put full Windows XP on a phone, so it didn’t really matter that Atom could boot it. The flip side of the coin came when Atom processors ended up in netbooks: people thought they were cool, but after the novelty wore off… they were slow, cramped, and no fun. Netbooks ultimately failed as a market segment, moving upward to converge2 with actual laptops to make up for their faults.

Ironically, iPads would be introduced, then go on to run unmodified iPhone code. It was all Arm. Whatever advantage Intel thought it would reap from x86 compatibility, Apple collected for itself.


The next problem, in a couple of words, is: “everyone else.”

Apple shocked the world by launching a 64-bit iPhone. Apple kept bringing chips to production on updated Arm specifications before anyone else in the ecosystem. Apple launched Apple Silicon as an Intel-competitive desktop CPU. Apple Silicon did it at a fifth of the power consumption.3

(If we want to take it a bit outside the realm of Arm specifically, Apple launched the first full-fledged desktop browser in a pocket form factor. Nobody had thought such a thing was possible until that point, either.)

It’s only now, four or five years after the M1’s release in 2020, that we are seeing competitors in the market. Notably, the Snapdragon X Elite (2024) is the result of three years’ development after Qualcomm acquired NuVia, Inc. It is really not clear to me that, without Apple’s efforts, anyone in the Arm ecosystem would have produced a laptop-focused Arm chip. AWS Graviton4 results published to Geekbench are somewhere around two-thirds of the published Snapdragon X Elite scores. I assume this is because of design constraints, but to compare oranges to oranges, their scores are also about four-fifths of those for Intel-based AWS instances.

In any case, the “best and fastest” non-Apple Arm CPUs are either going into phones, where Intel is not really present as a competitor, or they are going into servers, where they do not seem to be better than their direct competition. Or, there’s the Snapdragon X Elite, which was developed in response to Apple’s success.

It does not seem like Arm, in and of itself, is particularly disruptive here.


Meanwhile, there are quirks in Intel’s history that fail to fit the “big company dies of complacency” narrative. In the early 2000s, Intel started to lose the gigahertz battles to AMD. The “x86 compatible” market had always been slightly behind Intel, but then AMD grabbed the headlines.

Intel’s next move was the Netburst architecture, which put them back in the lead on clock speed. Once again, nobody could touch Intel on how many cycles per second their chips were able to run. They leaned into it, got bigger numbers, and seemed to be unstoppable.

There was just one problem with Netburst: Intel had sacrificed work done per clock cycle to achieve those headline speeds. According to benchmarks, it took about 1.5 GHz on the Pentium 4 to match the performance of the 1.0 GHz Pentium III or AMD Athlon. The whole thing reached its nadir with the Prescott core. Its prodigious heat output and lackluster performance earned it the nickname “Pres-Hot.”4
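
To put a rough number on that trade: if you treat performance as clock speed times work per cycle, the benchmark figure above implies how much per-clock throughput Netburst gave up. A quick sketch, using only the numbers already quoted:

    # Using only the benchmark claim above: a 1.5 GHz Pentium 4
    # roughly matched a 1.0 GHz Pentium III or Athlon.
    p4_clock_ghz = 1.5
    p3_clock_ghz = 1.0
    relative_ipc = p3_clock_ghz / p4_clock_ghz   # P4 work per cycle, relative to the P6 line
    print(f"Pentium 4 per-clock work: ~{relative_ipc:.0%} of a Pentium III")   # ~67%

In other words, roughly a third of the Pentium 4’s cycles went to making up lost ground rather than delivering new performance.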

It seemed like it could be the end of Intel, if they couldn’t get back on track. But “get back on track,” they did. They returned to the P6 architecture that had underpinned the Pentium II and III, effectively rewinding Netburst, and soon turned it into the Core line/brand.

But that’s not all. While Netburst and Core played out, Intel had been struggling to market Itanium. These were all-new processors with a new, unproven architecture that could only run x86 code slowly, in emulation. Intel continued to insist that IA-64/Itanium was the future, and that there would never be a 64-bit extension to the IA-32/Pentium line. Into this landscape, AMD released Opteron, which was exactly what Intel had ruled out: a 64-bit extension to 32-bit x86, capable of running existing x86 code at full speed.

Opteron itself was a server chip, but it came to desktops soon enough as the Athlon 64. Intel followed suit, producing their own AMD64 chips and eventually abandoning Itanium.

In both cases, Netburst and Itanium, Intel reversed their mistakes and returned to glory. “Big, old” companies were not supposed to be able to do this. The future is always uncertain, but these turnarounds suggest that Intel could, even now, regroup and retake top-tier performance from Arm.


Maybe this next one is a growing pain; maybe we will tell ourselves a new story in five years. But for now, the next problem is “Windows on Arm.”

When Apple wants to change CPU architecture, they do it. It’s not a chicken-and-egg problem, because the transition is treated as inevitable. Past transitions were made when the 680x0 and PowerPC lines were clearly no longer viable for the future. Intel isn’t in such a dire situation, but Apple had the iPhone’s A-series chips and a history of architecture transitions to back them up. When the M1 proved to be competitive and not over-hyped, everyone believed Apple could do this for the long term. Nobody thought it was likely that they’d return to Intel in a couple of years.

Compare this to Microsoft’s position, where Arm is sort of an experimental, on-the-side deal. It is meant to coexist with, rather than fully replace, x86. This immediately leads to compromises. Reviewers wave their hands about possible unstated “compatibility issues” with “some” unidentified apps under x86 emulation on Arm devices. What’s the sales pitch, exactly? “You get superb battery life, but it may or may not do what you want, and you can’t find out until you have tried?” That’s ahead of the previous version with the “…and at best, it’s slower” caveat, but it still doesn’t sound too great.

With no clear mandate on changing architecture, what happens? Ordinary people are being given a choice between “Everything just works as it has for decades,” if they choose x86, and “You’re going to need to check everything for Arm support,” otherwise. Microsoft can build the most capable Surface Pro Ultra Elite laptop they want, but if there’s an Arm CPU inside, it brings additional complexity that x86 users can completely ignore.

In a world built around the expectations and needs of x86 users, is some more battery life—for otherwise similar devices—a crucial feature that’s going to move units?


Overall, then, “Arm disrupts” is an incomplete narrative. Arm could still become a disruptive innovator, but it isn’t quite there yet. It hasn’t pushed definitively past x86 and continued to open the gap. In particular, Intel is still able to throw a lot of cores at the high end, holding up the multiprocessing benchmarks.

Once the newcomer has definitively improved on the incumbent and is expected to stay ahead, the major transition will happen. This would be similar to the way amd64 (64-bit x86) pushed the other architectures out of the supercomputer space. Everyone serious thought x86 was a toy, unworthy of consideration… until AMD and Intel arrived and crashed the party.


On a meta level, we definitely like simplifying things to fit a category or a narrative. Maybe it does (or will eventually) capture the broad strokes, but it also lets us quit looking at the messy chaos that happened along the way.

Maybe everything comes down to luck, much more than we want to admit.


  1. This probably made it slightly less competitive, as they would have to spend die space on real mode, VM86 mode, and all the legacy instructions like rep stosb and loop. I realize the cost is technically microscopic, but we also saw the Pentium FDIV bug because they thought some entries in a lookup table could be left out for a tiny savings that would be worthwhile at scale.

  2. The “ultrabook” segment that fell out of this convergence was copied from Apple’s MacBook Air designs.

  3. How realistic is that? I have a power strip that detects when a main device draws power, and switches on a bank of outlets accordingly. (Cutting power to peripherals when the computer sleeps reduces vampire consumption.) The M4 mini doesn’t normally draw enough power to register as “on”, even with the power strip set to netbook mode instead of PC/laptop mode. It only switches everything else on when the CPU is hard at work. 

  4. The negativity around Netburst stuck; I didn’t remember that there were later Netburst cores, only rediscovering them while researching this article.