All x86 processors since the '386 have implemented a four-level privilege hierarchy intended to prevent less-privileged code from contaminating more-privileged code. This is the kind of thing complex operating systems do in software, yet the chips implement it entirely in hardware, no code required. It's a remarkable feature, but it takes several million transistors to make it work, and transistors mean power.
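As a rough illustration of the rule those transistors enforce (a simplified model, not real hardware or the full x86 segmentation machinery): an access to a data segment succeeds only when the effective privilege level, the numeric maximum of the current privilege level (CPL) and the requested privilege level (RPL), is less than or equal to the segment's descriptor privilege level (DPL), with ring 0 being the most privileged.

```python
def access_allowed(cpl: int, rpl: int, dpl: int) -> bool:
    """Simplified model of the x86 data-segment privilege check.

    Rings run 0 (most privileged) to 3 (least privileged).
    Access is granted when max(CPL, RPL) <= DPL.
    """
    return max(cpl, rpl) <= dpl

# Ring-0 (kernel) code reading a ring-3 data segment: allowed.
print(access_allowed(cpl=0, rpl=0, dpl=3))  # True

# Ring-3 (user) code touching a ring-0 (kernel) segment: blocked.
print(access_allowed(cpl=3, rpl=3, dpl=0))  # False
```

The hardware performs this comparison on every segment load, which is why no operating-system code has to run to enforce it.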
The list goes on and on, but the features that the x86 architecture has accumulated over the past three decades don't always show up in the benchmark results. System-level functions help with operating-system code or with obscure corner cases, or they were added to enhance security. All of those add-ons make the chips bigger, hotter, and more expensive.
As ARM finds itself designing processors for ever more complex systems like multicore Android tablets, microservers, and 64-bit machines running hypervisors and multiple secure operating systems, it has to add more of the big-boy features it originally omitted.
Virtual memory? Check. Security features? Check. Wide registers, floating-point extensions, pseudo-DSP instructions? Check, check, and check. Little by little, ARM is leaving behind RISC philosophy and becoming more like the complex processors it upended. The company has redefined its instruction set at least twice, sometimes sacrificing binary compatibility on the altar of performance, capability, or scalability.
Going with what they know
As mobile devices become more and more complex, does that leave an opening for Intel? Probably not, at least not beyond a certain point. For one thing, Intel is late to the party, and we've seen how inertia drives this business.
As it stands, Intel's mobile Atom processors are barely as good as the leading ARM-based alternatives, and the ARM army has an obvious head start when it comes to software, ecosystem, and experience pool. It's a tough sell to convince an engineering team to change its processor family, software, and development tools in order to buy a chip that's "barely as good" as the one they're already using.
By the same token, ARM in the PC business makes no sense at all, at least if you define "PC" as a system running Windows applications. People don't buy a PC for the lovely Windows user interface; they buy it for access to their existing PC applications. The ignominious fate of Windows RT has proven that.
ARM in servers makes a bit more sense, mostly because a server doesn't run very much shrink-wrapped code or have a user interface that anyone but its handlers will see. After all, Linux rules the data center, and Linux can run on ARM processors. Plus, there's increasing demand for low-power solutions in data centers, because over time the cost of power can outstrip the cost of the hardware itself.
But here again inertia rears its head. Most servers are x86-based and there's presently no compelling reason to switch. If ARM-based server chips manage to offer a significantly better price/power/performance ratio, they can probably make inroads. Certainly, some new server makers are eyeing ARM-based designs eagerly, if only because it sets them apart from their generic x86-based competitors. But it will be a tough slog.
Intel's manufacturing advantage
Intel is a traditional smokestack industry: Everything from design to manufacturing to sales is done under one roof. ARM is more like a downtown architectural firm, doing white-collar work and licensing its blueprints to others. ARM doesn't make chips, and Intel doesn't collect royalties.
That means Intel chips come only from Intel, whereas ARM-based chips could come from any of a number of different vendors -- theoretically. In reality, every ARM-based chip is unique, and there are no second sources. That makes ARM processors every bit as vendor-specific as Intel's or anyone else's.