

abandon x86 forever! Spectre, academic paper from 1995 w/ link

here is the 1995 paper that the #Spectre paper (published in 2019) cites - and don't forget the research was funded by the NSA:

An in-depth analysis of the 80x86 processor families identifies architectural properties that may have unexpected, and undesirable, results in secure computer systems. In addition, reported implementation errors in some processor versions render them undesirable for secure systems because of potential security and reliability problems. In this paper, we discuss the imbalance in scrutiny for hardware protection mechanisms relative to software, and why this imbalance is increasingly difficult to justify as hardware complexity increases. We illustrate this difficulty with examples of architectural subtleties and reported implementation errors.


citeseerx.ist.psu.edu/viewdoc/…

Sibert, O., Porras, P. A., & Lindell, R. (1995, May). The Intel 80x86 processor architecture: Pitfalls for secure systems. In Proceedings of the 1995 IEEE Symposium on Security and Privacy (pp. 211–222). IEEE.

#infosec

in reply to theruran 🌐🏴

re: abandon x86 forever! Spectre, academic paper from 1995 w/ link
Well in a way this is why things which need proper security guarantees depend on a separated computer, like a smartcard.
in reply to Haelwenn /элвэн/

re: abandon x86 forever! Spectre, academic paper from 1995 w/ link
But on the upside, we got Doom running fast, so the Pentium was definitely worth it.
in reply to theruran 🌐🏴

abandon x86 forever! Spectre, academic paper from 1995 w/ link
Spectre isn't x86-specific. It's been demonstrated on ARM and POWER architectures, and I almost guarantee that RISC-V isn't immune either.
in reply to Vertigo #$FF

abandon x86 forever! Spectre, academic paper from 1995 w/ link
But the RISC-V ISA doesn't require branch prediction. What's the prevalence of branch prediction in real processors? Does anyone know?
in reply to Kartik Agaram

abandon x86 forever! Spectre, academic paper from 1995 w/ link

Neither do the x86 or POWER ISAs.

Most RISC-V processors have branch prediction.

And the issue isn't even branch prediction. The issue is predicting branches across a privilege boundary, which most processors do because being able to predict when you're about to make a system call helps amortize the cost of the kernel call.

in reply to Kartik Agaram

abandon x86 forever! Spectre, academic paper from 1995 w/ link
Absolutely everything more capable than a microcontroller will have branch prediction. It’s that or accept a multi-cycle pipeline stall on every conditional branch. I’m not exaggerating when I say that it is the second most effective hardware performance hack in the toolbox (after memory caches).
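
For concreteness, here’s a toy sketch of the textbook two-bit saturating-counter scheme (made-up Python, not any real core’s design; real predictors add branch target buffers, global history, tournament selection, and so on):

    # Toy two-bit saturating-counter branch predictor (illustration only).
    class TwoBitPredictor:
        def __init__(self, table_size=1024):
            # One 2-bit counter per slot: 0-1 predict not-taken, 2-3 predict taken.
            self.counters = [1] * table_size

        def predict(self, pc):
            return self.counters[pc % len(self.counters)] >= 2   # True = "taken"

        def update(self, pc, taken):
            i = pc % len(self.counters)
            self.counters[i] = min(3, self.counters[i] + 1) if taken else max(0, self.counters[i] - 1)

    # A loop branch: taken 99 times, then falls through once.
    p, pc = TwoBitPredictor(), 0x400123
    hits = 0
    for taken in [True] * 99 + [False]:
        hits += (p.predict(pc) == taken)
        p.update(pc, taken)
    print(hits)   # 98 of 100 predicted correctly after a one-iteration warm-up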
in reply to Zack Weinberg

abandon x86 forever! Spectre, academic paper from 1995 w/ link
You wanna get rid of branch prediction, find a way to send a yes-or-no signal four clock cycles backwards in time. That’s not a joke, that’s the exact thing it would take.
in reply to Zack Weinberg

abandon x86 forever! Spectre, academic paper from 1995 w/ link

Computers are a million times faster than they were 20 years ago. I'd happily take 250,000 times faster.

To clarify my original comment, I was asking only about RISC-V processors.

in reply to Kartik Agaram

abandon x86 forever! Spectre, academic paper from 1995 w/ link
The actual performance hit will be more like two orders of magnitude. Seriously, I would be surprised to see a production RISC-V *without* it.
in reply to Zack Weinberg

abandon x86 forever! Spectre, academic paper from 1995 w/ link

Hmm, lemme be more specific. Not having a branch predictor is not a big deal as long as your CPU core is both “in-order” and “single issue”. The performance hit in that case is 4*(fraction of instructions executed that are conditional branches), and conditional branches are usually about a tenth of all instructions, so you can see that this is livable.
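
Back-of-the-envelope version of that, with assumed numbers (a 4-cycle branch penalty and conditional branches being ~10% of executed instructions):

    # Rough cost of having *no* branch prediction on an in-order, single-issue core.
    # Assumed numbers: ~4 stall cycles per conditional branch, and conditional
    # branches are ~10% of executed instructions.
    base_cpi        = 1.0    # ideal: one instruction per cycle
    branch_penalty  = 4      # stall cycles per conditional branch
    branch_fraction = 0.10   # share of instructions that are conditional branches

    cpi_without_prediction = base_cpi + branch_penalty * branch_fraction
    print(cpi_without_prediction / base_cpi)   # 1.4, i.e. roughly a 40% hit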

In-order single issue is the design used for microcontrollers (because it’s small and power efficient) and research designs where speed isn’t the subject of the research (because it’s way less Verilog to write).

in reply to Zack Weinberg

abandon x86 forever! Spectre, academic paper from 1995 w/ link
If you are building a CPU to be used in any context where speed is at least one of the design goals, though, nowadays it would be silly *not* to include multiple issue, because additional execution units are cheap and easy to add. Out-of-order execution can be a big ball of hair, but the simpler versions (e.g. “scoreboarding”) aren’t too bad, and it really helps keep those execution units busy.
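
The bookkeeping idea is small, too. A toy sketch of just the dependency-tracking part (heavily simplified; a real scoreboard also tracks functional-unit availability and WAW/WAR hazards, and the register names here are made up):

    # Toy register scoreboard: an instruction may start only when its source
    # registers are not still waiting on an earlier in-flight result.
    busy = {}   # destination register -> cycle its result becomes available

    def try_issue(dest, src1, src2, latency, now):
        """Return the finish cycle, or None if the instruction must wait."""
        if any(busy.get(src, 0) > now for src in (src1, src2)):
            return None                  # a source is still in flight: stall
        busy[dest] = now + latency       # record when this result will be ready
        return now + latency

    # The add depends on the multiply's r3 and has to wait, but the independent
    # instruction writing r7 can proceed, which is what keeps extra units busy.
    print(try_issue("r3", "r1", "r2", latency=4, now=0))   # 4
    print(try_issue("r5", "r3", "r4", latency=1, now=1))   # None (r3 ready at cycle 4)
    print(try_issue("r7", "r6", "r6", latency=2, now=1))   # 3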
in reply to Zack Weinberg

abandon x86 forever! Spectre, academic paper from 1995 w/ link
But the thing is, as soon as you have any amount of instruction-level parallelism going on, Amdahl’s law kicks in and you *will* be bottlenecked on whatever you *can’t* parallelize. And the #1 thing that can’t be parallelized at the instruction level is … conditional branches.
in reply to Zack Weinberg

abandon x86 forever! Spectre, academic paper from 1995 w/ link
And that’s where my estimate of a 100x perf hit for not having branch prediction comes from: you have all this ILP machinery on your chip but most of it is spending 99 cycles out of every 100 idle because the instruction fetcher hit another conditional branch and has to wait for the entire dependency chain leading into the branch to resolve.
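
Illustrative arithmetic for that, with assumed numbers rather than measurements:

    # Where a rough two-orders-of-magnitude figure can come from.
    # Assumed: a wide core retires ~4 instructions/cycle when fed, ~1 in 10
    # instructions is a conditional branch, and without prediction each branch
    # stalls the front end ~50 cycles while its dependency chain resolves.
    ipc_with_prediction = 4
    branch_fraction     = 0.10
    stall_cycles        = 50

    cpi_with    = 1 / ipc_with_prediction
    cpi_without = cpi_with + branch_fraction * stall_cycles
    print(round(cpi_without / cpi_with))   # ~21x with these numbers; longer
                                           # dependency chains push it toward 100x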
in reply to Vertigo #$FF

re: abandon x86 forever! Spectre, academic paper from 1995 w/ link
Yeah, there are better security-related reasons to abandon x86, like pointer authentication and Clang’s support for shadow call stacks on ARM, or the lack of privsep in privileged subsystems like Intel ME and AMD Secure Technology.