An anecdote regarding this bug that always cracks me up. My college roommate showed up with a shiny new Pentium machine that year, and kept bragging about how awesome it was. We used some math software called Maple that was pretty intensive for PCs at the time, and he thought he was cool because he could do his homework on his PC instead of on one of the unix machines in the lab.
Except that he kept getting wrong answers on his homework.
And then he realized that when he did it on one of the unix machines, he got correct answers.
“some math software called Maple”. I still use a version of Maple that I bought in 1998. I found subsequent versions much harder to use, and I've never found open-source software that could do what it can do. I don't need anything fancy, just to occasionally solve an equation or a system of equations, or maybe plot a simple function. That old copy of Maple continues to serve me extremely well.
I avoid anything with the "-py" suffix like the plague. It's pretty much a guarantee that anything I write today will not run next year. I'd stick with Maxima; with wxMaxima, it's actually quite nice.
The mention of Maple brings back vivid memories of freshman year of college, when the math department decided to use the software as part of instruction and no one understood how to use it. There was a near revolt by the students.
I've never used Maple for work, only for survival. We had a Calculus for Engineers textbook that was so riddled with errors I had to use a combination of the non-Engineering textbook and Maple to figure out wtf was going on. Bad textbook, bad prof, bad TAs and six other classes. What a mess. Those poor undergrads...
Because of the way you did your quote, we’ve switched from talking about the TAs to the teachers themselves. This is sort of different: the teaching assistants don’t really have teaching goals, and they usually aren’t part of the decision-making process for picking what gets taught, at least where I’ve been.
Anyway, as far as “learn the tools they want to teach the students how to use” goes, I dunno, hard to say. I wouldn’t be that surprised to hear that some department head got the wrong-headed idea that students needed to learn more practical tools and skills, shit rolled downhill, and some professor got stuck with the job.
Usually professors aren’t expert tool users after all; they are there for theory stuff. Hopefully nobody is expecting to become a VS Code expert by listening to some professor who uses Emacs or vim like a proper greybeard.
A father and son I know, both engineers, went to an aviation meet where a radial aircraft engine, with cutaways, was on display. They spent a happy hour going through every bit of it to figure out what it did and how it worked. They paid to go there.
Reminds me of when I was able to climb aboard an old steam locomotive and spent a wonderful hour tracing all the pipes and tubes and controls and figuring out what they did.
A good engineering college doesn't actually teach engineering. They teach how to learn engineering. Then you teach yourself.
Amusing joke, but it effectively is called the 586: the internal name is P5, and "penta", from which Pentium is derived, means five.[1]
Incidentally, everything from the Pentium M to Intel Core through 16th gen Lunarrow Lake identifies as P6 ("Family 6"), i.e. 686, because they are all based on the Pentium III.
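That "Family 6" identification is easy to check yourself via CPUID. A minimal sketch in C, assuming an x86 machine and GCC/Clang's <cpuid.h> helper (nothing here beyond the documented family/model encoding):

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1 returns family/model/stepping in EAX. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    unsigned int family  = (eax >> 8) & 0xF;    /* base family     */
    unsigned int ext_fam = (eax >> 20) & 0xFF;  /* extended family */
    unsigned int model   = (eax >> 4) & 0xF;
    unsigned int ext_mod = (eax >> 16) & 0xF;

    /* The displayed family adds the extended field only when the base
       family is 0xF; P6 descendants simply report 6 here. */
    unsigned int disp_family = (family == 0xF) ? family + ext_fam : family;
    unsigned int disp_model  = (family == 6 || family == 0xF)
                             ? (ext_mod << 4) | model
                             : model;

    printf("family %u, model %u, stepping %u\n",
           disp_family, disp_model, eax & 0xF);
    return 0;
}
```

On anything descended from the P6 line this prints family 6; the model field is what actually distinguishes the generations.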
You’re missing the fact that Intel wanted to differentiate itself from the growing IA-32 clone chips from AMD and Cyrix. 586 couldn’t be trademarked, but Pentium could.
> Intel used the Pentium name instead of 586, because in 1991, it had lost a trademark dispute over the "386" trademark, when a judge ruled that the number was generic.
It’s not like the microscope is going to show “a lookup table” though. You need to know how it was implemented in silicon, how transistors are integrated into the silicon, etc to even start identifying the actual physical mistake.
I wonder at what node generation doing this type of thing becomes impossible (due to features being too small to be visible with an optical microscope)? I would have guessed sometime before the first Pentium, but obviously not.
I think the limit for discernible features when using an optical microscope is somewhere around 200nm. This would put the limit somewhere around the 250nm node size, which was used around 1993-1998.
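For a rough sense of where the ~200 nm figure comes from, the Abbe diffraction limit d = λ/(2·NA) with visible light and a good objective already lands there. A back-of-the-envelope sketch (the 550 nm wavelength and NA of 1.4 are my assumptions, not from the thread):

```c
#include <stdio.h>

int main(void) {
    /* Abbe diffraction limit: smallest resolvable feature d = lambda / (2 * NA).
       Assumed numbers: green light at 550 nm, oil-immersion objective, NA = 1.4. */
    double lambda_nm = 550.0;
    double na = 1.4;
    double d_nm = lambda_nm / (2.0 * na);
    printf("optical resolution limit: ~%.0f nm\n", d_nm);  /* about 196 nm */
    return 0;
}
```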
These things completely fly above my head. I wish I could understand them because it’s pretty cool, but I got lost at the very succinct SRT explanation and it went downhill from there. Still, die shots are always pretty to look at.
Older Intel CPUs were already using microcode. Intel went after NEC with a copyright case over 8086 microcode, and after AMD with a copyright case over 287/386/486 microcode:
I would totally believe the FDIV bug is why Intel went to a patchable microcode architecture however. See “Intel P6 Microcode Can Be Patched — Intel Discloses Details of Download Mechanism for Fixing CPU Bugs (1997)” https://news.ycombinator.com/item?id=35934367
Intel used microcode starting with the 8086. However, patchable microcode wasn't introduced until the Pentium Pro. The original purpose was for testing, being able to run special test microcode routines. But after the Pentium, Intel realized that being able to patch microcode was also good for fixing bugs in the field.
Being able to patch the microcode only solves part of the possible problems a CPU can have.
My guess -- and I hope you can confirm it at some point in the future -- is that more modern CPUs can patch other data structures as well. Perhaps the TLB walker state machine, perhaps some tables involved in computation (like the FDIV table), almost certainly some of the decoder machinery.
How does one make a patchable parallel multi-stage decoder is what I'd really like to know!
Mostly, you can turn off parts of the CPU (so-called chicken bits). They are invaluable for validation, but they have also frequently been used for fixing broken CPUs. Most recently, AMD turned off the loop buffer in Zen 4:
https://chipsandcheese.com/p/amd-disables-zen-4s-loop-buffer
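For what it's worth, chicken bits like that generally live in model-specific registers; the exact registers and bits differ per CPU model and are largely undocumented, but the mechanism for poking at them on Linux is just the msr driver. A sketch that reads a well-known, harmless MSR (the TSC) rather than any real chicken bit:

```c
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Needs the msr kernel module loaded (modprobe msr) and root. */
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    /* 0x10 is IA32_TIME_STAMP_COUNTER, chosen only as a safe demo;
       actual chicken-bit MSRs are model-specific. */
    uint32_t msr_addr = 0x10;
    uint64_t value;
    if (pread(fd, &value, sizeof value, msr_addr) != sizeof value) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("MSR 0x%" PRIX32 " = 0x%016" PRIX64 "\n", msr_addr, value);
    close(fd);
    return 0;
}
```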
The table couldn't be fixed, but it could be bypassed.
The microcode update would need to disable the entire FDIV instruction and re-implement it without using any floating point hardware at all, at least for the problematic divisors. It would be as slow as the software workarounds for the FDIV bug (the average penalty for random divisors was apparently 50 cycles).
The main advantage of a microcode update is that all FDIVs are automatically intercepted system-wide, while the software workarounds needed to somehow find and replace all FDIVs in the target software. Some did it by recompiling, others scanned for FDIV instructions in machine code and replaced them; both approaches were problematic, and self-modifying code would be hard to catch.
A microcode update "might" have allowed Intel to argue their way out of an extensive recall. But 50 cycles on average is a massive performance hit when FDIV takes 19 cycles for single precision.
BTW, this microcode update would have killed performance in Quake, which famously depended on floating point instructions (especially the expensive FDIV) running in parallel with integer instructions.
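Detecting an affected chip, at least, was the easy part: the widely circulated test divides 4195835 by 3145727, a pair that falls in the flawed region of the lookup table (correct quotient ≈ 1.333820, buggy ≈ 1.333739). A minimal sketch of that check in C, assuming the volatile qualifiers are enough to keep the compiler from folding the division at compile time:

```c
#include <stdio.h>

/* The classic FDIV self-test: on a flawed Pentium, 4195835/3145727 comes
   back wrong from the fifth significant digit on, so multiplying the
   quotient back misses the numerator by a few hundred instead of ~zero. */
static volatile double num = 4195835.0;
static volatile double den = 3145727.0;

int main(void) {
    double q = num / den;
    double residual = num - q * den;   /* ~0 on a correct FPU, ~256 on a buggy one */

    if (residual > 1.0 || residual < -1.0)
        printf("FDIV bug detected (residual %g)\n", residual);
    else
        printf("FDIV looks fine (residual %g)\n", residual);
    return 0;
}
```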
It's interesting that there's no "trap instruction/sequence" feature built into the CPU architecture. That would presumably be valuable for auditing and debugging, or to create a custom instruction by trapping an otherwise unused bit sequence.
Those undefined opcodes are more for testing your fault handler or for forcing a trap in an “unreachable” part of your code, not for implementing custom instructions.
They can be used for anything you want. The CPU treats them as described in the document.
They're not fast, though. You probably don't want to run trapped fake instructions in tight loops, but you don't want that no matter how they're implemented.
Part of the reason the existing functionality is limited and nobody uses them to implement custom instructions is that trapping is actually quite expensive.
The switch to kernel space and back is super expensive, but even if a CPU did implement fast userspace trap handlers, it wouldn't be fast.
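To make that cost concrete: on Linux you can already fake a custom instruction today by catching SIGILL and emulating the opcode in the handler, but every execution pays for the trap into the kernel plus signal delivery back to userspace, easily thousands of cycles. A rough x86-64 Linux sketch, using the 2-byte `ud2` opcode as a stand-in for a made-up instruction:

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <ucontext.h>

/* SIGILL handler: "emulate" the fake instruction by writing a result into
   RAX, then skip the 2-byte ud2 opcode so execution continues after it. */
static void on_sigill(int sig, siginfo_t *info, void *uctx) {
    (void)sig; (void)info;
    ucontext_t *uc = uctx;
    uc->uc_mcontext.gregs[REG_RAX] = 42;   /* pretend the instruction returned 42 */
    uc->uc_mcontext.gregs[REG_RIP] += 2;   /* length of ud2 */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGILL, &sa, NULL);

    long result;
    /* Execute the "custom instruction"; the handler fills in RAX. */
    __asm__ volatile ("ud2" : "=a"(result) : : "memory");
    printf("emulated result: %ld\n", result);
    return 0;
}
```

The kernel round trip dominates, which is why this pattern shows up in emulators and compatibility shims rather than in anything performance-sensitive.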
This division, using an SRT loop with 2 bits of output per iteration, would perhaps already have been microcoded, but using the lookup table as an accelerator. An alternative could have used a simpler approach (e.g. 1-bit-per-iteration "non-restoring" division): longer, but still fitting into the normal range.
But if they had understood the possible aftermath of a non-tested block, they would have implemented two blocks, and switched to the older one if misbehavior was detected.
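For contrast with the radix-4 SRT loop and its quotient lookup table, here is roughly what a 1-bit-per-iteration non-restoring divider looks like, sketched on 32-bit unsigned integers rather than real floating-point mantissa hardware:

```c
#include <stdint.h>
#include <stdio.h>

/* Non-restoring division, one quotient bit per iteration: the partial
   remainder gets the divisor subtracted or added depending on its sign,
   and its sign after the step becomes the quotient bit. No lookup table,
   but 32 iterations instead of ~16 for a radix-4 SRT loop. */
static uint32_t nonrestoring_div(uint32_t a, uint32_t d, uint32_t *rem) {
    int64_t r = 0;      /* signed partial remainder */
    uint32_t q = 0;

    for (int i = 31; i >= 0; i--) {
        r = r * 2 + ((a >> i) & 1);      /* shift in the next dividend bit */
        r = (r >= 0) ? r - d : r + d;    /* subtract or add, never "restore" */
        q = (q << 1) | (r >= 0);         /* quotient bit from the sign */
    }
    if (r < 0)                           /* single correction step at the end */
        r += d;

    *rem = (uint32_t)r;
    return q;
}

int main(void) {
    uint32_t rem;
    uint32_t q = nonrestoring_div(4195835u, 3145727u, &rem);
    printf("4195835 / 3145727 = %u remainder %u\n", q, rem);  /* 1 r 1050108 */
    return 0;
}
```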
> Except that he kept getting wrong answers on his homework.
> And then he realized that when he did it on one of the unix machines, he got correct answers.
And then a few months later he realized why ....