Lots of comments here about Intel adopting RISC-V or Arm and what this means for x86. I think it's worth considering the commercial issues from Intel's (and AMD's) perspective.
Firstly, adopting Arm would mean giving their blessing to an architecture that is now competing with x86 head on in key markets. That would only accelerate the demise of x86. So unless they have absolutely world beating products available on day 1, I don't think that's going to happen.
Secondly, adopting either architecture means that they would remove a key part of the 'x86 moat' and open up competition from anyone else with deep pockets who can put together a top quality silicon design team.
So I think it's likely that ultimately - whatever ISA they adopt - they will look for any way they can to distinguish themselves from the competition and avoid the commoditisation of their products.
Finally, just to add that supporting RISC-V - which is clearly competing successfully with Arm at the low end at the moment - helps to weaken a competitor so might be seen as a shrewd commercial move irrespective of any longer term plans.
This. I wonder why so many seem so eager to engage in the fantasy that Intel (and AMD) are planning to jump ship to ARM or RISC-V.
Yes, ISAs matter, to a point. And yes, ARM64 and RISC-V are much closer to 'best practice' ISA design than x86 with all its legacy baggage. But enough better to make AMD and Intel throw away the x86 market and their position in it, as explained in the parent comment? No effing way.
Intel has tried to move away from x86 three times already. They only stuck with it in the early 2000s because AMD64 forced their hand.
This chip shows why Intel would like RISC-V. The core ISA is RISC-V, but every other piece of the chip is proprietary and patented Intel stuff. They are far ahead of almost everyone in these areas, so they don't have much to fear in their current markets from the ISA itself.
Few companies have the ability, desire, or connections to spend billions designing a next-gen chip. China has thoroughly proved this. They have designed for x86, Alpha, MIPS, ARM, RISC-V, etc., but none of their designs were particularly good. For example, they recently released their Phytium D2000 chip. It was basically a clone of the A72 with improvements, but a Chips and Cheese analysis [0] showed that the supposed improvements actually resulted in a worse design.
If designing a good high-performance chip was down to just ISA, then everyone would be doing it.
Meanwhile, x86 patents for SSE2 and before have expired (with SSE3 expiring in 2023-4). Analysis of real-world code shows that only a couple percent use something beyond SSE3 and pretty much all of that has fallbacks for SSE2. There's already not much left to keep companies from designing x86-compatible chips (Apple's Rosetta x86 compatibility tracks this expiration exactly).
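A minimal sketch of what those SSE2 fallbacks usually look like, assuming a GCC/Clang toolchain (the sum_sse3/sum_sse2 kernels here are hypothetical stand-ins for real vectorized code):

    #include <stdio.h>

    /* Illustrative stand-ins for an SSE3-tuned kernel and its SSE2 fallback;
       a real library would ship two separately compiled, vectorized versions. */
    static float sum_sse3(const float *v, int n) { float s = 0; for (int i = 0; i < n; i++) s += v[i]; return s; }
    static float sum_sse2(const float *v, int n) { float s = 0; for (int i = 0; i < n; i++) s += v[i]; return s; }

    float sum(const float *v, int n)
    {
        /* GCC/Clang builtin: checks CPUID and takes the newer path only when
           the CPU actually reports SSE3; everything else gets the SSE2
           baseline that every x86-64 CPU is guaranteed to support. */
        if (__builtin_cpu_supports("sse3"))
            return sum_sse3(v, n);
        return sum_sse2(v, n);
    }

    int main(void)
    {
        float v[4] = {1, 2, 3, 4};
        printf("%f\n", sum(v, 4));
        return 0;
    }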
At the same time, x86 is incapable of competing in MCU and DSP markets and Intel's phone offerings were flatly rejected and only competitive when they had the huge advantage of being a couple fabrication nodes ahead of their competitors. Intel paid billions trying to make it happen, but never had many sales outside of the lemonade they made in the embedded market.
Intel and AMD would far rather have a non-proprietary solution like RISC-V win than a proprietary one like ARM.
I wouldn't say "move away", I would say "augment their product line".
What are the three? I can only think of two: IA64 and i960, but those weren't departures.
I worked on Itanium (and McKinley, and Madison). They were never intended for desktops. (In fact, back then there was still this notion of Desktop and Workstation, which is essentially dead today.)
The i960 was a fantastic CPU and I only know the Wikipedia version of what happened to it, since it was before my time (well, they were producing it when I worked there, but I was ignorant of the climate). However, it was never a "move away from x86" product; it had a great number of embedded customers. Again, never for desktops.
I think it's fair to say though that it was intended to be a move away from the main PC microprocessor line (which you can argue traced a path through the 8080 to the 8086 and beyond) and ultimately replace it at the high end - so in spirit similar to Itanium even if not an 'x86 replacement'.
x86 traces its DNA all the way back to the 8008, which led to the extended 8080, which led to the backward-compatible 8085, then the 8086 and the x86 architecture.
8008 was almost a full decade before the launch of the iAPX 432.
The roots of x86 were in one of the first major ISAs ever created (and something like the 2nd or 3rd microprocessor architecture) which is pretty remarkable when you think about it.
I think the real reason is they're thinking about opening their foundry to external customers in a meaningful way.
A modern foundry is expected to have a suite of hard IP blocks to drop onto a design to cover PPA-sensitive blocks like the CPUs someone would want, and blocks with analog bits like PHYs. The best way to ameliorate people's concerns is to have shipped a chip that runs with those blocks.
> Intel's phone offerings were flatly rejected and only competitive when they had the huge advantage of being a couple fabrication nodes ahead of their competitors.
I think there were a good number of ecosystem issues there. Android x86 phones shipped and were OK, but too many apps shipped native code without an x86 flavor. I don't remember if Google had per-architecture builds on the Play Store yet, but those also cause issues because people pull those builds and host them on APK sites, and then users have problems when installing them on phones with the wrong architecture.
Intel canceled the Atom-for-phones lines days before Microsoft demoed Continuum, which would have been an obvious outlet for an x86 phone. Of course, Microsoft threw in the towel on WM10 before launch too, so maybe Intel wasn't willing to stick it out because they saw Microsoft was going to mess it up. In an alternate reality, the Lumia 950 would be a phone in your pocket and an x86 desktop running real apps on your desk, instead of being stuck running only app store apps and (pre-Chromium) Edge.
The important x86 patents are all expiring. Nothing would prevent a third party from recreating AVX using different instruction designs that would avoid those patents too.
The non-commodity stuff is all the interconnects, memory controllers, caches, etc. Having access to the Athlon XP or Pentium 4 cache, interconnect, or MC designs simply doesn't matter. Intel has these bits locked down already.
As the ISA is commodity, the only parts that matter are efficiency, compatibility, extensibility, and cost.
RISC-V allows them to penetrate new markets where x86 either can't compete or people don't believe it could compete because it is much more efficient at the low end. On the high-end, simplifying stuff in one area means you have the ability to increase the complexity and performance somewhere else.
On the compatibility front, Apple has already forced their hand by being compatible enough to offer a path to ARM.
x86 is not so extensible at this point. Lots of the best instruction encodings are wasted on stuff like BCD and even x86_64 has lots of legacy and extensibility issues. RISC-V not only solves this problem, but Intel is big enough to exert a lot of pressure on future standards.
Cost is a problem that isn't to be underestimated. ARM charges 1-3% per chip. That's something like 8-10% of gross margins. RISC-V means Intel can get a new ISA that is already being picked up by everyone (rather than spending billions on forcing a new one only to fail as they've already done).
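A back-of-the-envelope version of that royalty arithmetic (the price, margin and royalty figures below are illustrative assumptions, not disclosed numbers):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures: a $200 chip, ~35% gross margin, 3% per-chip royalty. */
        double price = 200.0, gross_margin = 0.35, royalty_rate = 0.03;

        double gross_profit = price * gross_margin;   /* $70 */
        double royalty      = price * royalty_rate;   /* $6  */

        /* The royalty is charged on revenue, so as a share of gross profit it is
           royalty_rate / gross_margin - i.e. several times larger than it looks. */
        printf("royalty = $%.2f, %.1f%% of gross profit\n",
               royalty, 100.0 * royalty / gross_profit);
        return 0;
    }

With those assumptions the 3% royalty works out to roughly 8.6% of gross profit, which is where numbers in the 8-10% range come from.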
In short, there are a lot of advantages and very few downsides to Intel making the switch.
Not that I expect it to happen any time soon, but Intel and AMD are proving with every release that they have nothing for energy-efficient machines. M1 battery life is almost always the first thing people mention. And as the M2 gets released, more people will get used/cheaper M1s. Companies release more aarch64-compatible software, and even games, every day.
At some point the x86 companies will either have to respond or see more and more people migrate to either Apple or other Arm laptops (serious alternatives are not here yet, but a few producers are starting to experiment).
> AMD are proving with every release they have nothing for energy efficient machines
Every release AMD has done lately has two sides: side A is that for the same compute as before, you use fewer watts; side B is that for the same watts, you get more compute; and the third side is often "oh yeah, you can pump a lot more watts" (Zen 4 almost doubled TDP on the high-end parts, to compete with Intel's raised TDPs).
If you want an energy efficient AMD machine, you just have to limit the wattage. It may or may not get all the way to M1/M2 level of efficiency, but it's decent. Of course, lots of people are going to prefer performance, and it makes sense for AMD to allow that if system design can handle the power supply and cooling requirements. Apple gets to design their CPUs around an assumption that clock speed won't need to scale because cooling will not be sufficient for astronomical clocks, but Intel and AMD are in a competitive market where clock speed sells chips, so everything needs to scale. Arm's more relaxed memory model helps Apple as well.
Apple has been busy making their chips more efficient, while simultaneously putting beefier coolers and PSUs in their latest designs. I believe Apple has enough headroom to raise their clock speed.
I forgot to add that there is the risk of 'Osborning'[1] x86 on day 1 of announcing a move to a new architecture as they effectively declare it a 'legacy' product.
But why not just jump and let momentum carry x86 like before? The problem with Itanium was largely that the ISA itself sucked to work with and SW/tools were not ready for prime time when 3p SW/HW vendors like Nvidia were expected to have ported to it.
It doesn't seem like ARM, and maybe RISC-V, will have the same issues, at least in terms of magnitude, so I don't really see why it matters.
I think long term they risk ending up like Canon/Nikon in the DSLR market when Sony came along with the newfangled mirrorless technology - sometimes you gotta disrupt yourself and skate to where the puck is going.
Largely agree. I'd be astonished if they don't have high-performance RISC-V designs under development.
Problem is that the endpoint is much less attractive (for them) than where they are now and the transition will be very, very messy. At a time when the business is under strain for other reasons it's a risky move.
Should have done it a few years ago - when they had process lead - but hindsight is a wonderful thing!
Itanium was absolutely a plan to take out the competitors in the unix- and minicomputer market, where margins were much, much higher than in the generic x86 market. That worked. SGI, Digital, HP, Compaq/Tandem, they all fell.
It was not necessarily the plan to abandon the architecture, but once it was won, it also wasn't terribly important to keep going. Much like most corporate takeovers to this day.
Intel would have been happy to keep the market segmented for a few more years, but what happened instead was that the market vacuum was filled by Linux and x86 instead. That would likely have happened sooner or later anyway, but there you go.
It was a poor architecture though, and if not for AMD64 it would have been killed by something else more in line with traditional high-performance superscalar designs. Maybe even just Arm.
Even Microsoft directly shipped a PowerPC system running a modified NT kernel in the Xbox 360.
If Itanium continued to implode with no other alternatives, PowerPC would have been the most likely to pick up the slack. The main reason why it more or less failed was a lack of volume to pay for leading-edge R&D on the process side. Without AMD64, Intel's Itanium obsession combined with the mid-aughts Dennard scaling wall catching everyone with their pants down would have given a nice bit of breathing room for PowerPC to exceed x86-32.
They killed it at about the time they started working on AMD64 internally with AMD. Dave Cutler himself gave feedback on the pre-silicon design; it was basically co-designed with Microsoft. They kept PowerPC in public products until they had another way out. And to this day PowerPC support still exists internally in the NT kernel.
They also literally were shipping an NT derived kernel for Xbox 360 into the 2010s.
Between 1997 and the release of the Xbox in 2001 there were enough years of code changes (two major NT-based releases); also, the Xbox NT kernel was basically just that, a stripped-down kernel without any relation to the Windows 2000 userspace.
And notice I said 360; they reimported the PowerPC support from mainline NT in 2005. And to this day they still have PowerPC support internally in mainline NT.
And that small bit doesn't address the core of what I'm saying, that PowerPC support would have seen even more support if the two options were that and Itanium.
Today not 'everyone' is using x86 - by a long margin - so it's a bit of a stretch to say that in a hypothetical alternative history an architecture that failed in the market would be utterly dominant.
If you exclude everything that doesn't go in your direction and hypothesize that a poor architecture would have risen and then never been replaced, you conclude that "everybody" would use it?
No, people would just use 32-bit x86 and continue with that for many more years, and move to SPARC/PowerPC for the few cases where you really need 64-bit.
People aren't just going to use really bad processors because they have no other options.
This was right in the time period of Windows Everywhere: Windows on MIPS, Alpha and so on. And for server workloads, just going to Unix is totally fine.
People would rather run server workloads on Unix than use Windows with shitty, expensive processors.
People who are unwilling to move to Unix would very likely just stick with 32-bit instead.
That version of Windows died with NT 4.0, several years before Itanium was a product.
We were running Windows 2000 in production, alongside AIX, HP-UX and Solaris workloads across all our customers back in 1999-2003, before we got hit by the first dot-com crash.
I think offering a RISC-V + full featured x86 option could be a pretty big differentiator during the transition. They'd have to be really sure of out competing competitors though.
Short of that, I expect ARM or RISC-V to become the dominant ISA on servers and clients in 3-7 years, and where do they go from there? Become the next PPC?
First, the whole point of RISC-V is that if you use it to create new processors, you do not have to pay any licensing fees to anyone. If you want to use the ARM, x86 or AMD64 instruction set in a new processor, you have to pay a license fee to the owner of that instruction set (Arm, Intel or AMD respectively). Due to the cross-licensing agreement Intel and AMD have with each other for each other's instruction set (x86 and AMD64), this has negligible financial impact on their profit margins.
Second, Intel's and AMD's x86 and AMD64 are the dominant platforms on the server and desktop market today. The ARM instruction set architecture (ISA) dominates the mobile platform and is only now competing with Intel and AMD in the desktop and server markets. If Intel or AMD drop support for x86 and AMD64 and migrate to ARM, it will make ARM the dominant ISA on which all IoT, mobile, desktop and server software runs. This is obviously not in Intel or AMD's interest. Migrating to RISC-V would mean they have to help promote a completely new ISA and help developers migrate their software to it. Doing so would also kill the x86 and AMD64 platform.
So unless RISC-V ISA actually offers some real technical advantage (like drastically lowering the power requirement and boosting computing performance) it really makes no sense for Intel and AMD to shift to it.
Note that the news here is not that Intel and SiFive have built a RISC-V chip, but that SiFive (who have ventured into making RISC-V chips) has partnered with Intel Foundry Services to make the chips in Intel's fab. This is Intel diversifying to also make chips for others in its foundry, like Samsung and TSMC already do.
"SiFive (who have ventured into making RISC-V chips)"
SiFive don't make RISC-V chips, except in small volumes as a demonstration. Their business is licensing CPU cores to companies that do make chips.
"Horse Creek", as its naming style suggests, is an Intel product that uses licensed SiFive CPU cores. SiFive will use the chip to make high priced dev boards. We don't know yet who else will use it.
The apparent success of the project is likely to get other SiFive customers to, as you say, use Intel Foundry Services instead of the traditional TSMC or Samsung.
I don't know whether Intel is designing high-performance RISC-V cores of their own. It's not unlikely. But there are also others announced to be providing cores to Intel Foundry Services, including Rivos, who are developing an M1-class RISC-V core (they have a number of Apple's core designers, including some of the founders of PA Semi, which Apple bought to establish their CPU design team in the first place).
A way to mix x86 and ARM/RISC-V code/binaries, either by translating like Transmeta or by just having the other cores nearby and letting the OS deal with it via the scheduler.
i.e. what if the M1 Pro/Ultra had x86 cores in addition to ARM? Would that have made the transition easier from a SW perspective? Could Intel have implemented a translation layer better than Apple, owing to IP constraints?
> Could Intel have implemented a translation layer better than Apple owing to IP constraints?
impossible.
Rosetta 2 is for a relatively short transition period, not something companies like Apple/Intel would pour a huge amount of money into. With such limited funding and expected lifespan in mind, it is fair to say that Rosetta 2 is already close to perfect.
Also, Intel has a track record of producing software with horrible quality. It is vastly different from Apple, which has done very well across a long list of software projects for decades. Just look at the negative comments regarding Intel's most recent Arc video card software -
Let's be crystal clear - Intel and Apple are not operating on the same level here; their market caps have a 20x gap for a very, very good reason. We are talking about the resting-and-vesting company that pushed out about 5% annual performance increases for its desktop products for something like 8-10 years in a row.
China is moving to RISC-V to remove its dependency on amd64 and arm64, and thus to US sanctions since Intel/AMD and ARM are subject to US law.
Intel wants to keep the ability to serve the Chinese market; that means in the long term having RISC-V offerings, hence the investment in SiFive.
Unlike Otellini, Gelsinger is not a bean-counter who sold off StrongARM/XScale to Marvell and thus made Intel irrelevant to the ARM market, and by extension to mobile computing.
> It also integrates Intel’s DDR5 PHYs supporting 5600 MT/s rates
I had to look up [1] that MT/s is short for megatransfers (or million transfers) per second. Compared to specifying memory speed in MHz, it better reflects that DDR performs two transfers per clock cycle.
Ahhh, that's why MT/s exists. All I need to know now is how many data bits are in a transfer and how many memory channels there are, and I'll be able to calculate how fast the memory interface is.
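That calculation is straightforward once you have those two numbers; here's a rough peak-bandwidth estimate for the DDR5-5600 figure above, assuming a conventional 64-bit-wide channel and a dual-channel setup (both assumptions, not anything from the article):

    #include <stdio.h>

    int main(void)
    {
        double mtps = 5600e6;          /* DDR5-5600: 5.6 billion transfers/s per channel */
        int bits_per_transfer = 64;    /* typical DDR channel width                      */
        int channels = 2;              /* e.g. a dual-channel configuration              */

        double bytes_per_sec = mtps * (bits_per_transfer / 8.0) * channels;
        printf("peak: %.1f GB/s\n", bytes_per_sec / 1e9);   /* ~89.6 GB/s */
        return 0;
    }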
After Apple silicon was released, I was wondering if Intel and AMD would respond with some ARM SoCs down the line due to the seeming "end of road" for x86. But now, what if they are intending to hold out on x86 long enough (~5-7 years) to be able to go all in on RISC-V?
An ARM-based design would make sense, except unlike Apple they'd have to buy a license. Apple has a perpetual license, so they don't have to pay a thing and can do what they want with it - like the M1 processor. But Intel does not have that. So, developing RISC-V in the background without committing to it yet makes a lot of sense. It might actually give them a path to come up with an answer to the M1 that isn't just a slightly more efficient x86 processor.
Apple just demonstrated that changing CPU architecture is not that big of a deal; so the value of x86 compatibility isn't what it used to be. You can just emulate it and it's fine. Even for games apparently. So Intel, backing an architecture that is already starting to compete with arm that is free makes a lot of sense.
AMD has the same challenge. And despite Nvidia failing to buy ARM, it's pretty clear that their long term strategy is not going to be letting other companies supply CPUs but to provide a complete solution.
I think the recognition of “Apple showed that it’s not a big deal” would end similar to the one “The US showed that military invasions from superpowers into middle powers are not a big deal”.
The amount of raw effort that Apple put into making that transition _appear_ “no big deal” will be hard to adequately appreciate! It _noticeably_ affected software quality at least two MacOS versions prior (drop of 32-bit support and forcing all API accesses to go via their frameworks) and will have cost them hundreds of millions if not billions in engineering effort and unknowable amounts of lost sales in the mean time.
Yes it paid off. Obviously. But emulating their move, I’m not sure if there is even a single company able to do that.
The Mac isn't locked down like iOS, but there's still no expectation that software built for one OS major release will continue to work for the next.
If the Apple fandom wiki is correct, the 68k emulator for PPC was included in all PPC releases, but Rosetta was included in 10.4 and 10.5, optional in 10.6 and unsupported in 10.7; its scope was more limited than the 68k emulator as well. I expect Rosetta 2 will have a similarly limited lifetime.
> Apple has a perpetual license so they don't have to pay a thing and can do what they want with it.
"Everybody" knows that Apple has an ARM architectural license, but AFAIU the terms haven't been disclosed. Presumably they got a sweeter deal than other ARM architectural licensees when they got rid of their ownership in ARM ages ago, but, "don't have to pay a thing" and "can do whatever they want" sounds a bit too sweet to be true?
> So, developing RISC-V in the background without committing to it yet makes a lot of sense.
TBH, I think Intel's interest in RISC-V is more about hurting ARM in the embedded market than about planning to sunset x86.
Intel is demonstrating the capabilities of their fab services, including the IP blocks they have available. This SiFive core is no different in that respect from the PCIe controller or the DDR5 memory controller. Many companies want to build custom silicon for various reasons (although the current value proposition seems to be ML-related). Those companies will need many pieces to get data in and out of their custom solution, and a Linux-capable CPU core to handle networking or a small amount of traditional compute is often necessary. Intel building a RISC-V core should not be seen in light of their traditional CPU business; it's not even in the same league in terms of performance. It's just a technology demonstrator to prove to companies that Intel can actually deliver designs that integrate Intel-designed IP and regular fabless semiconductor designs. The fact that RISC-V is guaranteed press is just icing on the cake.
> Intel is demonstrating the capabilities of their fab services, including the IP blocks they have available.
Oh yes, absolutely. (I was going to mention that earlier, but the edit timer had expired.)
But yes, splitting fab service into a separate business unit that is seriously open for 3rd parties seem to be a major strategic shift since Gelsinger took over the helm. And it probably makes sense, as TSMC et al have demonstrated the merchant fab model can work for the top end designs as well and the entire rest of the industry is moving towards that.
So in a way, unless Intel wants to be the odd man out with their own idiosyncratic workflows, this is a route they must go down.
Not quite... Apple also had to modify their ARM cores on the hardware side to support x86-64's stronger memory ordering. But RISC-V has options for both, I think.
For an attempt at doing it without modifying the hardware, see Microsoft's slow emulation on their ARM version of Windows, which was rejected by consumers and increased battery consumption much more.
TSO memory model is indeed a standard option on RISC-V, that no one had implemented yet as far as I know.
Software designed for the normal RISC-V memory model will work perfectly on a TSO machine, if perhaps a bit more slowly. Programs that depend on TSO semantics may be buggy on the standard RISC-V memory model (or on ARM too).
RISC-V also has a FENCE.TSO instruction that can be inserted as needed into software running on normal RISC-V. If you're going to use it a lot then you'd be better off implementing an actual TSO mode (not least because of code size). FENCE.TSO even works on (standards compliant) hardware that doesn't know about it, because unknown fences are supposed to be executed as FENCE RW,RW (the strongest fence), at some loss in efficiency.
Alibaba T-Head unfortunately didn't read this part of the spec when they designed the C906 and C910 cores, which raise an illegal-instruction trap if they encounter an unknown fence such as FENCE.TSO. OpenSBI now does trap-and-emulate where necessary, but of course at another loss of efficiency. This bug affects the Allwinner D1 and the Alibaba ICE SoC.
I don't know whether this erratum in the C910 has been fixed in the TH1520 SoC in the Roma laptop (and other unannounced, cheaper SBCs).
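To make the memory-model point above concrete, here's the classic message-passing pattern in portable C11 atomics; on a TSO machine (x86, or RISC-V with the Ztso option) the release/acquire ordering costs essentially nothing, while on weakly ordered RISC-V or ARM the compiler has to emit real fence (or acquire/release) instructions, and code that omits them can observe the flag without the data:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    static int payload;              /* plain, non-atomic data    */
    static atomic_int ready = 0;     /* flag guarding the payload */

    int producer(void *arg)
    {
        (void)arg;
        payload = 42;
        /* Release store: on TSO this is an ordinary store; on RISC-V's weak
           model it implies a fence so the payload is visible before the flag. */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return 0;
    }

    int consumer(void *arg)
    {
        (void)arg;
        /* Acquire load, pairing with the release store above. Without this
           pairing, a weakly ordered CPU may legally show ready == 1 while the
           payload read still returns stale data. */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            thrd_yield();
        printf("%d\n", payload);     /* guaranteed to print 42 */
        return 0;
    }

    int main(void)
    {
        thrd_t p, c;
        thrd_create(&c, consumer, NULL);
        thrd_create(&p, producer, NULL);
        thrd_join(p, NULL);
        thrd_join(c, NULL);
        return 0;
    }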
I think I remember hearing that Intel's current or recent past networking products are Arm based. I had always assumed Intel was interested in RISC-V as a possible replacement for the wide array of non-x86 cores in network, baseband and storage controller products they fab.
That doesn't make sense to me. Apple was starting from scratch (well, from PA Semi, which made Power, not ARM), but Intel and AMD have been designing high-performance CPUs for decades and already changed architecture (to AMD64) successfully. On top of that, both have experience with ARM processors.
It takes around 4 to 6 years to design a new processor even if you have experience with it.
A new competitive processor in an area where all you have is "some experience"... Well, I wouldn't hold my breath.
If (and that's a big if) AMD and Intel started looking into ARM seriously after Apple unveiled M1, I wouldn't expect any ARM processor out of them earlier than 2025-2026.
Funnily enough I'd expect Amazon to perform better in this space (Graviton has been in deployment since 2019, three years ago, and is now in its third iteration).
Never released. They did have an Opteron A1100, which was three years later than its original announcement date (released in 2017, announced for 2014)... and somehow I couldn't even find a proper overview of its performance.
So AMD hasn't had much in the ARM department in the past 5-7 years.
But we don't know what they have unless they announce it.
My guess is that they kept an ARM Zen frontend working, internally. And they probably have a RISC-V frontend now, alongside many other projects. They are large enough to do so.
When they perceive they can launch a successful product, they do so. Otherwise, we never know of these efforts.
I am a M1 Max user and I would love to have Apple go back to Intel.
Sure, the battery life and not overheating are great, but you are limited to Arm-based solutions when installing Linux VMs, for example. Or gaming. Everyone seems to have forgotten how awesome it was to be able to do everything on one machine.
It is a compromise I live with, not something I prefer.
And at least half of Apple’s benefit comes from process technology, where the gap will be closing very soon, if for no other reason than the fact that after 2nm there is not much more room to grow with silicon, so even laggards will have time to catch up on process and yields.
The world has been running on x86 for so long that it's a waste to just decide to rework so many chunks of it.
Macs are not and never will be ideal for gaming. Apple execs don’t care about gaming, don’t understand it, are not interested in it, are not willing to do what is necessary to excel in gaming. Lots of people at Apple are working on game-adjacent technologies and their efforts are doomed because the execs sabotage every gaming opportunity eventually.
I am getting one of these https://dk.starlabs.systems/pages/desktops to go with my M1 to run x86 based VMs, docker and similar stuff. An extra expense, but hopefully a good combination.
> And at least half of Apple’s benefit comes from process technology…
Apple's performance-per-watt advantage is boosted by process improvements, but nothing I've read attributes anything close to half of that advantage to process. (References would be welcome.)
Is x86 at the end of the road? I don't follow closely, but my impression is it's been getting faster and faster ever since AMD stepped up and provided some competition with Ryzen.
Both AMD and Intel have released some truly impressive CPUs lately, and this month both of them are releasing their next-gen products (ryzen 7000 and raptor lake respectively, I believe). The new ryzen chips are looking very impressive, and if Intel’s announced numbers are to be believed, raptor lake is going to be pretty great.
Where they fall down is on perf/watt, not raw performance. However there’s a lot of things that go into that difference, not just ISA, so I’m not sure if anyone has really decided if x86 is fundamentally less efficient, or just currently less efficient due to current design choices and constraints.
> Where they fall down is on perf/watt, not raw performance
Raptor Lake is a pretty big jump in perf/watt. In just one generation they are claiming that Raptor Lake at just 65W delivers similar performance to Alder Lake at 250W. AMD made a similarly huge jump in efficiency last year with Ryzen 6000 for mobile.
Intel x86 will make even greater leaps in the future. The E-cores (where they added more this generation, but no (?) P-cores) are as small as ARM cores, and the saved die space can be used for caches (Intel already increased caches again on Raptor Lake) for performance and power efficiency (as Apple shows). Then there is 5nm, which is a big reason for Apple's perf/watt performance.
No. AMD and Intel are the oligopolistic providers for an unbelievably vast software ecosystem that practically rules all computing outside embedded (and some legacy mainframes here and there), are they going to throw away that market position just because the cleaner encoding of ARM or RISC-V would save an estimated low single-digit % of decoding power [1]?
Top-of-the-line x86 decoders are 6 wide (with limits of 64 aligned bytes on input, making average throughput around 4 per cycle, as admitted by Intel in their docs). AArch64 can be decoded as wide as you wish thanks to the fixed instruction size. No solution to this other than a fixed-length encoding.
>No solution to this other than a fixed length encoding.
This problem does not apply to RISC-V, where with the C extension you get either 32-bit or 2x 16-bit instructions. The added complexity is negligible, to the point where if a chip has any cache or ROM in it, using C becomes a net benefit in area and power.
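The rule that keeps this cheap is that the two low bits of an instruction's first 16-bit parcel already tell the decoder its length; a minimal sketch (only the 16- and 32-bit cases used by the ratified extensions are handled here):

    #include <stdint.h>
    #include <stdio.h>

    /* RISC-V base encoding scheme: low bits 00/01/10 mean a 16-bit compressed
       instruction, 11 (with bits [4:2] != 111) means a 32-bit instruction.
       Longer formats are reserved and unused by current standard extensions. */
    static int insn_length(uint16_t parcel)
    {
        if ((parcel & 0x3) != 0x3)
            return 2;                      /* C-extension, 16 bits        */
        if ((parcel & 0x1c) != 0x1c)
            return 4;                      /* standard 32-bit instruction */
        return 0;                          /* reserved longer encodings   */
    }

    int main(void)
    {
        printf("%d\n", insn_length(0x4501));  /* c.li a0, 0       -> 2 */
        printf("%d\n", insn_length(0x0513));  /* low half of addi -> 4 */
        return 0;
    }

So a wide decoder only has to look at a couple of bits per parcel to find instruction boundaries, rather than the multi-byte, prefix-dependent length calculation x86 requires.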
ARMv8 AArch64 made a critical mistake in adopting a fixed 32-bit opcode size - a mistake we can see in practice when looking at the L1 cache size that the Apple M1 needed to compensate for poor code density.
L1 is never free. It is always very costly: Its size dictates area the cache takes, the latency of this cache, the clocks the cache itself can achieve (which in turns caps the speed of the CPU), and how much power the cache draws.
As you mentioned decoder width: There's Ascalon[0], a RISC-V microarchitecture that's 8-decode (like M1), and 10-issue, by Jim Keller's team at Tenstorrent. It isn't in the market yet, but is bound to be among the first RISC-V chips targeting very high performance.
Note that, at that size (8-decode implies lots of execution units, a relatively large design), the negligible overhead of C extension is invisible. There's only gains to be had.
C extension decode overhead would only apply in the comically impractical scenario of a core that has neither L1 Cache nor any ROM in the chip. Such a specialized chip would simply not implement C. Otherwise, it is a net win.
Sure, RISC-V is the wave of the future, but there's more to a hot chip than an open/free ISA. RISC-V is a few years out from directly competing w/ Intel, but it's advancing very quickly.
Also worth looking at what StarFive is doing. They seem to have avoided a few corporate missteps SiFive fell prey to (no, I'm not saying SiFive is dying; I'm saying StarFive was able to learn from SiFive's mistakes and seems to be growing even faster).
The RISC-V community still feels very "academic" and seems to eschew people with real-world experience. Maybe that's for the better. Maybe it's better to ignore the things industry did wrong in the past. But If you're from the Arm or MIPS communities, it feels a tiny bit stuffy. Also, you need a Ph.D. to be taken seriously.
Most importantly... I would really love to buy one of these boards. But it sounds like that's in the distant future.
From investor calls (where they actually can't lie, at least not without fines and losing their jobs), they've had test samples of Meteor Lake for a while already, which uses Intel 4. They've even had successful power-on samples for their 20A node (maybe also 18A?). It sounds like Intel is finally making some advancements on their process roadmap. Of course we'll have to wait and see if they can hit acceptable yields, or if 4 becomes the next Intel 7 debacle. (Hopefully not.)
I think Intel's current leadership is slowly righting the ship; it just takes time in this space. It seems like Intel is making the right moves for 5-10 years from now instead of trying to maximize short-term returns.
I would argue that Intel has been surprisingly successful even in the short term, in terms of CPUs.
They had the 10th gen, where AMD was clearly better performing and more efficient, but they competed. They had the 11th gen, which was a failure.
But until the recent price cuts, Alder Lake won on performance and price against a lot of AMD chips. Alder Lake is doing that at a serious process disadvantage vs AMD - Alder Lake is not on TSMC but on Intel 7, which is their enhanced 10nm FinFET process. That Intel is even close to competing with AMD when so far behind on process is pretty amazing, a testament to their chip design.
If they get their manufacturing and research back under control in the medium to long term, they're golden. But even without that, I bet they're making a whole lot more money per chip fabbing it themselves than AMD is buying from TSMC.
I agree. They were helped somewhat by the pandemic and chip shortages from everyone else, but what could have been a disaster for Intel ended up being only a small blemish. If Intel gets back on track AMD is back in the position of chasing Intel in the cpu space and Nvidia in the GPU space. Worse for them is if Intel starts being a serious contender in GPUs.
The size of this chip, at 4 square millimeters, is pretty small; it probably makes for a good test chip before trying to tape out chonkier chips that would have a much higher fault rate.
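A rough illustration of why tiny test dies are attractive, using the classic Poisson yield model (the defect density here is an arbitrary assumed value, not anything Intel has published):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double d0 = 0.2;                              /* assumed defects per cm^2  */
        double areas_mm2[] = {4.0, 100.0, 600.0};     /* test die vs. bigger chips */

        for (int i = 0; i < 3; i++) {
            double a_cm2 = areas_mm2[i] / 100.0;
            /* Poisson model: yield = exp(-A * D0). Small dies lose very little
               to random defects; big dies lose a lot. */
            double yield = exp(-a_cm2 * d0);
            printf("%6.1f mm^2 -> %.1f%% yield\n", areas_mm2[i], 100.0 * yield);
        }
        return 0;
    }

With those assumed numbers a 4 mm^2 die yields around 99%, while a 600 mm^2 die would be closer to 30%, which is why you debug a new node and new IP blocks on something small first.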
RISC-V is an ISA free from toxic IP "à la" MPEG/HDMI/ARM/x86 (though such IP is only legal in barbaric countries).
"Write RISC-V assembly, run everywhere..."
It seems there are cheeky people trying to link RISC-V with the current tensions between the US and China. China saw the opportunity of RISC-V and is pushing hard (like literally the entire world should). But RISC-V is a US initiative, with the obvious intention of becoming an international standard for interoperability of CPUs at the assembly level, one that is extremely stable over time. India also has RISC-V CPUs, and if everything goes well for RISC-V, its spread should reach more and more CPU designers all around the world over time. But the ARM and x86_64 licence holders (where such licences are legal) may try to torpedo it (probably from the shadows, of course).
Having commercial developments on RISC-V helps the whole ecosystem and makes it possible to eventually have open cores that can run the standard software.
So it's kind of like how Linux could eventually run much of the software developed for Unix and took most of that market.
Currently it's simply not possible for an open core to actually compete against x86 or ARM.
"The SoC integrates Intel’s own PCIe 5.0 PHY with x8 lanes along with Synopsys PCIe 5 Controller. It also integrates Intel’s DDR5 PHYs supporting 5600 MT/s rates along with Cadence’s memory controller."
Naive question: why didn't they just use Intel memory controllers and Intel PCI controllers?
Meh. Good for Intel. Frankly it's souring to me to see a fresh core with the same usual suspects providing the usual IP blocks, though. Cores are easy. But all the uncore is ridiculously proprietary and ultra far from libre here, as in a lot of RISC-V.
The M1 was released in 2020 and AMD and Intel are still unable to compete in performance per watt. I can't believe that Intel and AMD have their current market cap. Investors should know by now that they make gasoline cars and Apple makes EV's. They are that far ahead.
They are completely different markets, the M1 and M2 are part of Apple's consumer devices, mostly premium notebooks running macOS. That's a niche market even compared to premium notebooks running Windows.
Net revenue is where Apple usually shines. In mobile I don't know the current situation, but for a long time Android had much higher adoption while Apple captured more than 100% of the profit (Android phones were losing money).
300 million laptops are sold each year, less than 30 million of those are made by Apple, and they start at $1200. I'm just guessing but I would expect a good 70 million of those 270 million laptops to be sold at premium between the ultrabooks, gaming and productivity devices. GeForce high performance laptops were the fastest-growing category last year since gamers and creators (with the exception of designers) both tend to favor GeForce RTX.
Performance per watt is something no one cares about. Laptops are a tiny market compared to servers. I'm pretty sure desktops are even a larger market. I'd rather have a beefy desktop PC than a laptop that's fundamentally handicapped by cooling problems.
The only people who don't care about performance per Watt are those who have only one computer with a handful of cores and it is running off mains power.
People running from batteries care a lot about performance per Watt.
So do people running thousands or tens of thousands of computers in a warehouse. Not only does the electricity to run the computers cost more than the computers themselves, but it also costs a lot of money to build and run the cooling systems.
Intel's 10nm mistake was trying too many new things at one time. Now that they've fixed that, they've not only fixed 10nm, but also fixed problems other fabs had pushed off until later, so catching up will take a lot less time.
No, it was the opposite. Getting stuck at 10nm was actually due to being unwilling to do new things. They refused to transition to extreme UV lithography. Gelsinger himself has admitted this in interviews.