
The fact that the long article fails to make the historical/continuation link to MessagePack is by itself a red flag signalling a CBOR ad.

Edit: OK, actually there is a separate page for alternatives: https://cborbook.com/introduction/cbor_vs_the_other_guys.htm...


Notably missing is a comparison to Cap'n Proto, which to me feels like the best set of tradeoffs for most binary interchange needs.

I honestly wonder sometimes if it's held back by the name— I love the campiness of it, but I feel like it could be a barrier to being taken seriously in some environments.


Doesn't Cap'n Proto require the receiver to know the types for proper decoding? This wouldn't entirely disqualify it from comparison, since e.g. protobufs are that way as well, but it does make it less interesting to compare to CBOR, which is type-tagged.


There's quite a few formats that are self-describing already, so having a format that can skip the type and key tagging for that extra little bit of compactness and decoding efficiency is a unique selling point.

There's also nothing stopping you from serializing unstructured data using an array of key/value structs, with a union for the value to allow for different value types (int/float/string/object/etc), although it probably wouldn't be as efficient as something like CBOR for that purpose. It could make sense if most of the data is well-defined but you want to add additional properties/metadata.

Many languages take unstructured data like JSON and parse them into a strongly-typed class (throwing validation errors if it doesn't map correctly) anyways, so having a predefined schema is not entirely a bad thing. It does make you think a bit harder about backwards-compatibility and versioning. It also probably works better when you own the code for both the sender and receiver, rather than for a format that anyone can use.

Finally, maybe not a practical thing and something that I've never seen used in practice: in theory you could send a copy of the schema definition as a preamble to the data. If you're sending 10000 records and they all have the same fields in the same order, why waste bits/bytes tagging the key name and type for every record, when you could send a header describing the struct layout? Or if it's a large schema, you could request it from the server on demand, using an id/version/hash to check if you already have it.

In practice though, 1) you probably need to map the unknown/foreign schema into your own objects anyways, and 2) most people would just zlib compress the stream to get rid of repeated key names and call it a day. But the optimizer in me says why burn all those CPU cycles decompressing and decoding the same field names over and over. CBOR could have easily added optional support for a dictionary of key strings to the header, for applications where the keys are known ahead of time, for example. (My guess is that they didn't because it would be harder for extremely-resource-constrained microcontrollers to implement).
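
To make the preamble idea concrete, here's a rough sketch in Python. It assumes the third-party cbor2 package, and the header/rows framing is just something made up for illustration, not part of any CBOR spec:

    import cbor2  # third-party CBOR codec, assumed installed

    records = [{"id": i, "name": f"user{i}", "score": i * 0.5} for i in range(10000)]

    # Naive encoding: every record repeats its key strings.
    naive = cbor2.dumps(records)

    # Preamble encoding: send the key list once, then each record as a bare array.
    keys = ["id", "name", "score"]
    framed = cbor2.dumps({"keys": keys,
                          "rows": [[r[k] for k in keys] for r in records]})
    print(len(naive), len(framed))  # the framed form drops the repeated key names

    # Receiver side: rebuild dicts by zipping the header keys with each row.
    msg = cbor2.loads(framed)
    decoded = [dict(zip(msg["keys"], row)) for row in msg["rows"]]
    assert decoded == records

(It only wins when the records really are homogeneous, which is the premise above.)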


> I feel like it could be a barrier to being taken seriously in some environments.

Working as intended. ;)


In all seriousness: I develop Cap'n Proto to serve my own projects that use it, such as the Cloudflare Workers runtime. It is actually not my goal to see Cap'n Proto itself adopted widely. I mean, don't get me wrong, it'd be cool, but it isn't really a net benefit to me personally: maybe people will contribute useful features, but mostly they will probably just want me to review their PRs adding features I don't care about, or worse, demand I fix things I don't care about, and that's just unpaid labor for me. So mostly I'm happy with them not.

It's entirely possible that this will change in the future: like maybe we'll decide that Cap'n Proto should be a public-facing part of the Cloudflare Workers platform (rather than just an implementation detail, as it is today), in which case adoption would then benefit Workers and thus me. At the moment, though, that's not the plan.

In any case, if there's some company that fancies themselves Too Serious to use a technology with such a silly name and web site, I am perfectly happy for them to not use it! :)


Ha, I wondered if you'd comment. Yeah that's a reasonable take.

I think for me it would be less about locking out stodgy, boring companies, and perhaps instead it being an issue for emerging platforms that are themselves concerned with the optics of being "taken seriously". I'm specifically in the robotics space, and over the past ten years ROS has been basically rewritten to be based around DDS, and I know during the evaluation process for that there were prototypes kicked around that would have been more webtech type stuff, things like 0mq, protobufs, etc. In the end the decision for DDS was made on technical merits, but I still suspect that it being a thing that had preexisting traction in aerospace and especially NASA influenced that.


man I love HN lol


I was hoping Eclipse AsciiDoc would already deliver a stable spec, TCK, and (new) reference implementation: https://asciidoc-wg.eclipse.org/projects/


I was expecting it to be Yet Another Eclipse Abandonware but https://gitlab.eclipse.org/eclipse/asciidoc-lang/asciidoc-la... has commits from 5 hours ago, and https://gitlab.eclipse.org/eclipse/asciidoc-lang/asciidoc-tc... was 10 months ago, which isn't nothing. But yikes, they are not doing themselves any discoverability favors by having almost every meaningful-looking link on https://projects.eclipse.org/projects/asciidoc.asciidoc-lang point to itself.


I was hoping it would be NIST ECDSA P-256 (algo 13) for smaller DNS packets, instead of continuing with RSA 2048 (algo 8) as they did.


Most of the big TLDs have already converted to algo 13 -- .org is still lingering on algo 8, but .com, .net, .edu, .gov have all converted, so a lot of the DNS traffic is using smaller signatures already.

Changing the algorithm for the root is being studied - see for instance https://lists.icann.org/hyperkitty/list/ksk-rollover@icann.o... ; I wouldn't be surprised to see an algo change as part of the next root key rollover.


My guess is they did that to be compatible with FIPS 140-2.

FIPS 140-3 allows ECDSA, but isn't widely deployed yet (among sites required to comply), so using ECDSA would probably cause issues for government organizations that need to use FIPS and DNSSEC.


Nah. Changing algorithms is a bigger deal than rolling the key. They want to make sure rolling the key is a non-event before taking on changing algs. Changing the algorithm is being discussed, however.


Renaming to "TheAI" would match their ambition for AGI.


Drop the "The", just "AI"


Altman Intelligence Inc.


Altman Inc seems sufficient enough


It's there, in Shared bindings: "Alt+L lists the contents of the current directory, unless the cursor is over a directory argument, in which case the contents of that directory will be listed."


Ah, it was an L, I thought it was an I (eye). Thanks


I can confidently say that Elon's X won't become Japan's WeChat.


An easy bet to make, given that WeChat is Chinese.


The commenter meant a Japanese version of WeChat.


The book that includes the results from these slides has broader scope, and also can be downloaded for free from arXiv: https://www.maths.ed.ac.uk/~tl/ed/

"The starting point is the connection between diversity and entropy. We will discover:

• how Shannon entropy, originally defined for communications engineering, can also be understood through biological diversity (Chapter 2);

• how deformations of Shannon entropy express a spectrum of viewpoints on the meaning of biodiversity (Chapter 4);

• how these deformations provably provide the only reasonable abundance-based measures of diversity (Chapter 7);

• how to derive such results from characterization theorems for the power means, of which we prove several, some new (Chapters 5 and 9).

Complementing the classical techniques of these proofs is a large-scale categorical programme, which has produced both new mathematics and new measures of diversity now used in scientific applications. For example, we will find: [...]"

"The question of how to quantify diversity is far more mathematically profound than is generally appreciated. This book makes the case that the theory of diversity measurement is fertile soil for new mathematics, just as much as the neighbouring but far more thoroughly worked field of information theory"


Ah, I was wondering about that. Their formula looks suspiciously like the definition of Rényi entropy.

I'm not too sure where the category-theoretical stuff enters though. They mention that metric spaces have a magnitude, but their end result looks more like a channel capacity (with the confusion matrix being the probability of confusing one species with another). Which, you know, makes sense: if you've got 'N' signals but they're so easily confused with one another that you can only send 'n' signals' worth of data, then your channels are not too diverse.

They do mention that this is equivalent to some modified version of the category theoretical heuristic, but is that really interesting? The link to Euler characteristic is intriguing, but from the way they end up at their final definition I'm not sure if metric spaces are really the natural context to talk about these things. It almost feels like they've stepped over an enriched category that would provide a more natural fit.


Metric spaces are enriched categories. They are enriched over the non-negative reals. The 'hom' between a pair of points is then simply a number: their distance.


And, these non-negative real numbers, which are these homs, are “hom objects”, so regarded as objects in “the category with as objects the non-negative real numbers, and as morphisms, the ‘being greater than or equal to’ “ ? Is that right?

So, I guess, (\R_{>= 0}, >=, +, 0) is like, a monoidal category with + as the monoidal operation?

So like, for x,y,z in the metric space, the

well, from hom(x,y) and hom(y,z) I guess the idea is there is a designated composition morphism

from hom(x,y) monoidalProduct hom(y,z) to hom(x,z)

which is specifically,

hom(x,y)+hom(y,z) >= hom(x,z)

(I said designated, but there is only the one, which is just the fact above.)

I.e. d(x,y)+d(y,z) >= d(x,z)

(Note: I didn’t manage to “just guess” this. I’ve seen it before, and was thinking it through as part of remembering how the idea worked. I am commenting this to both check my understanding in case I’m wrong, and to (assuming I’m remembering the idea correctly) provide an elaboration on what you said for anyone who might want more detail.)
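
For reference, here is the same thing written out compactly (a sketch; the base is conventionally taken to be the extended non-negative reals [0, infinity]):

    \[
      \mathcal{V} = \bigl([0,\infty],\ \geq,\ +,\ 0\bigr),
      \qquad \text{a morphism } a \to b \text{ exists iff } a \geq b .
    \]
    A metric space $(X,d)$ is then a $\mathcal{V}$-enriched category with
    $\hom(x,y) = d(x,y)$, and the enriched-category axioms become
    \[
      \underbrace{d(x,y) + d(y,z) \geq d(x,z)}_{\text{composition}},
      \qquad
      \underbrace{0 \geq d(x,x)}_{\text{identity}} .
    \]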


> are “hom objects”, so regarded as objects in “the category with as objects the non-negative real numbers, and as morphisms, the ‘being greater than or equal to’ “ ?

This works, but it's not quite what you want in most cases. There's a lot of stuff that requires you to enrich over a closed category, so instead we define `Hom(a,b)` to be `max(b - a, 0)` (which you can very roughly think of as replacing the mere proposition `a < b` with its "witnesses"). See https://www.emis.de/journals/TAC/reprints/articles/1/tr1.pdf for more.


Indeed they are. I'm saying it may not be the right context in this case.

At least what they seem to be doing has little to do with metrics, and a lot more to do with probability distributions.


It's not clear what you're seeking. Probabilities appear because the magnitude of a space is a way of 'measuring' it -- and thus magnitude is closely related to entropy. Of course, you can follow your nose and find your way beyond mere spaces, and this may lead you to the notion of 'magnitude homology' [1]. But it's not clear that this generalization is the best way to introduce the idea of magnitude to ecology.

[1] https://arxiv.org/abs/1711.00802


Why not define ecological diversity as the number of distinct biological species living in the area?


This is precisely the question answered by the OP. The answer is, "because there is a whole spectrum of things you might mean by 'diversity', of which 'number of distinct species' is only one extremum".


And also, I assume, because the concept of "species" isn't all that well defined?


It is well defined: a group of living organisms consisting of similar individuals capable of exchanging genes or interbreeding


Ring species make your definition non-transitive. The same with species that can interbreed but exhibit hybrid breakdown.


I invite you to examine the notes of the International Ornithological Congress... The difference between species and subspecies is quite subtle, and subject to interpretation, because no one is really going to do the experiment to find out if two individuals of geographically distinct populations can actually still interbreed.


So if you have a few grams of soil and want to know how many species of micro organisms are in there, you're setting them up with dates to see which ones will end up breeding?


One of a number of definitions. It is one that allows lions and tigers to be the same species.


Does he provide an example of another definition of diversity that makes sense in a biological context?


Yes, the two extremes are captured by the common metrics of "species richness" which is the pure "how many unique species are there", and "species evenness", which depends on how evenly distributed the species are. A community in which 99% of individuals are species A and the remaining 1% are from species B-G is exactly as species rich as a community in which there are equal numbers of individuals of each species, but it is much less even (and therefore, under one extreme of diversity, less diverse). In different contexts and for different ecological questions, these two different versions of diversity can matter more or less, and there are metrics which take both into account, but this is a fully generalized solution which shows you relative diversity along the entire spectrum from "all I care about is richness" to "all I care about is evenness".

-edit- by the way, since it may not be obvious to everyone, the reason why an ecologist might care about evenness is because extremely rare species are often not very important to the wider community. From an ecological function perspective, there is very little difference between my above example of the 99%/1% community and a community that is 100% species A. So a community with two, equally populous species might have more functional diversity than a community with one very abundant species and several more, very rare species.
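
For anyone who wants to play with that spectrum, here is a rough Python sketch (mine, for illustration) using Hill numbers, the exponentials of Rényi entropy, which as far as I understand is the family of measures the linked book builds on. q = 0 is pure richness; large q cares only about the common species:

    import math

    def hill_number(p, q):
        """Effective number of species of order q: (sum p_i^q)^(1/(1-q))."""
        p = [x for x in p if x > 0]
        if q == 1:  # the q -> 1 limit is exp(Shannon entropy)
            return math.exp(-sum(x * math.log(x) for x in p))
        return sum(x ** q for x in p) ** (1 / (1 - q))

    uneven = [0.99] + [0.01 / 6] * 6  # 99% species A, 1% spread over B-G
    even = [1 / 7] * 7                # seven equally abundant species

    for q in (0, 1, 2, 10):
        print(q, round(hill_number(uneven, q), 2), round(hill_number(even, q), 2))
    # Both communities have richness 7 at q=0; as q grows the uneven one
    # collapses toward 1 effective species while the even one stays at 7.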


But why "newest" Debian release is 9.x (Stretch) released in 2017 (current alias "oldoldstable") ?


And the latest Ubuntu appears to be bionic (18.04).


Instruction interpreter is nicely human-readable: https://github.com/LekKit/RVVM/blob/staging/src/cpu/riscv_i....


The RISC-V ISA is specifically designed to be nice and regular, easy to decode and work with, which definitely shows here (also in the RTL code, if you look at some of the well-designed RISC-V cores).

Of course, both x86 and Arm started like that as well - but after 20+ years of evolution, they have to drag along a lot of history. (And one never really takes things away from an ISA; you only add new features, and at best deprecate old ones.)

That said, with the extensibility of the ISA in mind as a design principle, I do have good hopes that RISC-V will stand the test of time reasonably well in that sense...
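
To make "easy to decode" concrete, here is a minimal sketch (mine, not from RVVM) of pulling the fixed fields out of a 32-bit instruction word; the register fields sit at the same bit positions whenever they are present, which is a big part of why these interpreters stay readable:

    def decode_rv32(insn: int) -> dict:
        """Extract the fixed fields of a 32-bit RISC-V instruction word."""
        return {
            "opcode": insn & 0x7F,          # bits 6:0
            "rd":     (insn >> 7) & 0x1F,   # bits 11:7
            "funct3": (insn >> 12) & 0x07,  # bits 14:12
            "rs1":    (insn >> 15) & 0x1F,  # bits 19:15
            "rs2":    (insn >> 20) & 0x1F,  # bits 24:20
            "funct7": (insn >> 25) & 0x7F,  # bits 31:25
        }

    # "add x3, x1, x2" encodes as 0x002081B3
    print(decode_rv32(0x002081B3))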


The ARM64 ISA does not have 20+ years of evolution as it was announced in 2011. It’s essentially a brand new ISA and has cast off almost all of the legacy of previous Arm ISAs.


This seems right at first; however, about half of the armv8 spec is taken up by a copy of the armv7 spec and info about interop between the two ISAs. So the armv8 ISA is considerably constrained not only by the interop requirements, whereby an armv8 OS/hypervisor must be able to control the environment of an armv7 process/OS, but also by the need for implementing both ISAs without excessive duplication of silicon. For example, an actual v8+v7 implementation must surely have a single pipeline supporting both ISAs.


It seems notable to me that Aarch64 is the only general purpose "clean sheet" ISA designed after 1990 (after POWER) that has condition codes. This seems like a prime example of something constrained by 32 bit ARM compatibility -- both porting software, and in the shared pipeline in CPUs that implement both.

On the gripping hand, there are a number of cores that implement ONLY the 64 bit ISA, starting from ThunderX, to Apple's M1/M2, to the latest ARM cores found in for example the Snapdragon 8 Gen 1/2 phone SoCs -- from memory only one of the three core types in those SoCs can run 32 bit code. ARM has said all their future ARMv9 cores will not have 32 bit compatibility.


I think the key test is whether significant features that were seen as problematic in earlier versions have indeed survived. AFAIK almost all were fixed.

Patterson and Hennessy comment on how different (better) ARM64 is compared to previous versions!


>Of course, both x86 and Arm started like that as well - but after 20+ years of evolution, they have to drag along a lot of history. (and one never really takes things away from an ISA, you only add new features, and at best deprecate old ones).

Feature-wise, RISC-V is already about on par and has managed not to become a mess.

Furthermore, unlike x86, ARM has broken binary compatibility several times in the past, and yet failed to use the chance to do anything else than minor adjustments.

RISC-V is in a really good position.


Pretty sure ARM has never broken binary compatibility, even across ISA expansions (16→32 bit, 32→64 bit).


ARM has broken binary compatibility many many times.

The first was probably when they took the condition codes out of the hi bits of the PC into their own register, to allow 32 bit addressing instead of 26 bit addressing, thus nuking all the software that thought (as it was encouraged to!) that saving and restoring the PC also saved and restored condition codes.

After Thumb mode was introduced they've waffled over whether you must use only BX to change modes, or whether any instruction that writes to PC (mov, add, pop ...) is ok.

Thumb only CPUs such as Cortex M3/M4/M7 can't run code from e.g. ARM7TDMI. The Thumb mode stuff will work, but it's not a complete ISA. On ARM7TDMI you have to switch to ARM mode for many things, while on ARMv7 you need additional instructions that don't exist on ARM7TDMI. CM0 adds only the essential operations to Thumb1, while M3/M4/M7 add basically a complete re-encoding of ARM mode -- minus conditional execution on every instruction.

In Thumb mode on ARM7TDMI (and successors) a 4 byte `BL` instruction is actually two 2-byte instructions run one after the other. You can separate them with arbitrary other instructions between them, as long as LR is not touched. In ARMv7 `BL` looks the same, but you can no longer split the two 2-byte halves -- it is now actually a 4-byte instruction, and other 4-byte instructions exist using the same initial 2 bytes.

And of course Aarch64 is utterly incompatible with any of the 32 bit ISAs. Early 64 bit cores from ARM also ran 32 bit code. At first they could boot 32 bit OSes, then only 64 bit OSes but they could run 32 bit user code. The latest ARM cores run only 64 bit code, and ARM has said no future 64 bit cores will run 32 bit code at all. Other 64 bit only cores also exist, including the ThunderX (9 years ago!) and Apple's M1 and M2 as well as all iPhones starting from the iPhone 8.


ARMv8 aarch32 is backwards compatible with v5-v7 but aarch64 mode cannot run aarch32 code of any kind. ARMv8 makes supporting aarch32 mode optional too and so it really is a hard break.


ARMv8 AArch64 is pretty different from earlier ARMs and is probably more performant than the overly academic RISC-V.


A while back, I read a Sun patent on not implementing some instructions and emulating them in the kernel's illegal operation trap handler. The whole patent seemed obvious to me, but I'm glad that it was patented and now expired, providing obvious prior art in any attempts to patent it today.

For MIT's 6.004 "Beta" processor loosely based on the DEC Alpha AXP, our test cases ran with a minimal kernel that would trap and emulate multiply, divide, etc. instructions using shifts and adds/subtracts, so we could implement more simple ALUs and still test the full instruction set.
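
The emulation side of that is tiny. A hedged sketch of the shift-and-add multiply such a trap handler might run (the trap entry and instruction decoding are ISA-specific and omitted here):

    def emulate_mul32(a: int, b: int) -> int:
        """Emulate an unimplemented 32-bit unsigned multiply with shifts and adds."""
        result = 0
        for bit in range(32):
            if (b >> bit) & 1:          # if this bit of the multiplier is set...
                result += a << bit      # ...add the correspondingly shifted multiplicand
        return result & 0xFFFFFFFF      # truncate to 32 bits, as the real MUL would

    assert emulate_mul32(1234, 5678) == (1234 * 5678) & 0xFFFFFFFF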

In any case, particularly in the world of hypervisors, it doesn't seem too hard to deprecate an instruction and stop implementing it in hardware, and push that complexity into firmware. As long as the CPU covers the Popek and Goldberg virtualization requirements, hypervisors could be nested, and the firmware could implement a lowest level hypervisor that handles unimplemented instructions.

More generally, I wish ARM64, RISC-V, and other modern ISAs had taken DEC Alpha AXP's idea of restricting all of the privileged instructions to the firmware (PALCode in the Alpha's case) and basically implementing a single-tenant hypervisor in the firmware. The OS kernel always used an upcall instruction to the hypervisor/firmware to perform privileged operations. In other words, the OS kernel for Alpha was always paravirtualized. (UNSW's L4/Alpha microkernel was actually implemented in PALCode, so in that case, the L4 microkernel was the firmware-implemented hypervisor and the L4 syscalls were upcalls to the firmware.) As it stands, hypervisors need to both implement upcalls for efficiency and also implement trap-and-emulate functionality for OS kernels that aren't hypervisor-aware. The trap-and-emulate portions of the code are both lower performance and more complicated than the upcall handlers. Both hypervisors and OS kernels would be simpler if the platform guaranteed a hypervisor is always present.

Always having a firmware hypervisor also allows pushing even more complexity out of hardware into the firmware. The Alpha had a single privilege bit indicating if it was currently running in firmware/hypervisor (PALCode) mode, and the firmware could emulate an arbitrary number of privilege levels/rings. The Ultrix/Tru64 Unix/Linux firmware just emulated kernel and user modes, but the OpenVMS firmware emulated more levels/rings. x86's 5 rings (including "ring -1"/hypervisor) could be efficiently emulated by hardware that only implements ring -1 (hypervisor) and ring 3 (user mode).

Edit: Taken to an extreme, you get something like Transmeta's Crusoe that pushed instruction decoding and scheduling into a firmware hypervisor JIT that works on the processor's microcode level. In retrospect, it seems that Crusoe went too far, at least as far as early 2000's technology could go. However, there's still plenty of optimization space in between the latest Intel processors on the extreme hardware complexity side and Transmeta's Crusoe on the extreme firmware complexity side.

Edit 2: In-order processors like (at least early) Intel Atom, P.A. Semi's PWRficient, and Transmeta's Crusoe tend to be more power-efficient. If the architecture were designed for it, I could see a case for limited out-of-order hardware capability with hardware tracing and performance counters/reservoir sampling of instructions that caused pipeline stalls. The firmware could then use run-time information to JIT re-order the instruction streams in hotspots that weren't well-served by the hardware's limited out-of-order execution capacity. This might be a viable alternative to ARM's big.LITTLE, where the firmware (or kernel) kicks in to provide a performance boost to hotspots when plugged in, and executes as a simple in-order processor when lower power consumption is desired, without the extra complexity of separate pairs of cores for performance and efficiency. Hardware sampling of which speculations work out and which are wasted would presumably guide the firmware's attempts to efficiently re-optimize the hot spots.


> I wish ARM64, RISC-V, and other modern ISAs had taken DEC Alpha AXP's idea of restricting all of the privileged instructions to the firmware

This is already possible on RISC-V to some extent, by trapping privileged instructions into higher privilege modes. Everything in the ISA is designed so this can be achieved cleanly. It also does not allow detecting the current privilege mode, so a kernel running in U-mode and trapping on each privileged instruction would never know it's actually not in S-mode.

There is even a software-based hypervisor extension emulator based on that, which brings KVM to non-hypervisor-capable HW: https://github.com/dramforever/opensbi-h


Right, but it's cleaner and more performant to use higher-level upcalls to the hypervisor rather than trapping and emulating every privileged instruction.

As it stands, the hypervisor needs to implement both trap-and-emulate and upcall handlers, and OSes need to implement both running on bare metal and (if they want to perform well on hypervisors) hypervisor upcalls.

If you want your hypervisor to support nested hypervisors, then I guess you'd still need to implement trap-and-emulate in the hypervisor to allow running a hypervisor on top. However, you at least remove the dual paths in the OS kernel if you just disallow the bare-metal case. This also allows a bit more flexibility in hardware implementation as you can change the hardware implementation and the instruction sequence in the hypervisor without needing to modify any legacy OS kernels.


>A while back, I read a Sun patent on not implementing some instructions and emulating them in the kernel's illegal operation trap handler. The whole patent seemed obvious to me, but I'm glad that it was patented and now expired, providing obvious prior art in any attempts to patent it today.

I understand opensbi runs in M mode, taking on that role among others.


> Both hypervisors and OS kernels would be simpler if the platform guaranteed a hypervisor is always present.

This would make an excellent RISC-V proposal!

The nice thing is that RISC-V already has the concept of the HART, you could have a supervisor hart that manages many virtual harts.


So is this, even more so to my taste:

https://github.com/cnlohr/mini-rv32ima/blob/master/mini-rv32...

A RISC-V emulator in one include file.


That's a cool achievement, but I imagine the single-switch decoder is a bit of a limiting design factor if this project grows (disabling/enabling instructions will introduce branches everywhere, and that's needed for proper FPU conformance). Also a bit jealous of how it got so popular just by presenting Doom running on a minimal ISA subset, while the VM that literally outperforms QEMU has been around for much longer. I definitely should time-travel back and tell my 2021 self to just show people Doom, since I was doing the same in my communication circles back then xD


Well, the next best thing is to show it now running Doom! Or Quake.


That's quite a copious way to implement it.


I agree a lot, but RVVM implements an API to register new instructions at runtime, so I wanted to keep the core sources as understandable and macro-free as possible. Plus I don't see how other switch-based approaches, in QEMU or the mini-rv32ima mentioned nearby, are better, so there isn't really any ground I could compare upon. I guess about 1K lines for a hugely performant, readable and extendable interpreter, which itself calls into a tracing JIT, is a fine trade-off, so no reason to change it at this point)


In 2017, a typo was fixed. The content is from 2013: https://cgit.freedesktop.org/wiki/xorg/log/Development/X12.m...


Yet I'm still running X11, with Wayland still struggling and no X12 in sight.


Not sure it's fair to call it struggling when it's the default in just about every distro and has been smooth and stable for years.


I think it's fair, since every one of those distros ships an X11 fallback. I desperately want Wayland to succeed but exaggerating its current wins won't get us there.

I don't want this post to focus on the negative, though, so I'll suggest a more positive argument: the people who would have been responsible for a hypothetical X12 instead decided to make Wayland. I can't think of a body of experts more likely to make a correct decision, so I have confidence in Wayland as the path forward.


I mean, before Wayland every distro shipped with a text console as a fallback to X11


Fair. I'll admit there are a few rough edges, mainly caused by some apps (Slack) having older versions of certain libraries that makes some functionality (like screen sharing) break.


Shipping a fallback doesn’t mean the alternative is not in daily use.


I'm not up to date on the Linux desktop ecosystem. In what sense is Wayland struggling?


Dudemanguy wrote about its deficiencies on 2022-06-11 [0], e.g. lack of feature parity with X11 and self-imposed limitations like only allowing integer scaling (i.e. to get 1.5 scaling, it uses 3x/2x scaling). For some perspective, consider checking other HN readers' reactions to this post [1].

[0] https://dudemanguy.github.io/blog/posts/2022-06-10-wayland-x...

[1] https://news.ycombinator.com/item?id=31752760



It doesn't work reliably on any GPU I own, for any stable version of a linux distro I use. One GPU is too old, the other is too nvidia.


To be fair, I'm on a five-year-old laptop with NVIDIA and since last year it almost works well enough to be a daily driver. For some weird reason Chromium doesn't render at all, even though Chrome does. That's the only remaining bug of significance.

Whereas when I tried a year before I had to bail after an hour because many applications would just have a black screen.

It kind of feels like it will take only one more year for this to work well enough (except then the laptop might be so old that hardware support ends up lacking)


Wayland uses linux's gpu abstraction (drm) to work and that's it. If it fails to work then linux also does, so your setup has some issues.


I have to disable hardware compositing on X11 to get a reliable desktop (and HW rendering in individual apps like firefox). I'm not sure if something similar is possible on Wayland.


It doesn't sound like X11 is running reliably for you either


I restart X11 only when either there's a power failure longer than the battery on my UPS, or I upgrade my kernel, so it's reliable enough.


Having to disable compositing doesn't sound very reliable.


If it works, it works. And some of us never bothered installing a compositor in the first place, so it's hardly a high bar.


Obviously it doesn't work if your workaround is disabling it. It is either bad hardware or a buggy driver. For the latter, it has to be some obscure hardware; popular hardware would have it fixed.


Okay, let's enumerate.

Option 1: Wants to use hardware acceleration, fails, allows you to disable it and actually use your computer.

Option 2: Wants to use hardware acceleration, fails, refuses to allow you to disable anything, literally cannot display graphics.

One of these works, even degraded. The other does not.


I don't dispute that. My claim was that both options you mention are broken, and that for the one "working", "limping" would be a better term.

Certainly not something you would architect a display system around.


I have been using linux for over 20 years and reliable hardware acceleration has always been more "miss" than "hit." This goes all the way back to having to disable hardware cursors on my very first linux setup. I hear the amdgpu driver is pretty solid, and the i915 driver I use on my laptop is great. Nvidia is just a mess (nouveau and the nvidia binary drivers are differently buggy) and the radeon driver is complete garbage.


My first linux machine was a 386 with a Trident 9000, running Slackware, so I'm pretty aware of how linux hardware support developed over time. Maybe I was lucky in picking my hardware, but buggy basic functionality was a big exception (minor bugs were there, like the amdgpu cursor not picking up the same LUT as the framebuffer and being jarringly white compared to the redshifted desktop; incidentally, the windows driver had the same issue at the same time).

Unimplemented functionality - sure. I never got TV out running on a Radeon 7500 (RV200) during the early 2000s, for example. But broken basic functionality (today), like texture mapping freezing on hardware that has a 3d driver implemented and shipped with the distro, no. But then again, maybe I was lucky in my picks.


Not obscure, just old. ATI Radeon HD 5000 series.


That doesn't depend on the protocol; I think most implementations can simply choose a so-called "dumb" backend instead of hardware compositing.


I've been running Wayland for years. It does everything X11 does at this point and is better in some ways.


Wayland is X12.


2013 is when the format was converted to markdown. No idea when the content is from.


Looks like it started in 2007 [0], though the content was pretty different, there were some updates over the years, and the current version is indeed from 2013 [1]

[0]: https://web.archive.org/web/20071123130628/https://www.x.org...

[1]: https://web.archive.org/web/20131222002042/https://www.x.org...


wanted to see if it was committed on april 1st

