Ethernet can be run over copper or fiber cabling; it's not an alternative to fiber networking. Assuming you meant "what's the advantage of fiber over copper": fiber gives you faster speeds, longer distances, and lower power, plus it's not electrically coupled.
(speeds: 100 gig today, but faster speeds are coming.)
PVC used in water pipes had some surprises: the lifespan turned out to be shorter than the optimistic expectations when PVC pipes first came to market. 100 years might be hard to test for...
Depends on your definition of graceful, but the C standard doesn't preclude handling it, and there are POSIX interfaces such as sigaltstack / sigsetjmp etc. that fit. Indeed, some code, like language runtimes, uses this to react to stack exhaustion (having first set up guard pages etc.).
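A minimal sketch of that pattern, assuming a POSIX system where the overflow lands on a guard page (names and sizes here are just illustrative):

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf recover_point;
    static char altstack[64 * 1024];       /* the handler needs its own stack */

    static void on_segv(int sig) {
        (void)sig;
        /* We're running on the alternate stack, so there's room to unwind.
           siglongjmp out of a SIGSEGV handler is the classic runtime trick,
           though strictly speaking it leans on platform guarantees. */
        siglongjmp(recover_point, 1);
    }

    static int deep(int n) {               /* deliberately exhaust the stack */
        volatile char pad[4096];
        pad[0] = (char)n;
        return deep(n + 1) + pad[0];
    }

    int main(void) {
        stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack, .ss_flags = 0 };
        sigaltstack(&ss, NULL);

        struct sigaction sa = { 0 };
        sa.sa_handler = on_segv;
        sa.sa_flags = SA_ONSTACK;          /* run the handler on the alternate stack */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        if (sigsetjmp(recover_point, 1) == 0)
            deep(0);
        else
            puts("recovered from stack exhaustion");
        return 0;
    }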
Clojure CLI (aka deps.edn) came out in 2018, and in the survey's "how do you manage your dependencies?" question it crossed 50% usage in early 2020. So it's been around for 6-8 years now.
WebGL and WebGPU must robustly defend against malicious web content making the API calls, just like other JavaScript APIs in the browser, which adds some overhead and meant leaving out some features of the underlying APIs.
Vulkan has also evolved a lot, and WebGPU doesn't want to require newer Vulkan features, so it lacks, for example, bindless textures, ray tracing, etc.
All APIs must robustly defend against malicious content; this is not something unique to WebGL and WebGPU.
Programs can use Vulkan, D3D, OpenGL, OpenCL, etc. to, for example, read memory that isn't in your program's address space via the GPU/driver/OS not properly handling pointer provenance. Also, IOMMUs are not always set up correctly, and they are not bug free either, e.g. Intel's 8 series.
Using hardware to attack hardware is not new, and not a uniquely web issue.
> All APIs must robustly defend against malicious content; this is not something unique to WebGL and WebGPU.
This is not the case for C/C++ APIs. A native-code application using your API can already execute arbitrary code on your computer, so the library implementing e.g. OpenGL is not expected to be a security boundary. It does not need to defend against a caller exploiting memory-safety bugs for RCE, info leakage, etc., for example by passing booby-trapped pointers or crafted inputs designed to trigger bugs in the API's internals.
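To illustrate the trust model, here's a hedged sketch; the function name is made up, in the spirit of a buffer-upload entry point like glBufferData:

    #include <string.h>

    /* A native graphics library gets raw pointers and sizes from its caller.
       It has no way to verify that src really points at len readable bytes,
       and it doesn't need to: the caller already runs native code in the
       same process and could read or corrupt that memory directly anyway. */
    void fake_buffer_upload(void *dst, const void *src, size_t len) {
        memcpy(dst, src, len);
    }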
The kernel-side stuff is of course supposed to be more robust, but it also contains a much smaller amount of code than the user-facing graphics API. And robustness there is not taken as seriously because those aren't directly internet-facing interfaces, so browsers can't rely on the correctness of any protections there.
Which brings us to: drivers throughout the stack are generally very buggy, and WebGL/WebGPU implementations also have to take responsibility for preventing exploitation of those bugs by web content, sometimes at a rather big performance cost.
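As a rough illustration of the kind of re-validation that has to happen on every call coming from web content (a hedged sketch with made-up names, not Chromium's actual code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Sizes and offsets arrive from JS and must be treated as hostile.
       The check is written to avoid integer overflow before any driver
       memory is touched; the same pattern repeats for index ranges,
       texture regions, bind group layouts, and so on. */
    bool validated_buffer_write(uint8_t *gpu_buf, size_t gpu_buf_size,
                                size_t dst_offset,
                                const uint8_t *js_data, size_t js_len) {
        if (dst_offset > gpu_buf_size || js_len > gpu_buf_size - dst_offset)
            return false;   /* surfaces as a validation error, not a crash */
        memcpy(gpu_buf + dst_offset, js_data, js_len);
        return true;
    }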
To see what it's like, you might browse https://chromereleases.googleblog.com/ and search for WebGPU and WebGL mentions and bug bounty payouts in the vulnerabilities, such as:
[$10000][448294721] High CVE-2025-14765: Use after free in WebGPU.
[TBD][443906252] High CVE-2025-12725: Out of bounds write in WebGPU.
[$25000][442444724] High CVE-2025-11205: Heap buffer overflow in WebGPU.
[$15000][1464038] High CVE-2023-4072: Out of bounds read and write in WebGL.
[$TBD][1506923] High CVE-2024-0225 (WebGPU)
etc.
C/C++ memory safety is hard, even when you're the biggest browser vendor trying your hardest to safely expose C APIs through JS bindings.
Earlier there was a constant stream of WebGL vulnerabilities as well, before WebGPU became the more lucrative target for bug bounties.
When talking about range I mean not naively remapping it to fit the display medium, but compressing it to fill the range of the medium in a way that achieves the desired look. (Accounting for the non-linearity of colour vision, mentioned in the article, is part of that.)
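One simple, hedged illustration of the idea: a Reinhard-style tone curve to compress the range, followed by a gamma encode to account for the non-linearity. This is a generic sketch, not the article's specific method:

    #include <math.h>

    /* Compress [0, inf) luminance into [0, 1): highlights roll off smoothly
       instead of clipping, which is what "fill the range of the medium"
       is getting at. */
    static float tonemap_reinhard(float linear_luminance) {
        return linear_luminance / (1.0f + linear_luminance);
    }

    /* Approximate sRGB-style gamma encode: spends more code values on the
       darker tones, where vision is more sensitive. */
    static float encode_gamma(float linear01) {
        return powf(linear01, 1.0f / 2.2f);
    }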
That is roughly the number of new requests per second, but these are not just light web requests.
The git transport protocol is "smart" in a way that is arguably rather dumb in some respects. It's certainly expensive on the server side. All of its smartness is aimed at reducing the amount of transfer and the number of connections, but to do that it shifts a considerable amount of work onto the server in choosing which objects to send you.
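Roughly what that negotiation looks like on the wire (a simplified pkt-line trace with capability advertisement omitted; the object IDs are placeholders):

    client: 0032want aa11aa11aa11aa11aa11aa11aa11aa11aa11aa11
    client: 0032have bb22bb22bb22bb22bb22bb22bb22bb22bb22bb22
    client: 0009done
    server: ACK bb22bb22bb22bb22bb22bb22bb22bb22bb22bb22
    server: <packfile>   # the server walks the object graph, works out which
                         # objects the client is missing, and typically
                         # generates the pack on the fly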
If you benchmark the resource loads of this, you probably won't be saying a single server is such an easy win :)
Using the slowest clone method they measured 8s for a 750 MB repo and 0.45s for a 40 MB repo. That appears to be roughly linear, about (8 - 0.45) / (750 - 40) ≈ 0.011 s/MB, so interpolating to 100 MB gives 0.45 + 60 × 0.011 ≈ 1.1s.
And remember we're using worst-case assumptions in places (the slowest clone method, and numbers from old hardware). In practice I'd bet a fastish laptop would suffice.
edit: actually, on closer look at the GitHub-reported numbers the interpolation isn't straightforward: on the bigger 750 MB repo the partial clone is actually said to be slower than the base full clone. However, this doesn't change the big picture that it'll easily fit on one server.
... or a cheaper one, as we would be using only tens of cores in the above scenario. Or you could use a slice of an existing server via virtualization.