The fun is in the journey, not necessarily the destination, to me. Should the goal of dancing be to get it over with as quickly as possible too? Should the ideal piece of music you play be a single crashing chord so you can get done with playing the piano? To me these are direct analogues.
Sure, there are some boring rote parts of coding, just like sawing boards might not be the most enjoyable part of woodworking. I guess you could use AI as the analog of power tools (would that be using AI to generate awk and jq command lines?). But I wouldn't want to use a CNC router and take all the fun and skill out of the craft, nor do I find agentic AI enjoyable.
And AIs fail badly anyway when you are doing things not found much online, e.g. in embedded microcontroller development (which I do) or with company internal frameworks.
>> The fun is in the journey, not necessarily the destination, to me.
I fully respect this. Lots of craftspeople love to work with wood by hand whilst factories build furniture on an industrial scale.
>> AIs fail badly anyway when you are doing things not found much online, e.g. in embedded microcontroller development (which I do)
But you are very wrong about embedded systems development and AI. I do a huge amount of microcontroller programming and AI is a massive productivity multiplier and the LLMs are extremely good at it.
> But you are very wrong about embedded systems development and AI. I do a huge amount of microcontroller programming and AI is a massive productivity multiplier and the LLMs are extremely good at it.
Probably true if you use an SDK with lots of examples online. I have worked with embassy in Rust, and the AIs were not good there, nor with a company-internal SDK in C++ at work. They frequently hallucinate non-existent methods or get the parameters wrong. And for larger systems (e.g. targeting embedded Linux) they can't keep enough context in their head to work with large (millions of lines) existing proprietary code bases. They make mistakes and you end up with unmaintainable soup. Humans can learn on the job; AIs can't do that yet.
Linux would support it for sure. It even still has support for several old NICs (it was only the other day I saw a news item about some old protocol from the early 90s finally being removed). But I can imagine no one wants to develop a new such driver.
And if you want to sell to consumers you need Windows and Mac support, and then it's easier to just adapt to existing interfaces.
Couldn't they just list link speed and average speed (however that is measured, before or after protocol overhead for example) as two separate lines on the product page?
> However the big difference (in most languages) is that functions can take arbitrarily long. Array access either succeeds or fails quickly.
For some definition of quick. Modern CPUs are usually bottlenecked by memory bandwidth and cache size, so a function that recomputes the value can often be quicker than a lookup table, at least outside of microbenchmarks (since in a microbenchmark you don't have to compete with the rest of the code base for cache usage).
Of course this depends heavily on how expensive the function is, but it is worth keeping in mind that memory is not necessarily quicker than computing again. If you need to go to main memory, that is on the order of a hundred ns you could spend recomputing the value instead, which at 2 GHz translates to 200 clock cycles. What that means in terms of number of instructions depends on the instruction mix, how many execution units you can saturate, whether the CPU can predict and prefetch memory accesses, branch prediction, etc. But RAM is really slow. Even with cache you are talking single-digit ns to tens of ns (depending on whether it is L1, L2 or L3).
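As a toy illustration of the tradeoff (a hypothetical example, not from any real codebase): an 8-bit popcount can be done with a 256-entry table, which costs one load that is fast only while the table stays hot in cache, or with a handful of ALU operations that never touch memory at all.

```rust
// Hypothetical example: 8-bit population count two ways.

// Table version: a single load. Fast if the table is hot in L1,
// potentially hundreds of cycles on a miss to main memory.
fn build_table() -> [u8; 256] {
    let mut t = [0u8; 256];
    for i in 0..256usize {
        let mut b = i;
        let mut c = 0u8;
        while b != 0 {
            c += (b & 1) as u8;
            b >>= 1;
        }
        t[i] = c;
    }
    t
}

// Recompute version: a few ALU ops, no memory traffic, so it can
// never miss the cache or evict anything else from it.
fn popcount_alu(x: u8) -> u8 {
    let x = x - ((x >> 1) & 0x55);          // counts per 2-bit pair
    let x = (x & 0x33) + ((x >> 2) & 0x33); // counts per nibble
    (x + (x >> 4)) & 0x0F                   // total for the byte
}
```

Both return identical results; which one wins in a real program depends entirely on whether the table stays resident in cache alongside everything else.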
I've been watching those Kaze Emanuar videos on his N64 development, and it's always so weird to me when "doing the expensive computation again" is cheaper than "using the precomputed value". I'm not disputing it, he seems to have done a lot of research and testing confirming the results and I have no reason to think he's lying, but it's so utterly counter-intuitive to me.
I haven't looked into N64, but the speed of CPUs has been growing faster than the speed of RAM for decades. I'm not sure when exactly that started, probably some time in the late 80s or early 90s, since that is about when PCs started getting cache memory I believe.
I wonder if a turning point was out-of-order execution. Many computations would use some values from memory plus others that could be computed, and out-of-order execution would allow the latter to proceed while waiting on memory for the former. That would improve utilization and be a 'win' even if the recomputation in isolation were no faster than the memory load.
That is definitely the wrong thing to do. It isn't rude to use the bell, and as a pedestrian I appreciate a single ring (obviously, don't ring like a madman either). Playing music loudly in a public space is way more rude than using your bicycle bell.
Too many close calls with inattentive pedestrians in my area. I ring, no move, or worse, they get startled, and turn around into the middle of the bike lane. If I have to choose between coming off as rude and keeping my brain enclosed, I know what to do.
You need to ring when you are still some way away, so they have time to react, and if they don't you can slow down and ring a second time. And travel at an appropriate speed for the location.
(I have an ebike, so this is especially important. Mine is a legal one: 25 km/h max, 250 W, etc. If yours is faster, this is even more important.)
i like to sing "geeettt outt of the biiiiike laaaannne!" as loud as possible with my big fat tenor voice as i ride past them ringing my bell repeatedly the entire time. a single bell ring never seems to get anyone's attention
A watchdog is a piece of hardware that will automatically restart the chip if it detects the code as being stuck. The way it detects this is that you have to poke a register of the watchdog every so often, and if the register hasn't been poked for a certain timeout (usually configurable), the chip is restarted.
Watchdogs exist on MCUs but also on some "proper" computers. The Raspberry Pi has one for example.
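A toy software model of the mechanism (illustrative only; a real watchdog is a hardware peripheral you poke through memory-mapped registers, and everything here is made up for the sketch):

```rust
// Toy model of a hardware watchdog timer. On real hardware the
// counter runs in silicon and "fired" would be a chip reset.
struct Watchdog {
    timeout: u32, // ticks allowed between pokes
    counter: u32, // ticks since the last poke
    fired: bool,  // would trigger a reset on real hardware
}

impl Watchdog {
    fn new(timeout: u32) -> Self {
        Watchdog { timeout, counter: 0, fired: false }
    }

    // The application must call this regularly to prove it is alive.
    fn poke(&mut self) {
        self.counter = 0;
    }

    // Advances the watchdog clock; fires if the poke is overdue.
    fn tick(&mut self) {
        self.counter += 1;
        if self.counter > self.timeout {
            self.fired = true;
        }
    }
}
```

The key property is that only forward progress through the main loop keeps the reset at bay: a hung loop stops poking and the watchdog fires on its own.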
There's generally at least one watchdog device available in most PCs delivered in the last decade, but it's not always utilized. Essentially, at one point an Intel southbridge integrated a basic watchdog on all models, and it started to just... be included.
So these days you can find a variation on the TCO timer watchdog in most PCs, even if the exact implementation varies so we now have a bunch of drivers for the different variants.
Linux doesn't see one on my Ryzen 5600X desktop at least. My Intel Skylake Thinkpad does seem to have two though (iTCO as well as INT3F0D, not sure what that is, but if I interpret the files under /sys correctly it belongs to the LPC/eSPI controller PCIe device, while the TCO watchdog is found under the SMBus PCIe device).
In both cases they do have software watchdogs (NMI-based), which rely on a hardware timer triggering an NMI in the kernel. But that relies on the NMI handler still working, which is not as good as a real HW watchdog.
Apparently it depends a little on how the motherboard is designed: the SP5100 watchdog, which is part of the CPU logic in recent Ryzens, is theoretically supposed to be enabled if the motherboard is designed with IPMI in mind.
For whatever reason, it's enabled on my laptop despite it obviously not having IPMI support :)
“All CPUs” would probably be 99.9999% accurate. It’s just one of those fundamental functions you want in a processor. Whether it’s exposed in the OS is a different matter.
AMD doesn't have it. I just confirmed by grepping through dmesg and journalctl -b, the only time it appears is due to UPS driver notifications (unrelated).
If you had enough users and demonstrated the ability to securely manage a PKI, then I don’t see why not. But if it’s just you on a server in your garage, then there would be no advantage to either you or to the ecosystem for PyPI to federate with your server.
That’s why API tokens are still supported as a first-class authentication mechanism: Trusted Publishing is simply not a good fit in all possible scenarios.
> if it’s just you on a server in your garage, then there would be no advantage to either you or to the ecosystem for PyPI to federate with your server.
Why not leave the decision of which providers to trust to users, instead of having a centrally managed global allowlist at the registry? Why should the registry admin be the one to decide who is fit to publish for each and every package?
> Why not leave the decision of which providers to trust to users, instead of having a centrally managed global allowlist at the registry?
We do leave it to users: you can always use an API token to publish to PyPI from your own developer machine (or server), and downstreams are always responsible for trusting their dependencies regardless of how they’re published.
The reason Trusted Publishing is limited at the registry level is because it takes time and effort (from mostly volunteers) to configure and maintain for each federated service, and the actual benefit of it rounds down to zero when a given service has only one user.
> Why should the registry admin be the one to decide who is fit to publish for each and every package?
Per above, the registry admin doesn’t make a fitness decision. Trusted Publishing is an optional mechanism.
(However, this isn’t to say that the registry doesn’t reserve this right. They do, to prevent spamming and other abuse of the service.)
They’re running the most popular registry but nothing says you can’t use your own to implement whatever policy you want. The default registry has a tricky balance of needing to support inexperienced users while also only having a very modest budget compared to the companies which depend on it, and things like custom authentication flows are disproportionately expensive.
They seem to handle account signups with email addresses from unknown domain names just as well as ones from hotmail.com and gmail.com. I don't see how this is any different.
The whole point of standards like OIDC (and supposedly TP) is that there is no need for provider-specific implementations or custom auth flows as long as you follow the spec and protocol. It's just some fields that can be put in a settings UI configurable by the user.
It’s completely different. An email signup doesn’t involve a persistent trust relationship between PyPI and an OIDC identity provider. The latter imposes code changes, availability requirements, etc.
(But also: for completely unrelated reasons, PyPI can and will ban email domains that it believes are sources of abuse.)
According to their docs, they "have high standards for overall reliability and security in the operation of a supported Identity Provider: in practice, this means that a home-grown or personal use IdP will not be eligible."
If you think your setup meets those standards, you'll need to use Microsoft (TM) GitHub (R) to contact them.
Back when I started with PyPI, manual upload through the web interface was the only possibility. Have they gotten rid of that?
My understanding is that "trusted publishing"[0] was meant as an additional alternative to that sort of manual processing. It was never decentralized. As I recall, the initial version only supported GitHub and (I think) GitLab.
[0] I do not trust Microsoft as an intermediary to my software distribution. I don't use Microsoft products or services, including GitHub.
Yes, this makes contacting PyPI support via GitHub impossible for me. That is one of the reasons I stopped using PyPI and instead distribute my wheels from my own web site.
You could check that mangled symbols match, and have static tables with hashes of structs/enums to make sure layouts match. That should cover low level ABI (though you would still have to trust the compiler that generated the mangling and tables).
A significantly thornier issue is making sure any types with generics match: e.g. if I declare a struct with some generic and some concrete functions, and this struct also has private fields/methods, those private details (which are currently irrelevant for semver) would affect ABI stability. And the tables mentioned in the previous paragraph might not be enough to ensure compatibility: a behaviour change could break how the data is interpreted.
So at minimum this would redefine what counts as a semver-compatible change to be much more restrictive, and it would be harder to have automated checks (like those cargo-semver-checks performs). As a Rust developer I would not want this.
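A minimal sketch of the private-detail hazard (hypothetical types, not from any real crate): adding a private field is a semver-compatible change under today's rules, yet it changes the layout that a stabilized ABI would have to preserve.

```rust
use std::mem::size_of;

// Hypothetical: both structs expose the identical public API
// (fields x and y), so shipping V2 in place of V1 is a valid
// semver-minor change today -- but the size and layout differ,
// which would break any downstream binary compiled against V1
// if the ABI were stabilized.
pub struct PointV1 {
    pub x: f32,
    pub y: f32,
}

pub struct PointV2 {
    pub x: f32,
    pub y: f32,
    #[allow(dead_code)]
    cached_len: f64, // private field added in a "minor" release
}
```

Nothing in the public signatures hints at the difference; only the layout (and hence the ABI) changes.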