The author’s blog was on HN a few days ago as well, for an article on SBOMs and lockfiles. They’ve done a lot of work on the supply-chain security side and are clearly knowledgeable, and yet the blog post got similarly “fuzzified” by the LLM.
> PEP 658 went live on PyPI in May 2023. uv launched in February 2024. uv could be fast because the ecosystem finally had the infrastructure to support it. A tool like uv couldn’t have shipped in 2020. The standards weren’t there yet.
In 2020 you could still have had a whole bunch of performance wins before the PEP 658 optimization. There's also the "HTTP range requests" optimization, which is the next best thing. (And the uv tool itself is really good, with "uv run" and "uv python".)
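To make the PEP 658 point concrete, here is a minimal sketch of what a resolver gains from it: the index's JSON response advertises a separately fetchable METADATA file, so the resolver never has to download the whole wheel just to read dependencies. The JSON excerpt and URLs below are invented for illustration; only the `core-metadata` key and the `.metadata` URL convention come from PEP 658/691.

```python
import json

# Hypothetical excerpt of a PyPI Simple API JSON (PEP 691) response.
# The "core-metadata" key (PEP 658) tells a resolver it can fetch just
# the METADATA file -- file URL plus ".metadata" -- instead of
# downloading and unzipping the entire wheel.
sample = json.loads("""
{"files": [
  {"url": "https://example.invalid/requests-2.31.0-py3-none-any.whl",
   "core-metadata": {"sha256": "abc123"}},
  {"url": "https://example.invalid/oldpkg-1.0-py3-none-any.whl"}
]}
""")

def metadata_url(file_entry):
    """Return the PEP 658 metadata URL if the index advertises one."""
    if file_entry.get("core-metadata"):
        return file_entry["url"] + ".metadata"
    return None  # fall back to HTTP range requests or a full download

urls = [metadata_url(f) for f in sample["files"]]
```

When `core-metadata` is absent (the second file above), a client can still use the range-request trick: read the zip's central directory from the end of the wheel and pull out only the METADATA member.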
> What uv drops: Virtual environments required. pip lets you install into system Python by default. uv inverts this, refusing to touch system Python without explicit flags. This removes a whole category of permission checks and safety code.
pip also refuses to touch system Python without explicit flags?
For uv, there are flags that allow it, so it doesn't really "remove a whole category of permission checks and safety code"? uv still has "permission checks and safety code" to check whether it's system Python. I don't think uv has "dropped" anything here.
> Optimizations that don’t need Rust: Python-free resolution. pip needs Python running to do anything.
This seems to be implying that Python is inherently slow, so yes, this optimization requires a faster language? Or maybe I don't get the full point.
> Where Rust actually matters: No interpreter startup. ... uv is a single static binary with no runtime to initialize.
How can they claim fetching from a single index magically solves dependency confusion attacks, when in reality it makes the attack much more trivial and likely to succeed? Typical LLM syncopation.
> uv picks from the first index that has the package, stopping there. This prevents dependency confusion attacks and avoids extra network requests.
As long as the "first" index is e.g. your organization's internal one, that does ensure that some random thing on PyPI won't override it. A tool that checks every index first still has to have the right rule to choose one.
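The first-match-wins rule being debated here is simple enough to sketch. This is a toy model with invented index URLs and package names, not uv's actual implementation; it just shows why an internal index listed first shadows a same-named package on the public index.

```python
def pick_index(package, indexes, has_package):
    """First-match-wins: return the first index, in configured priority
    order, that serves `package`. A later index never gets consulted,
    which is what blocks a public package from shadowing an internal one."""
    for index in indexes:
        if has_package(index, package):
            return index
    return None

# Toy availability map standing in for network lookups.
available = {
    "https://pypi.internal.example": {"acme-utils"},
    "https://pypi.org/simple": {"acme-utils", "requests"},
}
lookup = lambda idx, pkg: pkg in available[idx]

indexes = ["https://pypi.internal.example", "https://pypi.org/simple"]
internal_pick = pick_index("acme-utils", indexes, lookup)
public_pick = pick_index("requests", indexes, lookup)
```

The whole defence hinges on the ordering of `indexes` being trustworthy; a tool that merges candidates from every index instead needs some other rule (pins, hashes, namespace ownership) to make the same guarantee.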
It is, however, indeed a terrible point. I don't think I've even seen evidence that pip does anything different here. But it's the sort of problem best addressed in other ways.
By "syncopation" perhaps you mean "sycophancy"? I don't see how musical rhythms are relevant here.
Supporting pre-compiled native binary packages. Look at the manylinux effort, e.g. https://github.com/pypa/manylinux, which spans a decade of ensuring that glibc changes did not break Python wheels. Every other language required a full build pipeline for binary extensions: node-gyp, for example. Even where it was supported (such as in PECL), compatibility with the language and system was always left as an afterthought and underspecified. The end result was that the only way to "safely" install binary extensions for most languages was via distros or third-party repositories, but never via the package registry.
We map strictly to Schema.org for all transactional data (Price, Inventory, Policies). This ensures legal interoperability.
But Schema.org describes what a product is, not how to sell it.
So we extend it. We added directives like @SEMANTIC_LOGIC for agent behavior. We combine standard definitions for safety with new extensions for capability.
It targets Consumer Protection and Truth-in-Advertising laws globally.
The 'compliance bit' is Price Transparency.
If an AI quotes a price as 'final' but checkout adds hidden fees or tax, that is a deceptive practice.
Our spec enforces fields like TaxIncluded and TaxNote. It instructs the Agent to disclose whether the price is net or gross.
It prevents the AI from accidentally committing fraud via misleading omissions.
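As a concrete illustration of the approach described above, here is what such an extended Offer might look like as JSON-LD, built as a Python dict. Only `Offer`, `price`, and `priceCurrency` are standard Schema.org; `TaxIncluded`, `TaxNote`, and `@SEMANTIC_LOGIC` are the extensions the poster names, and the shape I give them here is an assumption, not a published spec.

```python
import json

# Hedged sketch: a Schema.org Offer extended with the price-transparency
# fields described above. The extension keys are assumed structures,
# not part of the Schema.org vocabulary.
offer = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "EUR",
    "TaxIncluded": True,             # extension: the quoted price is gross
    "TaxNote": "Includes 19% VAT",   # extension: disclosure text for the agent
    "@SEMANTIC_LOGIC": {             # extension: hypothetical agent directive
        "disclose": ["TaxIncluded", "TaxNote"],
        "quote_as_final": True,
    },
}

doc = json.dumps(offer, indent=2)
```

The point of the two tax fields is that an agent relaying this offer can truthfully say "€19.99, VAT included" rather than quoting a net price as final.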