It's fascinating to see such work. By the grace of NPR, apparently the "transient inflation" narrative is officially inoperative, and plebeian concerns about the price of eggs are now worthy!
Will wonders never cease?
That's a plausible lag: credible purity figures are not sourced from Mexican drug cartels. They come from laboratories at the end of a long chain of custody complicated by legal machinations, dealing with contraband having no provenance beyond its date of seizure. That it takes only "months" to wend its way through the byzantine and corrupt legal system, and the banker's-hours academic process of laboratory professionals, is actually admirable.
> which a habitual user would compensate for by taking twice as much
Habitual users are operating in a market, seeking value. They cannot afford to simply double their spend, and I'll give you one guess as to how quickly purity drops are reflected by price drops in the narcotics business, because that's all a person of sound mind should need.
No, when the purity dropped, users paid the same and got less, and died less. Believe me, I understand why this finding is unwelcome: it serves to put arrows in the "drug war" quiver, and that is anathema, in my mind as well. But knee-jerk thinking, ultimately, isn't helpful. Further, I have complete faith that the ability of drug dealers and drug users of America to produce disturbing body counts will not be diminished for long.
> They come from laboratories at the end of a long chain of custody complicated by legal machinations, dealing with contraband having no provenance beyond its date of seizure. That it takes only "months" to wend its way through the byzantine and corrupt legal system, and the banker's-hours academic process of laboratory professionals, is actually admirable.
But... this relies on the idea that the purity numbers are based on "time of test" not "date of seizure". This seems like a pretty obvious thing they would have accounted for. Do you have any evidence that the published data for purity levels is delayed by several months?
> this relies on the idea that the purity numbers are based on "time of test" not "date of seizure"
No, the idea doesn't rely on "time of test" vs "date of seizure". There is no real provenance for any of this. There is no auditable trail for when any given batch of narcotics was manufactured, when it appeared in the US, how long it took to disseminate to domestic dealers, when it may have been further cut by domestic dealers, when it was sold, and when it was actually used. Even the seizure dates are dubious, given haphazard and inconsistent law enforcement handling and record keeping. There are also sampling biases, because some legal jurisdictions and law enforcement organizations are more cooperative than others.
All I claimed was that a delay was plausible. I am not obligated to become a narcotics market researcher in defense of my modest claim, and given the nature of all this, no amount of such effort is likely to be sufficient for you in any case.
There may have been an early C without structs (B had none), but according to Ken Thompson, the addition of structs to C was an important change, and a reason why his third attempt to rewrite UNIX from assembly in a portable language finally succeeded. Certainly by the time the recently recovered v4 tape was made, C had structs:
~/unix_v4$ cat usr/sys/proc.h
struct proc {
    char p_stat;
    char p_flag;
    char p_pri;
    char p_sig;
    char p_null;
    char p_time;
    int p_ttyp;
    int p_pid;
    int p_ppid;
    int p_addr;
    int p_size;
    int p_wchan;
    int *p_textp;
} proc[NPROC];
/* stat codes */
#define SSLEEP 1
#define SWAIT 2
#define SRUN 3
#define SIDL 4
#define SZOMB 5
/* flag codes */
#define SLOAD 01
#define SSYS 02
#define SLOCK 04
#define SSWAP 010
In your high level "You might not want to use it if" points, you mention Docker but not why, and that's odd. I happen to know why: io_uring syscalls are blocked by default in Docker, because io_uring is a large surface area for attacks, and this has proven to be a real problem in practice. Others won't know this, however. They also won't know that io_uring is similarly blocked in widely used cloud sandboxes, Android, and elsewhere. Seems like a fine place to point this stuff out: anyone considering io_uring would want to know about these issues.
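(For the curious, a quick way to check a given environment: a minimal probe sketch that calls io_uring_setup(2) directly rather than through liburing. Under Docker's default seccomp profile the syscall is denied, typically surfacing as EPERM.)

/* io_uring availability probe -- a sketch, not production code.
 * Build: cc -o uring_probe uring_probe.c
 * Under Docker's default seccomp profile io_uring_setup(2) is
 * denied (typically EPERM; ENOSYS on kernels lacking io_uring). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

int main(void) {
    struct io_uring_params p;
    memset(&p, 0, sizeof(p));
    long fd = syscall(__NR_io_uring_setup, 8, &p); /* ask for an 8-entry SQ */
    if (fd < 0) {
        printf("io_uring unavailable here: %s\n", strerror(errno));
        return 1;
    }
    printf("io_uring available (ring fd %ld)\n", fd);
    close((int)fd);
    return 0;
}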
Very good point! You’re absolutely right: The fact that io_uring is blocked by default in Docker and other sandboxes due to security concerns is important context, and we should have mentioned it explicitly there. We'll update the post, and happy to incorporate any other caveats you think are worth calling out.
I believe it's possible, but that it's a hard problem requiring great effort. I believe this is an opportunity to apply formal methods à la seL4, that nothing less will be sufficient, and that the value of io_uring is great enough to justify it. That will take a lot of talent and hours.
I admire io_uring. I appreciate the fact that it exists and continues despite the security problems; evidence that security "concerns" don't (yet) have a veto over all things Linux. The design isn't novel: high performance hardware (NICs, HBAs, codecs, etc.) has used similar techniques for a long time. io_uring merely brings this to user space and generalizes it. I imagine an OS and hardware that fully embrace the pattern, obviating the need for context switches, interrupts, blocking, and other conventional approaches we've slouched into since the inception of computing.
Alternatively, it requires cloud providers and such losing business if they refuse to support the latest features.
The "surface area" argument against io_uring can apply to literally any innovation. Over on LWN, there's an article on path traversal difficulties that mentions people how, because openat2(2) is often banned as inconvenient to whitelist using seccomp, eople have to work around path traversal bugs using fiddly, manual, and slow element-by-element path traversal in user space.
Ridiculous security theater. A new system call had a vulnerability in 2010 and so we're never able to take practical advantage of new kernel features ever?
(It doesn't help that gvisor refuses to acknowledge the modern world.)
Great example of descending into a shitty equilibrium because the great costs of a bad policy are diffuse but the slight benefits are concentrated.
The only effective lever is commercial pressure. All the formal methods in the world won't help when the incentive structure reinforces technical obstinacy.
I can't agree with this. There is ample evidence of serious flaws since 2021. I hate that. I wish it weren't true. But an objective analysis of the record demands that view.
Here is a fun one from September (CVE-2025-39816): "io_uring/kbuf: always use READ_ONCE() to read ring provided buffer lengths."
That is an attacker's wet dream right there: bump the length and exfiltrate sensitive data. And it wasn't just some short-lived "Linus's branch" work no one actually ran: it existed for a time in, for example, Ubuntu 24.04 LTS (circa its 2024 release date). I just cherry-picked that one from among many.
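(For anyone unfamiliar with the bug class: the buffer length lives in memory shared with user space, so reading it twice is a race. A compilable sketch of the pattern follows; the names and structure are mine, illustrative only, not the kernel's actual code.)

#include <stdio.h>
#include <string.h>

#define MAX_LEN 64

/* A length field in memory shared with an untrusted party. */
struct shared_hdr {
    volatile unsigned int len; /* the other side can change this at any time */
    char data[256];
};

/* BUGGY: validates one read of len, then uses a *second* read. Between
 * the two fetches the other side can bump len past MAX_LEN and the
 * copy runs long: the TOCTOU / info-leak setup behind this CVE class. */
void copy_buggy(char *dst, struct shared_hdr *s) {
    if (s->len <= MAX_LEN)
        memcpy(dst, s->data, s->len); /* second, unchecked fetch */
}

/* FIXED: snapshot len exactly once (in the kernel, READ_ONCE() forces
 * the single load), validate the snapshot, use only the snapshot. */
void copy_fixed(char *dst, struct shared_hdr *s) {
    unsigned int len = s->len; /* single fetch */
    if (len <= MAX_LEN)
        memcpy(dst, s->data, len);
}

int main(void) {
    static struct shared_hdr hdr = { .len = 8, .data = "example" };
    char out[MAX_LEN];
    copy_fixed(out, &hdr);
    printf("copied: %s\n", out);
    (void)copy_buggy; /* shown for contrast, not called */
    return 0;
}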
Maybe not so doable. The whole point of io_uring is to reduce syscalls, so you end up with just three: io_uring_setup, io_uring_register, and io_uring_enter.
There is now a memory buffer that both user space and the kernel are reading, and through that buffer you can _always_ perform any operation that io_uring supports. And things like strace, eBPF, and seccomp cannot see the actual syscall-equivalent operations being requested through that memory buffer.
And, having something like seccomp or eBPF inspect the stream might slow it down enough to eat the performance gain.
There is some ongoing research on eBPF and io_uring that you might find interesting, e.g., RingGuard: Guarding io_uring with eBPF (https://dl.acm.org/doi/10.1145/3609021.3609304).
Aren't eBPF hooks there so you can limit what a cgroup/process can do, no matter what API it's calling? Like disallowing opening files or connecting sockets altogether.
No. A batch of submission queue entries (SQEs) can be partially completed, whereas an ACID database transaction is all or nothing. The syscalls performed by SQEs have side effects that can't reasonably be undone. Failures of operations performed by SQEs don't stop or rollback anything.
Think of io_uring as a pair of unidirectional pipes. You shove syscalls and (pointers to) data into one pipe and the results (asynchronously) gush out of the other pipe, errors and all. Each pipe is actually a separate block of memory shared between your process and the kernel: you scribble in one and read from the other, and the kernel does the opposite.
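(A minimal sketch of that round trip using liburing, assuming liburing is installed; build with cc nop.c -luring. It scribbles a single no-op into the submission ring and reads its completion out of the other:)

#include <stdio.h>
#include <string.h>
#include <liburing.h>

int main(void) {
    struct io_uring ring;
    /* set up both shared rings with an 8-entry submission queue */
    int ret = io_uring_queue_init(8, &ring, 0);
    if (ret < 0) {
        fprintf(stderr, "io_uring_queue_init: %s\n", strerror(-ret));
        return 1;
    }

    /* shove one operation (a no-op) into the submission pipe */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_nop(sqe);
    io_uring_sqe_set_data(sqe, (void *)0x42); /* tag to match it later */
    io_uring_submit(&ring);                   /* one io_uring_enter(2) */

    /* read the result out of the completion pipe */
    struct io_uring_cqe *cqe;
    if (io_uring_wait_cqe(&ring, &cqe) == 0) {
        printf("completion: res=%d tag=%p\n",
               cqe->res, io_uring_cqe_get_data(cqe));
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    return 0;
}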
Andrew Grove, in his later years, held the same view. His explanation covers essentially everything we had seen till then and have seen since: the decline of US semiconductor manufacturing, the loss of talent, critical dependence on foreign nations and companies, etc.
It turns out you can't cherry pick the intellectual work and fob the rest off on foreign supply and still maintain global leadership and domestic prosperity. The whole stack must be at least competitive domestically. Only trade policy can achieve this.
It shouldn't. It's never really been perfected across native GUI APIs, even after 40+ years: just various degrees of "good enough," plus fobbing it off to web stacks.
Anyhow, I've been playing with gioui, which is Go rendering into a lightweight <canvas>-like surface. Really nice: a fast, small, cross-platform GUI with just Go. Scale expectations appropriately.
What is the risk calculation one would perform before attempting to invade Taiwan while Trump is calling the shots? Whatever else you think about Trump, for better or worse, he is not bound by establishment prerogatives: make the "wrong" move, as Trump exclusively defines it, and anything — literally any conceivable thing plus a distant horizon of things you are cognitively incapable of conceiving — might happen.
Maduro is in a cage somewhere pondering this right now. Iran's leaders are all thinking about the threats Trump made not 48 hours ago, possibly to the great benefit of rebels in the streets right now. Federal investigators are closing in on Walz and friends in Minnesota right now: he could find himself in a cell within earshot of Maduro at any time.
Garage looks really nice: I've evaluated it with test code and benchmarks and it looks like a winner. Also, very straightforward deployment (self contained executable) and good docs.
But no tags on objects is a pretty big gap, and I had to shelve it. If Garage folk see this: please think on this. You obviously have the talent to make a killer application, but tags are table stakes in the "cloud" API world.
I really, really appreciate that Garage accommodates running as a single node without work-arounds and special configuration to yield some kind of degraded state. Despite the single-minded focus on distributed operation you no doubt hear endlessly (as seen among some comments here), there are, in fact, traditional use cases where someone will be attracted to Garage only for the API compatibility, and where they will achieve availability in production sufficient to their needs by means other than clustering.
Arbitrary name+value pairs attached to S3 objects and buckets, and readily available via the S3 API. Metadata, basically. AWS has some tie-ins with permissions and other features, but tags can be used for any purpose. You might encode video multiple times at different bitrates, and store the rate in a tag on each object, for example. Tags are an affordance used by many applications for countless purposes.
Thanks! I understand what tags are, but not what an "object" was in this context. Your example of multiple encodings of the same video seems very good.
Why not? SMB is no slouch. Microsoft has taken network storage performance very seriously for a long time now. Back in the day, Microsoft and others (NetApp, for instance) worked hard to extend and optimize SMB and deliver efficient, high throughput file servers. I haven't kept up with the state of the art recently, but I know there have been long stretches where SMB consistently led the field in benchmark testing. It also doesn't hurt that Microsoft has a lot of pull with hardware manufacturers to see their native protocols remain tier 1 concerns at all times.
I think a lot of people have a hard time differentiating the underlying systems from what they _see_, and use the latter to bash MS products.
I heard that it was perhaps recently fixed, but copying many small files used to be multiple times faster via something like Total Commander vs the built-in File Explorer (large files go equally fast).
People seeing how slow Explorer was to copy would probably presume it was a lower-level Windows issue if they had a preexisting bias against Microsoft/Windows.
My theory about Explorer's sluggishness is that they added visual feedback to the copying process at some point, and for whatever reason that visual feedback is synchronous/slow (perhaps capped at the framerate, thus 60 files a second), whilst TC does its updating in the background and just renders status periodically, so the copying thread(s) can run at the full speed the OS is capable of under the hood.
I dunno about Windows Explorer, but macOS's Finder seems to hash completed transfers over SMB (this must be something it can trigger the receiver to do in SMB itself; it doesn't seem slow enough for the sender to be doing it on a remote file) and remove transferred files that don't pass the check.
I could see that or other safety checks making one program slower than another that doesn’t bother. Or that sort of thing being an opportunity for a poor implementation that slows everything down a bunch.
A problem with Explorer, one it shares with macOS Finder[1], is that they are very much legacy applications with features piled on top. Explorer was never expected to be used for heavy I/O work and tends to do things the slowest way possible, including doing things in ways optimized for the "random first-time user of Windows 95 who will have maybe 50 files in a folder".
[1] Finder has parts that show continued use of code written for MacOS 9 :V
This blows my mind. $400B in annual revenue and they can't spare the few parts per million it would take to spruce up the foundation of their user experience.
This is speculation based on external observation, nothing internal other than rumours:
A big chunk of that, increasing over the last decade, is fear that they will break compatibility, or an outright drop in shared knowledge. To the point that the more critical the part, the less anyone wants to touch it (I've heard that ntfs.sys is essentially untouchable these days, for example).
And various rules that used to be sacrosanct are no longer followed, like the "main" branch of the Windows source repository having to always build cleanly every night (fun thing - Microsoft is one of the origins of nightly builds as a practice).
Fewer people are trusted to touch ntfs.sys due to lack of experience, thus they never gain it; that in turn means less work on it, which in turn means even fewer people have proved themselves trustworthy enough to work on it.
Until nobody remains in the company that is trusted enough.
Microsoft gives them a lot of ammo. While, as I said, Microsoft et al. have seen to it that SMB is indeed efficient, at the same time security has been neglected to the point of being farcical. You can see this in headlines as recent as last week: Microsoft is only now, in 2025, deprecating RC4 authentication, and this includes SMB.
So while one might leverage SMB for high throughput file service, it has always been the case that you can't take any exposure for granted: if it's not locked down by network policies and you don't regularly ensure all the knobs and switches are tweaked just so, it's an open wound, vulnerable to anything that can touch an endpoint or sniff a packet.
Agreed, but that used to be the difference between MS and Google.
MS would bend over backwards to make sure those enterprise Windows 0.24 boxes will still be able to connect to networks, because those run some 16-bit drivers for CNC machines.
Meanwhile Google decided to kill a product the second whoever introduced it on stage walked off it.
Azure is a money-maker for MS, and wouldn't be so without those weird legacy enterprise deployments. The big question is whether continuing to tighten its security posture together with a "cloud" focus is actually in their best interest, or if retaining those legacy enterprises would have been smarter.
I have a cheap samsung from 5 years ago that pops up a dialog when it boots. I've never read it or agreed to it. It goes away after about 5 seconds. After that I stream using HDMI and all is well. It's also never been connected to a network.
Can't say what other TVs do, but this one works fine without TOS etc. If there is some feature or other that doesn't work due to this, I can say I've never missed it.