Go look at profiles for programs which have been written with performance in mind. Operating systems, databases, game engines, web servers, some compilers, video/audio/3d editing packages come to mind. I 100% guarantee these programs do not spend the majority of their runtimes in a tiny section of code. What you said is nearly universally untrue, at least for programs that care about real performance.
I do write and profile software of that kind, and this experience is why I know this isn't a myth. Any mature program has a whole lot of code that actually isn't performance critical at all. For example, 3d software needs a really huge amount of GUI and other support code that isn't performance critical at all. The performance hotspots are really just individual functions doing the core of the processing work for any of the features it offers. The initialization/scaffolding code around that just doesn't matter. The same translates to all other software that I have worked on.
Static web servers I've actually seen spend most of their time in a couple of very hot paths (mostly the kernel's TCP stack). The others I agree with 100%, and of course the same goes if your web server is doing any dynamic page work. Web browsers, too, and probably many other important categories of software.
I believe the grand vision for Tarkov was for basically the whole world to be outside/dungeon. Kinda sad they didn't have the technical skill to pull off open world. That would have been an interesting gaming experience.
Cylindrical straw not included. Limited time offer. Warranty may be void if spaceship uses any reaction wheel or propulsion system. Other exclusions and limitations apply, see ...
There are two things one might care about when computing an SDF .. the isosurface, or the SDF itself.
If you only care about the isosurface (i.e. where the function is 0), you can do any ridiculous operations you can think of, and it'll work just fine. Add, sub, multiply, exp .. whatever you want. Voxel engines do this trick a lot. Then it becomes more of a density field, as opposed to a distance field.
If you care about having a correct SDF, for something like raymarching, then you have to be somewhat more careful. Adding two SDFs does not result in a valid SDF, but taking the min or max of two SDFs does. Additionally, the analytical derivative of an SDF breaks if you add two SDFs, but it still works if you take a min or max. The same applies to smooth min/max.
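A rough sketch of the difference (TypeScript rather than GLSL; the names are made up for illustration):

```ts
// Minimal 2D sketch of the operations discussed above.
type Vec2 = { x: number; y: number };

const length2 = (p: Vec2) => Math.hypot(p.x, p.y);
const sub = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x - b.x, y: a.y - b.y });

// Exact SDF of a circle of radius r centered at c.
const sdCircle = (p: Vec2, c: Vec2, r: number) => length2(sub(p, c)) - r;

// Valid combinators: min is the union of the two volumes, max the intersection.
const opUnion = (d1: number, d2: number) => Math.min(d1, d2);
const opIntersect = (d1: number, d2: number) => Math.max(d1, d2);

// NOT a valid SDF: the sum generally over- or under-estimates the true
// distance to the combined surface, so a sphere tracer stepping by it
// can overshoot right through the geometry.
const opAddBroken = (d1: number, d2: number) => d1 + d2;

// Example: query at the origin against the union of two circles.
const p: Vec2 = { x: 0, y: 0 };
const d = opUnion(sdCircle(p, { x: 2, y: 0 }, 1), sdCircle(p, { x: -3, y: 0 }, 1));
console.log(d); // 1: the nearer circle's surface is 1 unit away
```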
To add some more detail, the max of two SDFs is a correct SDF of the intersection of the two volumes represented by the two SDFs, but only on the inside and at the boundary. On the outside it's actually a lower bound.
This is good enough for rendering via sphere tracing, where you want each step's sphere radius to never cross the surface, and to converge to zero at the boundary.
A particular class of fields with this property is fields whose gradient magnitude is nowhere greater than one (i.e. 1-Lipschitz fields).
For example, linear blends of SDFs. So given SDFs f and g you can actually do (f(pos)+g(pos))/2 and get something you can render out the other side. Not sure what it will look like, or if it has some geometrical interpretation though.
Note that speed of convergence suffers if you do too many shenanigans.
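For reference, the basic sphere-tracing loop looks something like this (TypeScript pseudocode rather than GLSL; it assumes the field `dist` never overestimates the true distance, i.e. its gradient magnitude stays at or below one, which holds for min/max of exact SDFs and for the (f + g) / 2 blend above):

```ts
type Vec3 = { x: number; y: number; z: number };

const along = (o: Vec3, dir: Vec3, t: number): Vec3 =>
  ({ x: o.x + dir.x * t, y: o.y + dir.y * t, z: o.z + dir.z * t });

function sphereTrace(
  dist: (p: Vec3) => number, // the field being marched
  origin: Vec3,
  dir: Vec3,                 // assumed normalized
  maxSteps = 128,
  epsilon = 1e-4,
  tMax = 100
): number | null {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const d = dist(along(origin, dir, t));
    if (d < epsilon) return t; // close enough: we hit the isosurface
    t += d;                    // safe step: d is a lower bound on the distance
    if (t > tMax) break;
  }
  return null; // miss, or ran out of steps because convergence was too slow
}
```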
I did some simple experiments and fairly swiftly discovered where I went wrong. I'm still not totally convinced that there isn't something clever that can be done for more operations.
My next thought is maybe you can do some interesting shenanigans by jumping to the nearest point on one surface then calculating a modulation that adjusts the distance by an amount. I can certainly see how difficult it would become if you start making convex shapes like that though. There must be a way to take the min of a few candidates within the radius of a less precise envelope surface.
No, I was thinking of a hard min, but one that finds a greedy but inaccurate distance, and then a refinement step takes some samples that measure the nearest distance within a radius. This would handle modulations of the shape where it folds back upon itself, as long as they don't fold within the subsample radius.
It's multi-sample, but selective rather than weighted.
I owe iq so much; a living legend. Inigo, if you happen to ever read this, thanks so much for all the work you've published. Your YouTube videos (not to mention Shadertoy) sparked an interest in graphics I never knew I had.
For anyone who's unfamiliar, his YouTube videos are extremely well put together, and well worth the handful of hours to watch.
I can definitely say I wouldn't know half of what I do and probably wouldn't have kept at it with writing GLSL and learning more about how GPUs really work without a lot of his freely shared knowledge over the years.
His articles on his website are very much worth a deep read too!
> "tagged" unions of ADT languages like Haskell are arguably pretty clearly inferior to the "untagged" unions of TypeScript
dude .. wut?? Explain to me exactly how this is true, with a real world example.
From where I stand, untagged unions are useful in an extremely narrow set of circumstances. Tagged unions, on the other hand, are incredibly useful in a wide variety of applications.
Example: Option<> types. Maybe a function returns an optional string, but then you are able to improve the guarantee such that it always returns a string. With untagged unions you can just change the return type of the function from String|Null to String. No other changes necessary. For the tagged case you would have to change all(!) the call sites, which expect an Option<String>, to instead expect a String. Completely unnecessary for untagged unions.
A similar case applies to function parameters: In case of relaxed parameter requirements, changing a parameter from String to String|Null is trivial, but a change from String to Option<String> would necessitate changing all the call sites.
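Roughly, in TypeScript terms (the Option type and the function names here are hand-rolled stand-ins, not anything built into the language):

```ts
// Hypothetical Option type for contrast; TypeScript has no built-in Option.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

// Untagged: if this later tightens from `string | null` to `string`,
// every caller keeps compiling, because string is a subtype of string | null.
function findName(id: number): string | null {
  return id === 1 ? "Ada" : null;
}

// A forwarding call site: it never mentions null, so it is untouched
// by the tightening either way.
function logName(id: number): void {
  report(findName(id));
}
function report(name: string | null): void {
  console.log(name ?? "<unknown>");
}

// Tagged: tightening Option<string> to plain string would break every
// call site that matches on { kind: ... } and force a rewrite.
function findNameTagged(id: number): Option<string> {
  return id === 1 ? { kind: "some", value: "Ada" } : { kind: "none" };
}
```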
> From where I stand, untagged unions are useful in an extremely narrow set of circumstances. Tagged unions, on the other hand, are incredibly useful in a wide variety of applications.
I think your Option/String example is a real-world tradeoff, but it’s not a slam-dunk “untagged > tagged.”
For API evolution, T | null can be a pragmatic “relax/strengthen contract” knob with less mechanical churn than Option<T> (because many call sites don’t care and just pass values through). That said, it also makes it easier to accidentally reintroduce nullability and harder to enforce handling consistently; the failure mode is “it compiles, but someone forgot the check.”
In practice, once the union has more than “nullable vs present”, people converge to discriminated unions ({ kind: "ok", ... } | { kind: "err", ... }) because the explicit tag buys exhaustiveness and avoids ambiguous narrowing. So I’d frame untagged unions as great for very narrow cases (nullability / simple widening), and tagged/discriminated unions as the reliability default for domain states.
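For anyone unfamiliar with that pattern, a small sketch (the Result type and its field names are illustrative, not from the article):

```ts
type Result<T, E> =
  | { kind: "ok"; value: T }
  | { kind: "err"; error: E };

function describe(r: Result<number, string>): string {
  switch (r.kind) {
    case "ok":
      return `got ${r.value}`;
    case "err":
      return `failed: ${r.error}`;
    default: {
      // Exhaustiveness: if Result ever grows a third variant, this
      // assignment stops compiling and points straight here.
      const unreachable: never = r;
      return unreachable;
    }
  }
}

console.log(describe({ kind: "ok", value: 42 }));    // "got 42"
console.log(describe({ kind: "err", error: "io" })); // "failed: io"
```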
For reliability, I’d rather pay the mechanical churn of Option<T> during API evolution than pay the ongoing risk tax of “nullable everywhere.”
My post argues for paying costs that are one-time and compiler-enforced (refactors) vs costs that are ongoing and human-enforced (remembering null checks).
I believe there is a misunderstanding. The compiler can check untagged unions just as much as it can check tagged unions. I don't think there is any problem with "ambiguous narrowing", or "reliability". There is also no risk of "nullable everywhere": If the type of x is Foo|Null, the compiler forces you to write a null check before you can access x.bar(). If the type of x is Foo, x is not nullable. So you don't have to remember null checks (or checks for other types): the compiler will remember them. There is no difference to tagged unions in this regard.
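Concretely, assuming TypeScript with strictNullChecks (Foo here is just an illustrative type):

```ts
type Foo = { bar(): void };

function use(x: Foo | null): void {
  // x.bar();      // does not compile: 'x' is possibly 'null'
  if (x !== null) {
    x.bar();       // fine: x has been narrowed to Foo
  }
}
```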
I think we mostly agree for the nullable case in a sound-enough type system: if Foo | null is tracked precisely and the compiler forces a check before x.bar, then yes, you’re not “remembering” checks manually, the compiler is.
Two places where I still see tagged/discriminated unions win in practice:
1. Scaling beyond nullability. Once the union has multiple variants with overlapping structure, “untagged” narrowing becomes either ambiguous or ends up reintroducing an implicit tag anyway (some sentinel field / predicate ladder), as in the sketch after this list. An explicit tag gives stable, intention-revealing narrowing + exhaustiveness.
2. Boundary reality. In languages like TypeScript (even with strictNullChecks), unions are routinely weakened by any, assertions, JSON boundaries, or library types. Tagged unions make the “which case is this?” explicit at the value level, so the invariant survives serialization/deserialization and cross-module boundaries more reliably.
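A quick sketch of point 1, with made-up Circle/Cylinder types:

```ts
// Overlapping structure: nothing uniquely identifies a Circle, so the
// "untagged" check below is really a hand-rolled tag on the height field,
// and it silently breaks if Circle ever gains a height of its own.
type Circle = { radius: number; area: number };
type Cylinder = { radius: number; area: number; height: number };

function label(s: Circle | Cylinder): string {
  return "height" in s ? "cylinder" : "circle";
}
```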
So I’d summarize it as: T | null is a great ergonomic tool for one axis (presence/absence) when the type system is enforced end-to-end. For domain states, I still prefer explicit tags because they keep exhaustiveness and intent robust as the system grows.
If you’re thinking Scala 3 / a sound type system end-to-end, your point is stronger; my caution is mostly from TS-in-the-wild + messy boundaries.
I think the real promise of "set-theoretic type systems" comes when you don't just have (untagged) unions, but also intersections (Foo & Bar) and complements/negations (!Foo). Currently there is no such language with negations, but once you have them, the type system is "functionally complete", and you can represent arbitrary Boolean combinations of types. E.g. "Foo | (Bar & !Baz)". Which sounds pretty powerful, although the practical use is not yet quite clear.
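TypeScript is the obvious partial example here: it already has union (|) and intersection (&), but no negation, so something like Foo | (Bar & !Baz) can't be written yet (the types below are just illustrative):

```ts
type Foo = { foo: string };
type Bar = { bar: number };
type Baz = { baz: boolean };

type FooOrBar = Foo | Bar;   // union
type FooAndBar = Foo & Bar;  // intersection: must have both fields

const both: FooAndBar = { foo: "x", bar: 1 };
const either: FooOrBar = { bar: 1 };

// No complement/negation exists, so "Bar & !Baz" (a Bar that is
// definitely not a Baz) has no direct spelling in today's TypeScript.
```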
> For the tagged case you would have to change all(!) the call sites
Yeah, that's exactly why I want a tagged union; so when I make a change, the compiler tells me where I need to go to do updates to my system, instead of manually hunting around for all the sites.
---
The only time an untagged union is appropriate is when the tag accounts for an appreciable amount of memory in a system that churns through a shit-ton of data, and has a soft or hard realtime performance constraint. Other than that, there's just no reason to not use a tagged union, except "I'm lazy and don't want to", which, sometimes, is also a valid reason. But it'll probably come back to bite you, if it stays in there too long.
> > For the tagged case you would have to change all(!) the call sites
> Yeah, that's exactly why I want a tagged union; so when I make a change, the compiler tells me where I need to go to do updates to my system, instead of manually hunting around for all the sites.
You don't have to do anything manually. There is nothing to do. Changing the return type of a function from String|Null to String is completely safe, the compiler knows that, so you don't have to do any "manual hunting" at call sites.
C doesn't support untagged unions (or intersections) in the modern sense. In a set-theoretic type system, if you want to call a method of Foo, and the type of your variable is Foo|Bar|Baz, you have to do a type check for Bar and Baz first, otherwise the code won't compile.
If I have an untagged union in <language_of_your_choice>, and I'm iterating over an array of elements of type `Foo|Bar|Baz`, and I have to do a dynamic cast before accessing the element (a runtime type check) .. I believe that must actually be a tagged union under the hood, whether you call it a tagged union or not... right? I.e. how would the program possibly know at runtime what the type of a heterogeneous set of elements is without a tag value to tell it?
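In TypeScript at least, I believe the union is erased at compile time and nothing is added under the hood, so any runtime check has to lean on whatever the JavaScript values already expose (typeof, instanceof, field presence), or else you end up adding a field that is a tag in all but name. A small sketch (Elem and show are made up):

```ts
type Elem = string | number | Date;

function show(e: Elem): string {
  if (typeof e === "string") return e;             // distinguished by typeof
  if (typeof e === "number") return e.toFixed(2);  // distinguished by typeof
  return e.toISOString();                          // only Date is left
}

const items: Elem[] = ["a", 1, new Date(0)];
items.forEach(e => console.log(show(e)));
```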
I believe that, by the description provided, most languages that you're talking about must actually represent 'untagged unions' as tagged unions under the hood. See my sibling comment. I'm curious how it could work otherwise.
The ligatures part of this article gets me every time I re-read it. I think reading this article may have been the first time I realized that even large, well-funded projects are still done by people who are just regular humans, and sometimes settle for something that's good enough.