Wait... didn't they advertise some utterly perfect blockchain oracle that could resolve disputes in the most objective way possible? Is Polymarket overriding that, or is that the system that is inevitably failing (because the oracle problem is impossible to solve, despite blockchain advocates' claims to the contrary)?
That's a good point. I think the assertion that Polymarket is actively doing (or not doing) anything here is at least incomplete. As far as I know, resolutions are still via UMA, which is (at least formally) an independent entity. No idea how big overlaps in personnel and control are in reality.
When Chrome first came out in 2008, it was noticeably faster than any competitor. Users with only moderate tech knowledge were switching in droves because it was faster. Part of this was that it had a process-per-tab model properly making use of multi-core CPUs for the first time, but much of it was because V8 was fast.
The gap is not so big these days. JavaScriptCore, Spidermonkey, and V8 are all competent.
Not if you're already running servers and server applications. If you already have patterns for running and deploying server software, an SSG requires an extra preprocessing step to generate the HTML for the server.
If you don't use an SSG, this step is done by virtue of the server running.
People complain about GitHub not allowing you to turn off issues and pull requests entirely, but I've always seen it as a positive. It means the truth about code quality, potential caveats, and better forked revisions can disseminate freely even when the author disappears. It becomes a spamfest at times, but is still probably a net positive for the ecosystem.
That being said, as long as you still have the discussion tab, auto-deleting all issues by default is not a big deal.
Java does fine on memory safety, but does not do great on null safety (and the broader invariant-protection / "make invalid states unrepresentable" ethos), has difficult-to-harden concurrency primitives, and won't be adopted in many scenarios due to runtime cost and performance pitfalls. Future Valhalla work fixes some of these issues, but leaves many things spiky.
I dislike Java's abstraction-through-indirection approach, which is related to the non-representable invalid states you mention. But I think it's more of a matter of taste.
Somewhat controversially, I think Java is actually doing fine on null safety: it uses the same approach for it as it does for array index safety. The latter is a problem for any language with arrays without dependent types: out-of-bounds accesses (if detectable at all) result in exceptions (often named differently because exceptions are controversial).
Java's advantage here is that it doesn't pretend that it doesn't have exceptions. I think it's quite rare to take down a service because handling a specific request resulted in an exception. Catching exceptions, logging them, and continuing seems to be rather common. It's not like Rust and Go, where unexpected panics in libraries are often treated as security vulnerabilities because panics are expected to take down entire services, instead of just stopping processing of the current request.
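The per-request pattern described above can be sketched in a few lines (the request loop and `handle` method are made up for illustration): an exception aborts one request, gets logged, and the loop continues.

```java
public class Server {
    // A hypothetical request handler that may throw on bad input.
    static String handle(String req) {
        if (req == null) throw new IllegalArgumentException("null request");
        return "ok: " + req;
    }

    public static void main(String[] args) {
        String[] requests = {"a", null, "b"};
        for (String r : requests) {
            try {
                System.out.println(handle(r));
            } catch (RuntimeException e) {
                // The exception is caught and logged; the service keeps
                // serving subsequent requests instead of going down.
                System.out.println("error logged: " + e.getMessage());
            }
        }
    }
}
```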
I'm not talking about null safety in the sense of null pointers. Null pointers and out-of-bounds pointers are still in the realm of memory safety, which of course Java has solved for the most part.
Proper null safety (sometimes called void safety) is to actually systematically eliminate null values, to force in the type system a path of either handling or explicitly crashing. This is what many newer expressive multi-paradigm languages have been able to achieve (and something functional programming languages have been doing for ages), but remains out of reach for Java. Java does throw an exception on errant null value access, but allows the programmer to forget to handle it by making it a `RuntimeException`, and by the time you might try to handle it, you've lost all of the semantics of what went wrong - what value was actually missing and what a missing value truly means in the domain.
> Catching exceptions, logging them, and continuing seems to be rather common. It's not like Rust and Go, where unexpected panics in libraries are often treated as security vulnerabilities because panics are expected to take down entire services, instead of just stopping processing of the current request.
Comparing exceptions to panics is a category error. Rust for example has great facilities for bubbling up errors as values. Part of why you want to avoid panicking so much is that you don't need to do it, because it is just as easy to create structured errors that can be ignored by the consumer if needed. Java exceptions should be compared to how errors are actually handled in Rust code, it turns out they end up being fairly similar in what you get out of it.
Java introduced Optional to remove nulls. It also introduced a bunch of things to make it behave like functional languages. You can use records for immutable data, sealed interfaces for domain states, you can switch on the sealed interface for pattern matching, use the sealed interfaces + consumers or a command pattern to remove exception handling and have errors as values.
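A small sketch of the Optional part of that list (the `emailFor` lookup is made up): the absence of a value is visible in the return type, so the caller has to decide what the empty case means instead of forgetting it.

```java
import java.util.Optional;

public class NullSafetyDemo {
    // A hypothetical lookup that makes absence explicit in its return
    // type instead of returning null.
    static Optional<String> emailFor(String user) {
        return "alice".equals(user)
                ? Optional.of("alice@example.com")
                : Optional.empty();
    }

    public static void main(String[] args) {
        // The empty case is handled explicitly at the use site.
        String shown = emailFor("bob")
                .map(e -> "email: " + e)
                .orElse("no email on file");
        System.out.println(shown); // prints "no email on file"
    }
}
```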
using an instance of a sealed class in a switch expression also has the nice property that the compiler will produce an error if the cases are incomplete (and as such there's also no need for a default case). So a good case for the "make invalid states unrepresentable" argument.
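A minimal sketch of that exhaustiveness property, assuming Java 21+ pattern matching for switch (the `ParseResult` hierarchy and `parsePort` are illustrative names, not from any real API):

```java
public class SealedDemo {
    // Errors as values: the sealed interface enumerates every outcome.
    sealed interface ParseResult {
        record Ok(int value) implements ParseResult {}
        record Empty() implements ParseResult {}
        record Malformed(String input) implements ParseResult {}
    }

    static ParseResult parsePort(String s) {
        if (s == null || s.isBlank()) return new ParseResult.Empty();
        try {
            return new ParseResult.Ok(Integer.parseInt(s.trim()));
        } catch (NumberFormatException e) {
            return new ParseResult.Malformed(s);
        }
    }

    public static void main(String[] args) {
        // The switch is exhaustive over the sealed hierarchy: adding a new
        // implementor of ParseResult makes this a compile error until the
        // new case is handled, and no default case is needed.
        String msg = switch (parsePort("8080")) {
            case ParseResult.Ok ok -> "port " + ok.value();
            case ParseResult.Empty e -> "no port given";
            case ParseResult.Malformed m -> "bad input: " + m.input();
        };
        System.out.println(msg);
    }
}
```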
I understood what you meant. I just disagree about priorities. Conceptually, every array access (absent dependent types) can fail to yield a value because the index might be out of bounds. Languages that eliminate null values in other areas typically fail to deal with the array indexing issue at the type level, which seems at least as prevalent in real-world code as null pointer dereferences, if not more so.
Regarding the category error, on many platforms, Rust panics use the same underlying implementation mechanism as C++ exceptions. In general, Rust library code is expected to be panic-safe. Some well-known Rust tools use panics for control flow (in the same way one would abuse exceptions). The standard test framework depends on recoverable panics, if I recall correctly. The Rust language gives exceptions a different name and does not provide convenient syntax for handling them, but it still has to deal with the baggage associated with them, and so do Rust library authors who do not want to place restrictions on how their code is reused. It's not necessarily a bad approach, to be clear: avoiding out-of-bounds indexing errors completely is hard.
There's nothing stopping you from writing code that is completely functional and devoid of nulls these days. It's just that Java obviously still allows nulls if someone needs to use them (partly for interoperability with legacy code).
But if you're going to argue about the mere presence of null being problematic, you might as well complain about the ability to use "unsafe" code in Rust too.
It's at the kernel level. Each process has its own current working directory. On Linux, these CWD values are exposed at `/proc/[...]/cwd`. This value affects the resolution of relative paths in filesystem operations at a syscall level.
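On Linux, both views of a process's working directory can be observed from a single program; a small Java sketch (Linux-only, since it reads the `/proc` filesystem):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ProcCwd {
    public static void main(String[] args) throws Exception {
        // The JVM captures its working directory at startup:
        System.out.println("user.dir:       " + System.getProperty("user.dir"));
        // The kernel's view of the same per-process state is exposed
        // as a symlink under /proc (here, for this very process):
        Path cwd = Files.readSymbolicLink(Path.of("/proc/self/cwd"));
        System.out.println("/proc/self/cwd: " + cwd);
    }
}
```

The two lines normally show the same directory, though they can differ textually if the launch path contained symlinks, since the kernel reports the resolved path.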
Interesting. I've been using Unix systems for 30 years and never noticed this.
On my Fedora system, /usr/bin/cd is just a shell script that invokes the shell builtin:
    #!/usr/bin/sh
    builtin cd "$@"
I suppose it could be useful for testing whether a directory exists with search permissions for the current user safely in a multithreaded program that relies on the current directory remaining constant.
Yeah, it's typically a shell built-in since you'd want cd to change the cwd for the shell process itself. Child processes (like commands being executed in the shell) can inherit the parent shell's cwd but AFAIK the opposite isn't true.
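That one-way inheritance can be demonstrated directly (a sketch; the temp-directory name is arbitrary): the parent gives the child a different cwd, the child reports it, and the parent's own cwd is unchanged afterward.

```java
import java.io.File;
import java.nio.file.Files;

public class ChildCwd {
    public static void main(String[] args) throws Exception {
        // Start a child process ("pwd") with its cwd set to a fresh temp
        // directory. ProcessBuilder.directory() sets the child's cwd at
        // launch, much as the child could chdir() on its own.
        File tmp = Files.createTempDirectory("cwd-demo").toFile();
        Process p = new ProcessBuilder("pwd").directory(tmp).start();
        String childCwd = new String(p.getInputStream().readAllBytes()).trim();
        p.waitFor();
        System.out.println("child cwd:  " + childCwd);
        // The parent's cwd is untouched: the change never propagates back up.
        System.out.println("parent cwd: " + System.getProperty("user.dir"));
    }
}
```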
In the near future I fear there may be laws about "LLMing while drunk" after enough rogue LLM agents vibe coded while drunk cause widespread havoc. You know, folks harassing exes or trying to hack military depots to get a tank.