"Today I want to talk about jujutsu, aka jj, which describes itself as being “a Git-compatible VCS that is both simple and powerful”. This is selling itself short. Picking up jj has been the best change I’ve made to my developer workflow in over a decade.
...
" -- Sandy Maguire
A while ago I wrote a small utility, mailctl [1], to provide IMAP/SMTP clients
(like msmtp, fdm, isync, mutt, etc.) with the ability to authorize and renew
OAuth2 credentials. mailctl acts as a kind of smart password manager. In
particular, access token renewal happens automatically in the background,
transparently to the user.
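To give a feel for what is being automated here: at the protocol level, renewing an access token is the standard OAuth2 refresh_token grant from RFC 6749, a POST to the provider's token endpoint with the stored refresh token. Below is a minimal Python sketch of that exchange; the token URL and client credentials are placeholders, and this is of course not mailctl's actual code, which is Haskell and also handles encrypted storage of the secrets.

    import json
    import urllib.parse
    import urllib.request

    def refresh_access_token(token_url, client_id, client_secret, refresh_token):
        """Perform the OAuth2 refresh_token grant and return the new tokens."""
        form = urllib.parse.urlencode({
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": client_id,
            "client_secret": client_secret,
        }).encode()
        request = urllib.request.Request(token_url, data=form, method="POST")
        with urllib.request.urlopen(request) as response:
            payload = json.loads(response.read())
        # Typical response fields: access_token, expires_in, and sometimes
        # a rotated refresh_token that must replace the stored one.
        return payload["access_token"], payload.get("refresh_token", refresh_token)

    if __name__ == "__main__":
        # Placeholder values; a real client keeps these in an encrypted store.
        access, refresh = refresh_access_token(
            "https://provider.example.com/oauth2/token",
            "my-client-id",
            "my-client-secret",
            "stored-refresh-token",
        )

The whole point of a helper like mailctl is that the client programs never see this dance: they just ask for a currently valid access token.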
Since the program is written in Haskell, I also provided precompiled binaries
to spare my potential users, most of them not Haskellers, the complications of
compiling it. The program has become moderately popular. However, little did I
know how big a can of worms I had opened by releasing it to the public.
As many others have pointed out, the two main reasons for the difficulties with
OAuth are 1) the OAuth "standard" is terribly complex, and 2) the service
providers' API documentation is an impenetrable, obscure mess, always one
step behind the currently working system. I have the feeling that the
second one is not just negligence but might also be an intentional, hidden
lock-in mechanism, forcing the use of the vendors' ever-changing API
libraries.
There is a pending pull request partially addressing this:
https://github.com/helix-editor/helix/pull/4204
It would be nice to have a comprehensive and convincing explanation in the documentation which sheds light on the advantages of helix's treatment of lines. Maybe we vim old-timers could learn something new and make a more informed decision about switching to helix.
The article does not mention it, but "symmetry breaking", a technique for exploiting symmetry to prune the search tree, can also be an important component of modern SAT solvers. In some sense one cannot do better than avoiding all symmetries in the given problem; however, that might come at a significant computational price in practice.
Static symmetry breaking is performed as a kind of preprocessing, while dynamic symmetry breaking is interleaved with the search process. The static method has been used more widely, but there are promising attempts to use dynamic symmetry breaking with CDCL solvers. See, for example, cosy (https://github.com/lip6/cosy), a symmetry-breaking library which can be used with CDCL solvers.
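To give a concrete feel for the static flavour (a hand-rolled toy, not cosy's actual interface): take a formula that is invariant under swapping x1 and x2 and add, as preprocessing, the lex-leader clause for that swap, x1 <= x2, i.e. (NOT x1 OR x2). The added clause discards exactly the models that are mirror images of retained ones, so the solver never visits them. In Python:

    from itertools import product

    # Clauses as lists of literals: k stands for x_k, -k for NOT x_k.
    # (x1 v x2) & (x1 v x3) & (x2 v x3) is invariant under swapping x1 and x2.
    clauses = [[1, 2], [1, 3], [2, 3]]

    # Static symmetry breaking for the swap (x1 x2): the lex-leader
    # constraint x1 <= x2, encoded as the single clause (NOT x1 v x2).
    breaking = [[-1, 2]]

    def models(cnf, n_vars=3):
        """Brute-force all satisfying assignments of a small CNF."""
        result = []
        for bits in product([False, True], repeat=n_vars):
            def holds(lit):
                return bits[abs(lit) - 1] if lit > 0 else not bits[abs(lit) - 1]
            if all(any(holds(lit) for lit in clause) for clause in cnf):
                result.append(bits)
        return result

    print(len(models(clauses)))             # 4 models (at least two variables true)
    print(len(models(clauses + breaking)))  # 3 models: (T, F, T) is gone; it is the
                                            # mirror image of the kept model (F, T, T)

A real solver of course does not enumerate assignments; the point is only that the breaking clause removes redundant models while keeping at least one representative from each symmetry class.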
Many moons ago I researched breaking symmetries dynamically by adding conflict clauses. You take the search space, build a graph from it, and pass it to a graph homomorphism detector. Then you create additional clauses that break the symmetry by blocking some of the possible assignments, which shrinks the search space. The overhead of searching for symmetries and of adding the dynamic clauses makes it a difficult trade-off against the ruthless efficiency of traditional CDCL. Also, many competitions used to be held every year to find the fastest solver; these things have been optimized far beyond what you would do for “normal” software.
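For anyone curious how the detection step is usually set up (this is the textbook construction, not necessarily the exact pipeline described above): encode the formula as a colored graph with one vertex per literal and one per clause, an edge between each literal and its negation, and an edge between each clause and its literals. The automorphisms of that graph are the syntactic symmetries of the clause set, and tools like saucy or bliss compute them. A brute-force Python sketch on the small formula from the earlier example:

    from itertools import permutations

    # (x1 v x2) & (x1 v x3) & (x2 v x3)
    literals = [1, -1, 2, -2, 3, -3]
    clauses = [(1, 2), (1, 3), (2, 3)]
    clause_vertices = [("C", i) for i in range(len(clauses))]

    # Colored-graph encoding: negation edges pair each literal with its
    # complement, clause edges connect a clause vertex to its literals.
    edges = {frozenset((v, -v)) for v in (1, 2, 3)}
    edges |= {frozenset((("C", i), lit))
              for i, clause in enumerate(clauses) for lit in clause}

    def automorphisms():
        """Brute force: permute literals and clause vertices within their
        color classes and keep the maps that send every edge to an edge."""
        found = []
        for lit_perm in permutations(literals):
            mapping = dict(zip(literals, lit_perm))
            for cl_perm in permutations(clause_vertices):
                mapping.update(zip(clause_vertices, cl_perm))
                if all(frozenset(mapping[v] for v in e) in edges for e in edges):
                    found.append(dict(mapping))
        return found

    print(len(automorphisms()))  # 6: the formula is invariant under every
                                 # permutation of x1, x2, x3 (and no sign flips)

Each symmetry found this way can then be translated into breaking clauses, as in the static example above; the question is whether doing so pays for itself during the search.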
> The overhead of searching for symmetries and of adding the dynamic clauses makes it a difficult trade-off against the ruthless efficiency of traditional CDCL
That is certainly true. However, once all the other components have had the hell optimized out of them (of course, that is a never-ending story in itself, though probably one with diminishing returns), the optimization of the symmetry-breaking parts will follow. As I mentioned before, efficiently dealing with isomorphism in the search space is the ultimate (theoretical) optimization.
This development can already be observed by looking at the history of graph automorphism packages. For a very long time "nauty" was the only significant player, then suddenly came "saucy" and "bliss". As far as I know, one of the motivations for the newcomers was their use in symmetry breaking.
Not sure what exactly you mean by that, but finding symmetries in a graph, that is, determining its automorphism group, is not a simple problem. It is closely related to the graph isomorphism problem, for which we currently do not know whether it is solvable in polynomial time or NP-complete. On the other hand, we know that the Integer Linear Programming problem is NP-complete.
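To make the terminology concrete: an automorphism is a relabelling of the vertices that preserves adjacency, and the automorphism group collects all of them. The only method that needs no cleverness is trying all n! permutations, which is exactly the blow-up that nauty, saucy, and bliss are designed to avoid. A toy brute-force computation in Python for the 4-cycle, whose automorphism group is the dihedral group of order 8:

    from itertools import permutations

    # The 4-cycle: vertices 0-1-2-3-0.
    vertices = range(4)
    edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

    # Keep the vertex permutations that map every edge onto an edge.
    autos = [p for p in permutations(vertices)
             if all(frozenset((p[a], p[b])) in edges for a, b in edges)]

    print(len(autos))  # 8: four rotations and four reflections (the dihedral group D4)

Checking 4! = 24 permutations is trivial; at a few hundred vertices the factorial search is hopeless, which is why the specialized packages matter.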