> Surely a better approach is to record the complete ancestry of every check-in but then fix the tool to show a "clean" history in those instances where a simplified display is desirable and edifying
From your link. The actual issue that people ought to be discussing in this comment section imo.
Why do we advocate destroying information/data about the dev process when in reality we need to solve a UI/display issue?
The number of times in the last 15ish years I've solved something by looking back at the history and piecing together what happened is high enough that I consider it very poor practice to remove the intermediate commits that actually track the software development process. (E.g. a refactor from A to B as part of a PR, then B tweaked into C before the merge, where important details exist only because of B, and you don't realize they are important until 2 years later.)
Because nobody cares about the dev process. Every time I've looked back in the history and found a branch with a series of twenty commits labeled “fix thing”, “oops”, “typo”, “remove thing I tried that didn’t work”, or just a chain of WIP WIP WIP WIP, it has been useless, irritating, and pointless.
One commit per logical change. One merge per larger conceptual change. I will rewrite my actual dev process so that individual commits can be reviewed as small, independent PRs when possible, and so that bigger PRs can be reviewed commit-by-commit to understand the whole. Because I care about my reviewers, and because I want to review code like this.
Care about your goddamn craft, even just a little bit.
Isn't this just `--first-parent`? I think that should probably be the default in git. Maybe the only way this will happen is with a new SCM.
But the git authors are adamant that there's no convention for linearity, and have somehow extended that into a reason there shouldn't be a "theirs" merge strategy to mirror "ours" (writing it out, it makes even less sense: "theirs" is what you'd want in a first-parent-linear repo, not "ours").
Yes, that is also my feeling. But comparing an interpreted language with a compiled one is not really fair.
Here is my quick benchmark. I refrain from using Python for most scripting/prototyping tasks but really like Janet [0]. A comparison for printing the current time in Unix epoch:
    $ hyperfine --shell=none --warmup 2 "python3 -c 'import time;print(time.time())'" "janet -e '(print (os/time))'"

    Benchmark 1: python3 -c 'import time;print(time.time())'
      Time (mean ± σ):      22.3 ms ±  0.9 ms    [User: 12.1 ms, System: 4.2 ms]
      Range (min … max):    20.8 ms … 25.6 ms    126 runs

    Benchmark 2: janet -e '(print (os/time))'
      Time (mean ± σ):       3.9 ms ±  0.2 ms    [User: 1.2 ms, System: 0.5 ms]
      Range (min … max):     3.6 ms …  5.1 ms    699 runs

    Summary
      'janet -e '(print (os/time))'' ran
        5.75 ± 0.39 times faster than 'python3 -c 'import time;print(time.time())''
Concerning (1): I have no offline sync in place; all my emails stay on the server. The IMAP protocol includes decent server-side search[0], and combined with Gnus' unified search syntax[1] I enjoy a hassle-free search experience.
Gnus got some massive IMAP performance improvements a few years ago (probably close to a decade now). Before that it was quite painful to use on large mailboxes without a local IMAP mirror, which I used to sync with offlineimap. When offlineimap had a massive issue moving from Python 2 to Python 3, and keeping it running on a modern distro started getting painful, I tried Gnus without the local mirror and realised those improvements made things fast enough that you can run it on remote mailboxes, and even do so in your main Emacs instance.
But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].
People naturally conform _themselves_ to social expectations. You don't need to enforce anything. If you alter their perception of those expectations, you can manipulate them into taking actions under false pretenses. It's an abstract form of lying. It's astroturfing at "hyperscale."
The problem is this seems to work best only when the technique is used sparingly and the messages are delivered through multiple media avenues simultaneously. I think the returns are very weak when multiple actors use the technique at the same time in opposition to each other, and when it's limited to social media. Once people perceive a social stalemate, they either avoid the issue or use their personal experiences to make their decisions.
>Once people perceive a social stalemate they either avoid
This is called the Firehose of Falsehood and it's a very effective way of killing public participation.
>use their personal experiences to make their decisions
Yes, they can if they have them. But people use other people's personal experiences when they don't, which means all you have to do is become their Facebook friend and then tell them that 'trans Mexican aliens from Mars stole their job' and they'll start repeating it as a personal experience.
But social networks are the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet, you somehow need to convince people to read/watch it. Ads can help, but they're expensive. LLMs can help with creating fake videos, but computer graphics were already used for this.
With modern algorithmic social networks you can instead game the feed, and even people who would not choose your media will start to see your posts. And even posts they do want to see can be flooded with comments trying to convince them of whatever is being paid for. It's cheaper than political advertising and not bound by the law.
Before AI this was done by trolls on a payroll; now they can either maintain 10x more fake accounts or automate the fake accounts completely using AI agents.
Expanding the "Gradual rollout" section is … interesting. I could hardly read it, let alone understand it straight away. For me that's a clear indicator that I am trying to ingest AI-generated content. It's so embarrassing. Is quality in documentation now a foreign concept in the age of AI, or does simply nobody care?
No one cares? I am confident someone got a promotion out of AI automating that. It is the metric being tracked in performance reviews. What is not tracked is how the readers experience it, so no point in putting effort into that.
Bottom line is employees do what they're incentivised to do.
Texinfo ultimately gets the @ convention from Brian Reid's Scribe[1], as developed at Carnegie Mellon during the late 70s and commercialized by Unilogic[2,3] in the 80s. Coincidentally, there was a close derivative of Scribe called Mint[4], also developed at Carnegie Mellon in the early 80s for the PERQ (an early personal workstation competing in the category of things like the Sun-1 or Lisp Machines).
The first and foremost interface of the kernel is the syscall interface aka the uapi. libc and other C libraries like liburing or libcap are downstream of that. Many syscalls still don't have wrappers in libc after years of use.
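As a concrete illustration (a minimal C sketch, not from the thread): gettid() famously went without a glibc wrapper for many years after the syscall landed, and syscall(2) is the standard fallback for any unwrapped syscall:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>  /* SYS_gettid */
    #include <unistd.h>       /* syscall() */

    int main(void) {
        /* Invoke the syscall by number, as programs had to before
         * glibc 2.30 finally shipped a gettid() wrapper. */
        long tid = syscall(SYS_gettid);
        printf("tid = %ld\n", tid);
        return 0;
    }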
Yet for many syscalls there is an official library, in most cases a wrapper in libc; io_uring in particular is known for providing a C library that most applications ought to use instead of the raw syscalls.
"This is the io_uring library, liburing. liburing provides helpers to setup and
teardown io_uring instances, and also a simplified interface for
applications that don't need (or want) to deal with the full kernel
side implementation."
I read the article as saying that there's no official C library but unofficial ones do exist. Quote below, emphasis mine.
> A *official* c library doesn’t exist yet unfortunately, but there’s several out there you can try.
Also, it looks like there is more than zero support for C programs calling Landlock APIs. Even without a 3rd-party library you're not just calling syscall() with a magic number:
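For illustration, a minimal sketch, assuming a kernel with Landlock and a libc new enough to define the SYS_landlock_* numbers: the UAPI header <linux/landlock.h> supplies the structs and constants, syscall(2) supplies the entry point, and the shape follows the kernel's Landlock documentation:

    #define _GNU_SOURCE
    #include <linux/landlock.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* Probe the Landlock ABI version supported by the running kernel. */
        long abi = syscall(SYS_landlock_create_ruleset, NULL, 0,
                           LANDLOCK_CREATE_RULESET_VERSION);
        if (abi < 0) {
            perror("landlock ABI probe");
            return 1;
        }
        printf("Landlock ABI version: %ld\n", abi);

        /* Create a ruleset that handles (and, since no rules are added
         * below, denies) execution of files. */
        struct landlock_ruleset_attr attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
        };
        int ruleset_fd = syscall(SYS_landlock_create_ruleset, &attr,
                                 sizeof(attr), 0);
        if (ruleset_fd < 0) {
            perror("landlock_create_ruleset");
            return 1;
        }

        /* Mandatory before self-restriction without CAP_SYS_ADMIN. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
            syscall(SYS_landlock_restrict_self, ruleset_fd, 0)) {
            perror("landlock_restrict_self");
            return 1;
        }
        close(ruleset_fd);
        puts("this process and its children can no longer exec files");
        return 0;
    }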
I don't understand what you mean. There are no "official" Rust, Haskell, or Go APIs for this thing either. All the available libraries seem to be just what some third party made available. There are also several C libraries, just none that have been officially endorsed by the Linux kernel team.
Go is famous for not needing libc and talking to the kernel directly. Rust and Haskell have communities that are very interested in safety and security, so they are earlier adopters.
For C, unofficial support has apparently sufficed for now.
It's pretty subtle, but it's referring to The C Library, libc.{a,so,dll,etc}: the library provided by your toolchain that supports the language.
Meaning glibc or musl or your favorite C library probably doesn't have this yet, but since the system calls are well defined you can use A C library (for example by writing your own header that wraps syscall(2) with the SYS_landlock_* numbers; the ancient _syscallN macros once used for this are long gone from the kernel headers).
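A sketch of what such a homegrown header could look like; the wrapper names here simply mirror the syscall names, much like the helpers in the kernel's own Landlock sample code:

    /* landlock_syscalls.h: thin, hand-rolled wrappers over syscall(2). */
    #ifndef LANDLOCK_SYSCALLS_H
    #define LANDLOCK_SYSCALLS_H

    #define _GNU_SOURCE
    #include <linux/landlock.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static inline int
    landlock_create_ruleset(const struct landlock_ruleset_attr *attr,
                            size_t size, __u32 flags)
    {
        return (int) syscall(SYS_landlock_create_ruleset, attr, size, flags);
    }

    static inline int
    landlock_add_rule(int ruleset_fd, enum landlock_rule_type rule_type,
                      const void *rule_attr, __u32 flags)
    {
        return (int) syscall(SYS_landlock_add_rule, ruleset_fd, rule_type,
                             rule_attr, flags);
    }

    static inline int
    landlock_restrict_self(int ruleset_fd, __u32 flags)
    {
        return (int) syscall(SYS_landlock_restrict_self, ruleset_fd, flags);
    }

    #endif /* LANDLOCK_SYSCALLS_H */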
The lack of a C API should not stop any C developers from using it, hopefully. The wrapper libraries are relatively simple (i.e. https://codeberg.org/git-bruh/landbox) and both Rust and Go can expose a C FFI in case developers would rather link against a more "official" library instead.
There is no "liblandlock" or whatever, though there totally could be. The only reason Rust, Go, and Haskell have an easy-to-use API for this syscall is that someone bothered to implement a wrapper and publish it to the usual package managers. Whatever procedure distros use to add new libraries could just as easily be used to ship a landlock library/header for C.
Because C is not the primary interface language for kernel syscalls. There is no language-specific primary interface: the syscall itself is the primary interface, and it is language-agnostic. This is one of Linux's great strengths, a stable syscall API that doesn't rely on a system library.
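To make "language-agnostic" concrete, an x86-64-only sketch (syscall numbers are per-architecture; 1 is write(2) here): the kernel call goes through the raw syscall ABI with no libc wrapper involved, and any language that can load those registers can make the same call:

    /* x86-64 only: write(2) via the raw syscall ABI, no libc wrapper. */
    static long raw_write(int fd, const void *buf, unsigned long len) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)                  /* rax: return value */
                          : "a"(1),                    /* rax: __NR_write   */
                            "D"(fd), "S"(buf), "d"(len)
                          : "rcx", "r11", "memory");   /* clobbered by syscall */
        return ret;
    }

    int main(void) {
        raw_write(1, "hello from the raw syscall ABI\n", 31);
        return 0;
    }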
I also prefer Fossil to Git whenever possible, especially for small or personal projects.
[0] https://fossil-scm.org/home/doc/trunk/www/rebaseharm.md