> Git provides file versioning services only, whereas Fossil adds an integrated wiki, ticketing & bug tracking, embedded documentation, technical notes, a web forum, and a chat service [...]
I like the idea of having all of those within the actual VCS, mostly because with Git you need centralized services like GitHub to provide that.
But I have to ask: Is it really a good idea? Seems like feature creep motivated by the wants of a single project (SQLite).
All of those could be (albeit awkwardly) backed with a git repo and a cron job.
Wiki? Just make a repo with a bunch of Markdown or this-week's-favorite-markup-language files.
Ticketing & bug tracking? Again, just a Markdown file for every ticket (a rough sketch of that follows below).
Embedded documentation & technical notes? Those are just special wiki pages with different attributes.
Forum and chat service? Do you want your VCS to do that? I get being able to hyperlink between files and conversations, but still.
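A rough sketch of the ticket half of that (the `tickets/` directory, a `0001-fix-crash.md` naming scheme, and the `Status: open` header line are all made-up conventions, not any real tool): a cron job could run something like this after a `git pull` to list whatever is still open.

```rust
// Sketch only: assumes ticket files live in tickets/*.md and start with a
// plain "Key: value" header block (Status, Title, ...).
use std::fs;

fn main() -> std::io::Result<()> {
    let dir = "tickets"; // hypothetical directory inside the repo
    let entries = match fs::read_dir(dir) {
        Ok(e) => e,
        Err(_) => {
            eprintln!("no {dir}/ directory here");
            return Ok(());
        }
    };
    for entry in entries {
        let path = entry?.path();
        if path.extension().and_then(|e| e.to_str()) != Some("md") {
            continue;
        }
        let text = fs::read_to_string(&path)?;
        // Treat "Status: open" anywhere in the leading header block as open.
        let open = text
            .lines()
            .take_while(|l| !l.trim().is_empty())
            .any(|l| l.trim().eq_ignore_ascii_case("status: open"));
        if open {
            println!("open ticket: {}", path.display());
        }
    }
    Ok(())
}
```

Awkward, as promised, but it needs nothing beyond git and the filesystem.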
I want it embedded in git simply to break the hold GitHub has. We have this fantastic distributed fault-tolerant DVCS that gets funneled through at worst 1 service, at best maybe 3 or 4.
I'd love to clone a repo and be able to view all the reasoning behind commits with the context of issues too. I know the commit message should cover this, but sometimes it doesn't, or it's too much context, or the context is limited to the opinion of the committer. I think all that information is relevant to projects and should have some chance to live alongside them. Stuff like git-bug exists, but then you still need participation from other people.
I really love the idea of radicle.xyz, which is git + p2p + issues & patches (called `COB` - collaborative objects) all in your repo, but getting the buy-in of the wider population seems extremely difficult, if not impossible. I think part of the attraction here specifically is nostalgia for me; it feels like it's invoking the 90s/00s, where it was all a big mesh network, information wanted to be free and you couldn't stop the signal.
Fossil also seems cool, but the rest of the world is tied to git and I'm tied to jj now. I guess I really wish git themselves [sic] would push something forward; I think that's the only way it would really get broad acceptance. Forges could adopt it and try to special-sauce parts but still let you push/pull "COB"s.
> Stuff like git-bug exists, but then you still need participation from other people.
The plan is to 1) finish the webUI and 2) accept external auth (e.g. GitHub OAuth). Once done, anyone can trivially host their own forge publicly and accept public contributions without any buy-in effort. Then, if a user wants to go native, they just install git-bug locally.
Whoa, git-bug is still being developed, awesome! I wonder how difficult it would be to add other tables to it (I cannot help but think about bug trackers as being a database with a frontend like Access, and many limitations...) — in particular to have Messages (for messages) and Discussions (for a hierarchical list of message references). Now that git has reftable, maybe this sort of abuse would actually work...
Assuming that by "table" you mean another "document type" ... pretty easily. There is a reusable CRDT-like data structure that you can use to define your own thing. You do that by defining the operations that can happen on it and what they do.
You don't have to handle the interaction with git or the conflict resolution.
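A hedged sketch of that idea (git-bug itself is written in Go and has its own entity layer, so everything below is made-up Rust with invented names, just to show the shape of it): a new document type is its set of operations plus a fold that rebuilds the current state, and the layer that stores and merges the operation log never needs to understand what the operations mean.

```rust
// Hypothetical "Message" document type defined purely by its operations.
#[derive(Debug, Clone)]
enum MessageOp {
    Create { author: String, body: String },
    Edit { body: String },
    AddReply { author: String, body: String },
}

#[derive(Debug, Default)]
struct MessageState {
    author: String,
    body: String,
    replies: Vec<(String, String)>, // (author, body)
}

impl MessageState {
    // Apply one operation; the full state is a left fold over the op log.
    fn apply(&mut self, op: &MessageOp) {
        match op {
            MessageOp::Create { author, body } => {
                self.author = author.clone();
                self.body = body.clone();
            }
            MessageOp::Edit { body } => self.body = body.clone(),
            MessageOp::AddReply { author, body } => {
                self.replies.push((author.clone(), body.clone()));
            }
        }
    }
}

fn main() {
    let ops = vec![
        MessageOp::Create { author: "alice".into(), body: "first draft".into() },
        MessageOp::Edit { body: "final draft".into() },
        MessageOp::AddReply { author: "bob".into(), body: "looks good".into() },
    ];
    let mut state = MessageState::default();
    for op in &ops {
        state.apply(op);
    }
    println!("{state:?}");
}
```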
"Feature creep" is hard to characterize. If the project needs and uses it, is it really "feature creep"?
Or, from Wikipedia, "The definition of what qualifies as "feature creep" varies among end users, where what is perceived as such by some users may be considered practical functionality by others." - https://en.wikipedia.org/wiki/Feature_creep
Hipp (the original SQLite author) also developed his own parser generator (Lemon) and his own editor (e). The former is also used by other projects.
Where do you store the different attributes? In the file-system? How do you manage consistency? Why put up with awkward cron solutions when you have a fully ACID database system right there to work with, which is portable across OSes, including ones which don't have cron?
If it helps any, don't think of it as a VCS but as an SCM system - one which includes version control.
I really like the idea of making your company's office a publicly accessible coworking space.
It does fit Kagi's slogan of 'Humanize the web', imagine if you could go into Google and chat with Larry Page.
European here. I've been to the US and holy mother of Jesus, you put sooooo much sugar into everything. I had to buy 'European' bread because your normal bread made my gums hurt, and even then that was the sweetest bread I've ever eaten.
Seriously, when your one large Oreo shake has 2600 calories, no wonder your obesity rate is 35% and isn't slowing. Driving to the toilet instead of walking also doesn't help. Then your hospitals get overrun with preventable diseases and healthcare gets expensive. This isn't a 'caring' problem when getting fat is the only option for most people; the way most people live is specifically designed to make them obese.
A little hyperbole, but as an American, the idea that the average person in my country would rather drive somewhere than feel inconvenienced by a short walk is very accurate.
> Setting the value of href navigates to the provided URL [0]
It would have been caught because this API (side-effecting setters) is impossible in Rust. At best, you'd have a .set_href(String).await, which would suspend the caller until the location has been updated and therefore the value stabilized. At worst, you'd have a public .href variable, but because the setter pattern is impossible, you know there must be some process checking and scheduling updates.
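A tiny hypothetical sketch of that point (none of this is a real web-sys or wasm-bindgen API): `=` can't be overloaded in Rust, so `loc.href = url` can never secretly trigger navigation; the side effect has to be an explicit method, and the async flavour would just be an `async fn set_href(&mut self, url: String)` that the caller has to `.await` before the value is stable.

```rust
// Made-up Location type; the blocking method stands in for the real thing.
struct Location {
    href: String, // reading is just a field access or a plain getter
}

impl Location {
    fn href(&self) -> &str {
        &self.href
    }

    // A "setter" must be a method; this one returns only once the
    // (elided) navigation has settled and the value is consistent.
    fn set_href(&mut self, url: String) {
        self.href = url;
    }
}

fn main() {
    let mut loc = Location { href: "about:blank".to_string() };
    loc.set_href("https://example.com/".to_string());
    // After the call returns, the observed value reflects the new URL.
    assert_eq!(loc.href(), "https://example.com/");
    println!("href = {}", loc.href());
}
```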
Yeah, metric is cool and all, you can divide by ten and multiply by ten. But even better would be a hexadecimal system so that you could halve, quarter, and eighth it. Plus 16 is 4^2, so it's a perfect square \s
Instead of using a new PNG standard, I'd still rather use JPEG XL just because it has progressive decoding.
And you know, whilst looking like PNG, being as small as WebP, supporting HDR and animations, and having even faster decoding speed.
Nothing really supports it. The latest Safari at least supports it without a feature flag or anything, but it doesn't support JPEG XL animations.
To be fair, nothing supports a theoretical PNG with Zstandard compression either. While that would be an obstacle to using PNG with Zstandard for a while, I kinda suspect it wouldn't be that long of a wait because many things that support PNG today also support Zstandard anyways, so it's not a huge leap for them to add Zstandard support to their PNG codecs. Adding JPEG-XL support is a relatively bigger ticket that has struggled to cross the finish line.
The thing I'm really surprised about is that you still can't use arithmetic coding with JPEG. I think the original reason is due to patents, but I don't think there have been active patents around that in years now.
Every new image codec faces this challenge. PNG + Zstandard would look similar. The ones that succeeded have managed it by piggybacking off a video codec, like https://caniuse.com/avif.
It is possible to polyfill an image format; this was done with FLIF¹². Not that it means FLIF got the traction required to be used much anywhere outside its own demos…
It is also possible to detect support and provide different formats (so those supporting a new format get the benefit of smaller data transfer or other features), though this rarely happens, as it usually isn't enough of an issue to warrant the extra complication.
Any polyfill requires JavaScript which is a dealbreaker for something as critical as image display, IMO.
Would be interesting if you could provide a decoder for <picture> tags to change the formats it supports, but I don't see how you could do that without the browser downloading the PNG/JPEG version first, thus negating any bandwidth benefits.
Depending on the site it might be practical to detect JS on first request and set a cookie to indicate that the new format (and polyfill) can be sent on subsequent requests instead of the more common format.
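A minimal sketch of that cookie dance (the cookie name, the extensions, and this standalone function are all made up; in practice it would live inside whatever request handler the site already has): the first response ships the common format plus a tiny script that sets something like `imgfmt=new`, and later requests consult the cookie.

```rust
// Hypothetical format negotiation based on a cookie set by client-side JS.
fn pick_image(cookie_header: Option<&str>, base: &str) -> String {
    let supports_new = cookie_header
        .map(|c| c.split(';').any(|kv| kv.trim() == "imgfmt=new"))
        .unwrap_or(false);
    if supports_new {
        format!("{base}.jxl") // later requests: new format + JS polyfill
    } else {
        format!("{base}.png") // first request / no JS: the safe default
    }
}

fn main() {
    assert_eq!(pick_image(None, "/img/cat"), "/img/cat.png");
    assert_eq!(
        pick_image(Some("theme=dark; imgfmt=new"), "/img/cat"),
        "/img/cat.jxl"
    );
    println!("ok");
}
```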
Or, for a compiled-to-static site, just use <NOSCRIPT> to let those without JS enabled go off to the version compiled without support for (or need of) such things.
The zstd library is already included by most major browsers since it is a supported content encoding. Though I guess that does leave out Safari, but Safari should probably support Zstd for that, too. (I would've preferred that over Brotli, but oh well.)
Btw, could you 'just' use no compression on this level in the PNG, and let the transport compression handle it?
So on paper (and on disk) your PNG would be larger, but the number of bits transmitted would be almost the same as using Zstd?
EDIT: similarly, your filesystem could handle the on-disk compression.
This might work for something like PNG, but would work less well for something like JPG, where the compression part is much more domain specific to image data (as far as I am aware).
If there is a particular reason why that wouldn't work, I'm not aware of it. Seems like you would eat a very tiny cost for deflate literal overhead (a few bytes per 65,535 bytes of literal data?) but maybe you would wind up saving a few bytes from also compressing the headers.
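For a sense of scale, the arithmetic behind that, assuming stored (uncompressed) deflate blocks at roughly 5 bytes of framing each (the BFINAL/BTYPE bits padded to a byte, then LEN + NLEN) per up to 65,535 bytes of payload; the 10 MiB figure is just an example:

```rust
// Back-of-the-envelope cost of "no compression inside the PNG".
fn main() {
    let payload: u64 = 10 * 1024 * 1024;
    let blocks = (payload + 65_534) / 65_535; // ceil(payload / 65_535)
    let overhead = blocks * 5; // ~5 bytes of framing per stored block
    println!(
        "{blocks} stored blocks -> ~{overhead} bytes of framing ({:.4}% of payload)",
        overhead as f64 / payload as f64 * 100.0
    );
}
```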
Maybe for websites like Instagram that consist primarily of images. For everywhere else you have to amortize the cost of the download over the savings for the number of images in an average browsing session, as browsers segment the cache so you can't assume it will be available locally hot.
Actually, I wonder why, in general, more decoders aren't just put into WebAssembly and kept 'hot' on demand. Couldn't this also be an extension? Wouldn't that reduce the attack surface? I remember a time when most video and Flash was a plugin. People would download the stuff. On the other hand, using a public CDN would at least keep the traffic down for whoever hosts the website.
Browser makers could easily let resources opt out of cache segmentation and then if everyone agreed on a CDN, or a resource integrity hash of a decoder, the wasm could be hot in the cache and used to extend the browser without Chrome needing to maintain it or worry about extra sandboxing.
They don't do it because they don't want people extending the web platform outside their control.
As I understand it JPEG XL has a lot of interest in medical imaging and is coming to DICOM. After it's in DICOM, whichever browser supports it best will rule hospitals.
That's because people have allowed the accumulation of power and control by Big Tech. Features in and capabilities of end user operating systems and browsers are gate kept by a handful of people in Big Tech. There is no free market there. Winners are picked by politics, not merit. Switching costs are extreme due to vendor lock in and carefully engineered friction.
The justification for WebP in Chrome over JPEG-XL was pure hand waving nonsense not technical merit. The reality is they would not dare cede any control or influence to the JPEG-XL working group.
Hell the EU is CONSIDERING mandatory attestation driven by whitelisted signed phone firmwares for certain essential activities. Freedom of choice is an illusion.
It's better when the way it works is "this format is good, therefore we will support it" rather than "people support this format, therefore it is good".
Believe it or not, last I checked, many browsers and some other software (file managers, etc.) still couldn't do anything with JPEG files that have arithmetic coding. Apparently, although I haven't tried this myself, Adobe Photoshop also specifically doesn't support it.
Arithmetic coding decodes 1 bit at a time, usually in such a way that you can’t do two bits or more with SIMD instructions. So it will be slow and energy inefficient.
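To make the serial dependency concrete, here is a minimal sketch of an LZMA-style adaptive binary range decoder (not JPEG's actual QM coder, but structurally the same kind of thing): every decoded bit reads and rewrites `code`, `range`, and the probability model, so bit N+1 cannot start until bit N has finished, which is exactly what defeats SIMD.

```rust
// Sketch only: fed arbitrary bytes just to show the loop-carried dependency.
struct RangeDecoder<'a> {
    code: u32,
    range: u32,
    input: std::slice::Iter<'a, u8>,
}

impl<'a> RangeDecoder<'a> {
    fn new(data: &'a [u8]) -> Self {
        let mut input = data.iter();
        let mut code = 0u32;
        for _ in 0..4 {
            code = (code << 8) | u32::from(*input.next().unwrap_or(&0));
        }
        RangeDecoder { code, range: u32::MAX, input }
    }

    // Decode one bit against an adaptive 11-bit probability.
    fn decode_bit(&mut self, prob: &mut u16) -> u32 {
        let bound = (self.range >> 11) * u32::from(*prob);
        let bit = if self.code < bound {
            self.range = bound;
            *prob += (2048 - *prob) >> 5; // model drifts toward "0"
            0
        } else {
            self.code -= bound;
            self.range -= bound;
            *prob -= *prob >> 5; // model drifts toward "1"
            1
        };
        // Renormalize: pull in one more input byte when the range shrinks.
        if self.range < (1 << 24) {
            self.range <<= 8;
            self.code = (self.code << 8) | u32::from(*self.input.next().unwrap_or(&0));
        }
        bit
    }
}

fn main() {
    let data = [0x3a, 0x91, 0xc7, 0x08, 0x55, 0xee, 0x12, 0x77];
    let mut dec = RangeDecoder::new(&data);
    let mut prob: u16 = 1024; // start at p(0) = 0.5
    let bits: Vec<u32> = (0..16).map(|_| dec.decode_bit(&mut prob)).collect();
    println!("decoded bits: {bits:?}");
}
```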
JPEG-XL is supported by a lot of the most important parts of the ecosystem (image editors and the major desktop operating systems) but it is a long way away from "everything". Browsers are the most major omission, but given their relative importance here it is not a small one. JPEG-XL is dead in the water until that problem can be resolved.
If Firefox is anything to go off of, the most rational explanation here seems to just be that adding a >100,000 line multi-threaded C++ codebase as a dependency for something that parses untrusted user inputs in a critical context like a web browser is undesirable at this point in the game (other codecs remain a liability but at least have seen extensive battle-testing and fuzzing over the years.) I reckon this is probably the main reason why there has been limited adoption so far. Apple seems not to mind too much, but I am guessing they've just put so much into sandboxing Webkit and image codecs already that they are relatively less concerned with whether or not there are memory safety issues in the codec... but that's just a guess.
Apple also adopted JPEG-XL across their entire software stack. It's supported throughout the OS, and by pretty much every application they develop, so I'm guessing they sunk a fair bit of time/money into hardening their codec.
Only because it's both the reference encoder and decoder, and the encoder tends to be a lot more complex than the decoder. (Source: I have developed a partial JPEG XL decoder in the past, and it was <10K lines of C code.)
So far, it is following the same pattern: Safari adopts it, no one else does, and then one day Safari drops it. It is currently on step 2. When step 3 occurs, the cycle will be complete.
I don't like progressive decoding. Lots of people don't realize that it's a thing, and complain that my photo is blurry when it simply hasn't loaded fully yet. If it just loaded normally from top to bottom, it would be obvious whether it has loaded or not, and people will be able to more accurately judge the quality of the image. That's why I always save my JPEGs as baseline encoding.
Web browsers already have code in place for WebP (lossless, VP8) and AVIF (AV1, which also supports animation), as well as classic JPEG and PNG, and maybe also HEIC (HEVC/H.265)... what benefit do we get from adding yet another file format if all the use cases are already covered by the existing formats? That said, I do like JPEG-XL, but I also kind of understand the hesitation to adopt it. I imagine if Apple's push for it continues, then it is just a matter of time before it gets supported more broadly in Chrome etc.
AVIF is cute, but using that as an excuse to not add JXL is a travesty. At the time, either one of those could have been added; JXL should have been the choice.
The biggest benefit is that it's actually designed as an image format. All the video offshoots have massive compromises made so they can be decoded in 15 milliseconds in hardware.
The ability to shrink old jpegs with zero generation loss is pretty good too.
True, but https://news.ycombinator.com/item?id=44802079 presumably holds the opinion that APNG != PNG, so I mentioned PNG 3 to counteract that. Animated PNGs being officially PNG is recent.
Adam7 is interlacing, not progressive decoding (i.e. it cannot be used to selectively decode a part of the image). It also interacts extremely poorly with compression; there is no good reason to ever use it.
Jason Booth (the author) also has a YouTube channel talking about similar topics.
I really liked his 'Practical Optimizations' video: https://youtu.be/NAVbI1HIzCE
Right, Windows Hello requires it for facial auth; Windows itself does not. Hello still works, you just have to authenticate with a different method if the hardware isn't present.
AFAIK Pixel phones, including the Pixel 9, only use 2D images for face unlock. So it's definitely possible to reach mainstream quality with conventional cameras.
(Unless you'd argue that the face unlock found on Pixels is not passable either)
I don't know how Google does it, but it's possible to extract 3d information from a 2d sensor. You either need a variable focus or phase detection in the sensor.
Very cool. Yes, probably? I'll have to think about the relationship between image quality and the fidelity of the derived phase measurement, because it's not obvious how good a camera needs to be to be "good enough" for a secure system.
I guess the task is to design an experiment to test the error between phase inferred from intensity in a digital camera by Huygens-Steiner and a barycentric coordinate map, and far more expensive photonic phase sensors.
Is (framerate-1 Hz) a limit, due to the discrete derivative being null for the first n points?
> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.
IDK what happened with light field cameras like the Lytro. They're possibly useful for face ID, too?
> I read somewhere that every augmentation is also an amputation. Progress in tech means we are constantly lobotomising a majority of the population.
Just thought about something:
There are a few sides to this.
There is innovation that just makes things easier but doesn't amputate, like typewriters vs. Word (took me a while to come up with an example, essentially just evolution).
Then there are things that are so old it's useless to know them. Like making butter: sure, you can do it if you want to, and it might be fun, but in the grand scheme of things it's irrelevant.
Then there's stuff that is in decline but needed anyway. Like being able to read a book.
Maybe you could express this as a 2D graph, where X is how much people know it and Y is how much people need to know it.
That actually had substantial negative consequences that still go mostly unrecognized. MS Word was an improvement over typewriters - such a big improvement, in fact, that it allowed people to do things they previously wouldn't, including things they'd pay other people to do. This is actually a bigger deal than it sounds.
In short, office productivity tools allowed people to do things they'd otherwise delegate to others. You could write memos and reports yourself, instead of asking your secretary. You could manage your calendar and tasks yourself, instead of having someone else do it for you. You could design your own presentations quickly, instead of asking graphics department for help. And so on, and so on.
What happened then is that all those specialized departments got downsized; you now have to write your own memos and manage your own calendar, because there are no secretaries around to do it for you. Same for graphics, same for communication, same for expense reporting, etc. Specialized roles disappeared, and along with them the salaries they commanded - but the work they did did not go away. Instead, it got spread out and distributed among everyone else, in tiny pieces - tiny enough to not be visible in the books; also tiny enough to not benefit from specialization of labor.
Now apply this pattern to all other categories of software, especially anything that lets you do yourself the things you'd pay others to do before.
And then people are surprised why actual productivity gains didn't follow expectations at scale, despite all the computerization. That's because a chunk of expectations are just an accounting trick. Money saved on salaries gets counted; costs of the same work being less efficient and added to everyone else's workload (including non-linear effect of reducing focus) are not counted.
I'm forming an opinion that this exact problem is actually THE problem that people keep ignoring because it is compensated for by the burnout of people who care.
We talk a lot about enshittification. But we also build tools that do the work of a human specialist at (say) 85% of the quality of a human specialist (much faster and much more cheaply; that is the point).
These tools operate with or without time and effort from another non-specialist person. In the case that another human needs to do SOME work they didn't have to previously, this is effectively the definition of overwork in the presence of the same expectations.
This other person must now be the executor of whatever that work is because hiring a specialist in that area does not make financial sense.
And so gradually we erode the quality of all the intersectional work 15% (for example) at a time, while adding a small amount of work to the remaining (fewer) people.
Now maybe we can build a tool that is 99.9% the quality of a human for negligible cost. But it still doesn't take very many multiplications of 99.9% with itself to end up with shit.
Yes, and I feel stupid every time I have to do a task I am not specialised in, purely because I have to educate myself all over again from the basics to get the job done. Like fixing a leaking tap: I know theoretically where the issue might lie, but by God it takes an eternity to fix because I don't have the right tools lying around and I haven't built the dexterity required to do it correctly.