> Git provides file versioning services only, whereas Fossil adds an integrated wiki, ticketing & bug tracking, embedded documentation, technical notes, a web forum, and a chat service [...]

I like the idea of having all of those within the actual VCS, mostly because with Git you need centralized services like GitHub to provide that.

But I have to ask: Is it really a good idea? Seems like feature creep motivated by the wants of a single project (SQLite).

All of those could be (albeit awkwardly) backed with a git repo and a cron job. Wiki? Just make a repo with a bunch of Markdown or this-week's-favorite-markup-language files. Ticketing & bug tracking? Again, just a Markdown file for every ticket. Embedded documentation & technical notes? Those are just special wiki pages with different attributes. Forum and chat service? Do you want your VCS to do that? I get being able to hyperlink between files and conversations, but still.


I want it embedded in git simply to break the hold GitHub has. We have this fantastic distributed, fault-tolerant DVCS that gets funneled through at worst 1 service, at best maybe 3 or 4.

I'd love to clone a repo and be able to view all the reasoning behind commits with the context of issues too. I know the commit message should cover this, but sometimes it doesn't, or it's too much context, or the context is limited to the opinion of the committer. I think all that information is relevant to projects and should have some chance to live alongside them. Stuff like git-bug exists, but then you still need participation from other people.

I really love the idea of radicle.xyz, which is git + p2p + issues & patches (called `COB`s, collaborative objects) all in your repo, but getting the buy-in of the wider population seems extremely difficult, if not impossible. I think part of the attraction here for me specifically is nostalgia: it feels like it's invoking the 90s/00s, where it was all a big mesh network, information wanted to be free, and you couldn't stop the signal.

Fossil also seems cool but the rest of the world is tied to git and I'm tied to jj now. I guess I really wish git themselves [sic] would push something forward, I think that's the only way it would really get broad acceptance. Forges could adopt it and try and special-sauce parts but still let you push/pull "COB"s.


> Stuff like git-bug exists, but then you still need participation from other people.

The plan is to 1) finish the web UI and 2) accept external auth (e.g. GitHub OAuth). Once done, anyone can trivially host their own forge publicly and accept public contributions without any buy-in effort. Then, if a user wants to go native, they just install git-bug locally.


Whoa, git-bug is still being developed, awesome! I wonder how difficult it would be to add other tables to it (I cannot help but think about bug trackers as being a database with a frontend like Access, and many limitations...), in particular to have Messages (for messages) and Discussions (for hierarchical lists of message references). Now that git has reftable, maybe this sort of abuse would actually work...

Assuming that by "table" you mean another "document type" ... pretty easily. There is a reusable CRDT-like data structure that you can use to define your own thing. You do that by defining the operations that can happen on it and what they do. You don't have to handle the interaction with git or the conflict resolution.
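
To make that concrete, here is a rough sketch of the operation-based idea (git-bug itself is written in Go; this is just an illustrative Rust toy, not its actual API): you define an entity and the operations that can mutate it, and the framework takes care of storing the operation log in git and resolving conflicts.

    // Illustrative toy, not git-bug's real API: an operation-based entity.
    #[derive(Debug, Default)]
    struct Discussion {
        title: String,
        messages: Vec<String>,
    }

    // The operations that are allowed to happen on the entity.
    enum DiscussionOp {
        SetTitle(String),
        AddMessage(String),
    }

    impl Discussion {
        // Replaying the stored operation log rebuilds the current state;
        // the surrounding framework would handle git storage and merging.
        fn apply(&mut self, op: &DiscussionOp) {
            match op {
                DiscussionOp::SetTitle(t) => self.title = t.clone(),
                DiscussionOp::AddMessage(m) => self.messages.push(m.clone()),
            }
        }
    }

    fn main() {
        let log = vec![
            DiscussionOp::SetTitle("reftable abuse?".to_string()),
            DiscussionOp::AddMessage("Would extra document types work?".to_string()),
        ];
        let mut d = Discussion::default();
        for op in &log {
            d.apply(op);
        }
        println!("{d:?}");
    }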

"Feature creep" is hard to characterize. If the project needs and uses it, is it really "feature creep"?

Or, from Wikipedia, "The definition of what qualifies as "feature creep" varies among end users, where what is perceived as such by some users may be considered practical functionality by others." - https://en.wikipedia.org/wiki/Feature_creep

Hipp (the original SQLite author) also developed his own parser generator (Lemon) and his own editor (e). The former is also used by other projects.

Where do you store the different attributes? In the file-system? How do you manage consistency? Why put up with awkward cron solutions when you have a fully ACID database system right there to work with, which is portable across OSes, including ones which don't have cron?

If it helps any, don't think of it as a VCS but as an SCM system - one which includes version control.


I prefer not to have strict version control over the entire state of all work tickets. It sort of adds friction.

I really like the idea of making your company's office a publicly accessible coworking space. It does fit Kagi's slogan of 'Humanize the web'; imagine if you could walk into Google and chat with Larry Page.


This works in RustRover as well! Super useful.


Rust's type system specifically facilitates more powerful tools: https://github.com/willcrichton/flowistry


European here. I've been to the US and holy mother of Jesus, you put sooooo much sugar into everything. I had to buy 'European' bread because your normal bread made my gums hurt, and even then that was the sweetest bread I've ever eaten.

Seriously, when your one large Oreo shake has 2600 calories, no wonder your obesity rate is 35% and isn't slowing. Driving to the toilet instead of walking also doesn't help. Then your hospitals get overrun with preventable diseases and healthcare gets expensive. This isn't a 'caring' problem when getting fat is the only option for most people; the way most people live is specifically designed to make you obese.


Is this hyperbole or do Americans actually drive rather than walk to toilets? Not being hostile, genuinely curious.


A little hyperbole, but as an American, the idea that the average person in my country would rather drive somewhere than feel inconvenienced by a short walk is very accurate.


> Setting the value of href navigates to the provided URL [0]

It would have been caught because this API (setters) is impossible with Rust. At best, you'd have a .set_href(String).await, which would suspend the caller until the location has been updated and therefore the value stabilized. At worst, you'd have a public .href variable, but because the setter pattern is impossible, you know there must be some process checking and scheduling updates.
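
To illustrate, here's a purely hypothetical Rust-flavored sketch (not any real crate's API; `Location` and `NavigationError` are made up). The point is just that mutation has to go through an explicit, awaitable operation rather than a silent setter:

    // Hypothetical sketch only.
    pub struct Location {
        href: String,
    }

    pub struct NavigationError;

    impl Location {
        pub fn href(&self) -> &str {
            &self.href
        }

        // No fire-and-forget setter: navigation is an explicit async
        // operation, and awaiting it means the new value has actually
        // taken effect.
        pub async fn set_href(&mut self, url: String) -> Result<(), NavigationError> {
            // ... ask the browser to navigate and wait for it to settle ...
            self.href = url;
            Ok(())
        }
    }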

[0]: https://developer.mozilla.org/en-US/docs/Web/API/Location/hr...


Yeah, metric is cool and all, you can divide by ten and multiply by ten. But even better would be a hexadecimal system so that you could halve, quarter and eighth it. Plus it's 4^2, so it's a perfect square \s


Instead of using a new PNG standard, I'd still rather use JPEG XL just because it has progressive decoding. And you know, whilst looking like png, being as small as webp, supporting HDR and animations, and having even faster decoding speed.

https://dennisforbes.ca/articles/jpegxl_just_won_the_image_w...


JPEG XL definitely has advantages over PNG but there is one serious seemingly insurmountable obstacle:

https://caniuse.com/jpegxl

Nothing really supports it. The latest Safari at least supports it without a feature flag or anything, but it doesn't support JPEG XL animations.

To be fair, nothing supports a theoretical PNG with Zstandard compression either. While that would be an obstacle to using PNG with Zstandard for a while, I kinda suspect it wouldn't be that long of a wait because many things that support PNG today also support Zstandard anyways, so it's not a huge leap for them to add Zstandard support to their PNG codecs. Adding JPEG-XL support is a relatively bigger ticket that has struggled to cross the finish line.

The thing I'm really surprised about is that you still can't use arithmetic coding with JPEG. I think the original reason is due to patents, but I don't think there have been active patents around that in years now.


Every new image codec faces this challenge. PNG + Zstandard would look similar. The ones that succeeded have managed it by piggybacking off a video codec, like https://caniuse.com/avif.


It is possible to polyfill an image format; this was done with FLIF [1][2]. Not that it means FLIF got the traction required to be used much anywhere outside its own demos…

It is also possible to detect support and provide different formats (so those supporting a new format get the benefit of smaller data transfers or other features), though this rarely happens, as it usually isn't enough of an issue to warrant the extra complication.

----

[1] Main info: https://flif.info/

[2] Demo with polyfill: https://uprootlabs.github.io/poly-flif/


Any polyfill requires JavaScript which is a dealbreaker for something as critical as image display, IMO.

Would be interesting if you could provide a decoder for <picture> tags to change the formats it supports, but I don't see how you could do that without the browser downloading the PNG/JPEG version first, thus negating any bandwidth benefits.


Depending on the site it might be practical to detect JS on first request and set a cookie to indicate that the new format (and polyfill) can be sent on subsequent requests instead of the more common format.

Or for a compiled-to-static site, just use <NOSCRIPT> to let those with JS disabled go off to the version compiled without support for (or need of) such things.
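
A rough sketch of the cookie idea from the first paragraph (the `Request`/`Response` types here are made-up placeholders standing in for whatever web framework you use, not a real API):

    use std::collections::HashMap;

    // Made-up placeholder types for illustration.
    struct Request {
        cookies: HashMap<String, String>,
    }

    struct Response {
        content_type: &'static str,
        body_path: &'static str,
    }

    // On the very first request the page ships a tiny JS probe that sets
    // `imgfmt=jxl` if the new format (or its polyfill) works; every later
    // request can then be answered with the smaller format.
    fn serve_hero_image(req: &Request) -> Response {
        match req.cookies.get("imgfmt").map(String::as_str) {
            Some("jxl") => Response { content_type: "image/jxl", body_path: "hero.jxl" },
            _ => Response { content_type: "image/png", body_path: "hero.png" },
        }
    }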


Why would PNG + ZStandard have a harder time than AVIF? In practice, AVIF needs more new code than PNG + ZStandard would.


I'm just guessing, but bumping a library version to include new code versus integrating a separate library might be the differentiating factor.


The zstd library is already included by most major browsers since it is a supported content encoding. Though I guess that does leave out Safari, but Safari should probably support Zstd for that, too. (I would've preferred that over Brotli, but oh well.)


Btw, could you 'just' use no compression on this level in the PNG, and let the transport compression handle it?

So on paper (and on disk) your PNG would be larger, but the number of bits transmitted would be almost the same as using Zstd?

EDIT: similarly, your filesystem could handle the on-disk compression.

This might work for something like PNG, but would work less well for something like JPG, where the compression part is much more domain specific to image data (as far as I am aware).


If there is a particular reason why that wouldn't work, I'm not aware of it. Seems like you would eat a very tiny cost for deflate literal overhead (a few bytes per 65,535 bytes of literal data?) but maybe you would wind up saving a few bytes from also compressing the headers.


5 bytes per 65,535-byte stored block, i.e. 5/65,535 ≈ 0.0076% overhead.


zstd compresses less, so you wait a bit more for your data


> but there is one serious seemingly insurmountable obstacle

It can be surmounted with WebAssembly: https://github.com/niutech/jxl.js/

Single thread demo: https://niutech.github.io/jxl.js/

Multithread demo: https://niutech.github.io/jxl.js/multithread/


Maybe for websites like Instagram that consist primarily of images. For everywhere else you have to amortize the cost of the download over the savings for the number of images in an average browsing session, as browsers segment the cache so you can't assume it will be available locally hot.


Actually I wonder why, in general, more decoders aren't just put into WebAssembly and kept 'hot' on demand. Couldn't this also be an extension? Wouldn't that reduce the attack surface? I remember a time when most video and Flash were plugins. People would download the stuff. On the other hand, using a public CDN would at least keep the traffic down for the one hosting the website.


Browser makers could easily let resources opt out of cache segmentation and then if everyone agreed on a CDN, or a resource integrity hash of a decoder, the wasm could be hot in the cache and used to extend the browser without Chrome needing to maintain it or worry about extra sandboxing.

They don't do it because they don't want people extending the web platform outside their control.


As I understand it JPEG XL has a lot of interest in medical imaging and is coming to DICOM. After it's in DICOM, whichever browser supports it best will rule hospitals.


Ha, yeah right, hospitals are still running IE11 in some places in the US


Yes, you're right. It'd be lovely to not need to install a DICOM viewer plugin thing.


That's because people have allowed the accumulation of power and control by Big Tech. Features in and capabilities of end-user operating systems and browsers are gatekept by a handful of people in Big Tech. There is no free market there. Winners are picked by politics, not merit. Switching costs are extreme due to vendor lock-in and carefully engineered friction.

The justification for WebP in Chrome over JPEG-XL was pure hand-waving nonsense, not technical merit. The reality is they would not dare cede any control or influence to the JPEG-XL working group.

Hell the EU is CONSIDERING mandatory attestation driven by whitelisted signed phone firmwares for certain essential activities. Freedom of choice is an illusion.


Webp is a lot older than jpg xl, right?


It was behind a feature flag and then removed? I guess that's where the skepticism comes from


It's also because supporting features is work that takes time away from other bug fixes and other features


> I kinda suspect it wouldn't be that long of a wait

Yeah... guess again. It took Chrome 13 years to support animated PNG - the last major change to PNG.


APNG wasn't part of PNG itself until very recently, so I'd argue it's kind-of neither here nor there.


Maybe they were focused on Webp?


It's better when the way it works is "this format is good, therefore we will support it" rather than "people support this format, therefore it is good".


> The thing I'm really surprised about is that you still can't use arithmetic coding with JPEG.

I was under the impression libjpeg added support in 2009 (in v7). I'd assume most things support it by now.


Believe it or not, last I checked, many browsers and some other software (file managers, etc.) still couldn't do anything with JPEG files that have arithmetic coding. Apparently, although I haven't tried this myself, Adobe Photoshop also specifically doesn't support it.


Arithmetic coding decodes 1 bit at a time, usually in such a way that you can’t do two bits or more with SIMD instructions. So it will be slow and energy inefficient.


Decompression is limited by memory bandwidth IME, which means that more efficient compression is (almost) always more power-efficient too.

(I don't have numbers for this, but it was generally agreed by the x264 team at one point.)


This isn't necessarily true. zstd uses ANS (asymmetric numeral systems), which is a type of arithmetic coding that is very efficient to decode.


Nice to learn about. It's good to know the field has progressed; however, the context focused on JPEG, where my point does apply.


> Nothing really supports it.

Everything supports it, except web browsers.


JPEG-XL is supported by a lot of the most important parts of the ecosystem (image editors and the major desktop operating systems) but it is a long way away from "everything". Browsers are the most major omission, but given their relative importance here it is not a small one. JPEG-XL is dead in the water until that problem can be resolved.

If Firefox is anything to go off of, the most rational explanation here seems to just be that adding a >100,000 line multi-threaded C++ codebase as a dependency for something that parses untrusted user inputs in a critical context like a web browser is undesirable at this point in the game (other codecs remain a liability but at least have seen extensive battle-testing and fuzzing over the years.) I reckon this is probably the main reason why there has been limited adoption so far. Apple seems not to mind too much, but I am guessing they've just put so much into sandboxing Webkit and image codecs already that they are relatively less concerned with whether or not there are memory safety issues in the codec... but that's just a guess.


Apple also adopted JPEG-XL across their entire software stack. It's supported throughout the OS, and by pretty much every application they develop, so I'm guessing they sunk a fair bit of time/money into hardening their codec


> >100,000 line multi-threaded C++

W. T. F. Yeah, if this is the state of the reference implementation, then I'm against JPEG-XL just on moral grounds.


Only because it's both the reference encoder and decoder, and the encoder tends to be a lot more complex than the decoder. (Source: I have developed a partial JPEG XL decoder in the past, and it was <10K lines of C code.)


> reference

They aren't going to give you two problems to solve/consider: clever code and novel design.


On Linux the browser can and should link dynamically against the system library for image formats.


webp still got a vulnerability


JPEG XL support will probably resemble JPEG 2000 support after enough time has passed:

https://caniuse.com/jpeg2000


Except JXL has actual value, unlike J2K, which wasn't that much more efficient than JPEG and was much slower.


So far, it is following the same pattern. Safari adopts it, no one else does and then one day, Safari drops it. It is currently on step 2. When step 3 occurs, the cycle will be complete.


except DNG, ProRAW, DICOM, GDAL, TIFF, Apple's and Microsoft's operating systems, Linux distros, and Windows support JPEG XL

otherwise the same


Look at the caniuse.com links. It is following the same pattern as JPEG 2000 in those charts. That is a fact.


> The thing I'm really surprised about is that you still can't use arithmetic coding with JPEG

Or AVC YUV444 with Firefox (https://bugzilla.mozilla.org/show_bug.cgi?id=1368063). Fortunately, AV1 YUV444 seems to be supported.


PNG with ZStandard or Brotli is much worse than WebP lossless.


You can use a polyfill.


I don't like progressive decoding. Lots of people don't realize that it's a thing, and complain that my photo is blurry when it simply hasn't loaded fully yet. If it just loaded normally from top to bottom, it would be obvious whether it has loaded or not, and people will be able to more accurately judge the quality of the image. That's why I always save my JPEGs as baseline encoding.


Is there no good progress indicator that gets you the best of both worlds - instant image recognition and the ability to wait and get better quality?


Web browsers already have code in place for WebP (lossless, VP8) and AVIF (AV1, which also supports animation), as well as classic JPEG and PNG, and maybe also HEIC (HEVC/H.265)... what benefit do we get by adding yet another file format if all the use cases are already covered by the existing formats? That said, I do like JPEG-XL, but I also kind of understand the hesitation to adopt it. I imagine if Apple's push for it continues, then it is just a matter of time before it gets supported more broadly in Chrome etc.


Avif is cute but using that as an excuse to not add jxl is a travesty. At the time either one of those could have been added, jxl should have been the choice.

The biggest benefit is that it's actually designed as an image format. All the video offshoots have massive compromises made so they can be decoded in 15 milliseconds in hardware.

The ability to shrink old jpegs with zero generation loss is pretty good too.


The benefits are better quality, higher speed, and more features like progressive decoding. JXL is a single, multi-trick pony, unlike the others.

Good summary: https://cloudinary.com/blog/time_for_next_gen_codecs_to_deth...


Doesn't PNG have progressive decoding? I.e. adam7 algorithm


It does, using Adam7: https://en.wikipedia.org/wiki/Adam7_algorithm
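
For reference, here is the 8x8 Adam7 pass pattern (each entry is the pass in which that pixel is transmitted), written out as a Rust constant purely for illustration:

    // Adam7: the image is sent in 7 passes over repeating 8x8 tiles.
    // Each number below is the pass in which that pixel is transmitted.
    const ADAM7_PASSES: [[u8; 8]; 8] = [
        [1, 6, 4, 6, 2, 6, 4, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
        [5, 6, 5, 6, 5, 6, 5, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
        [3, 6, 4, 6, 3, 6, 4, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
        [5, 6, 5, 6, 5, 6, 5, 6],
        [7, 7, 7, 7, 7, 7, 7, 7],
    ];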

The recently released PNG 3 also supports HDR and animations: https://www.w3.org/TR/png-3/


> The recently released PNG 3 also supports HDR and animations: https://www.w3.org/TR/png-3/

APNG isn't recent so much as the specs were merged together. APNG will be 21 years old in a few weeks.


True, but https://news.ycombinator.com/item?id=44802079 presumably holds the opinion that APNG != PNG, so I mentioned PNG 3 to counteract that. Animated PNGs being officially PNG is recent.


Adam7 is interlacing, not progressive decoding (i.e. it cannot be used to selectively decode a part of the image). It also interacts extremely poorly with compression; there is no good reason to ever use it.


Jason Booth (the author) also has a YouTube channel talking about similar topics. I really liked his 'Practical Optimizations' video: https://youtu.be/NAVbI1HIzCE


Thanks for the cool video link!


Apple's Face ID uses what is essentially a 3D camera, a simple 2D color camera cannot compare to that in terms of accuracy.


Windows also uses infrared LEDs to light your face and prevent a flat photo from being recognised as a face.


Windows is an operating system and does not depend on specific hardware being present.


Incorrect. Windows Hello uses special hardware.


Right, Windows Hello requires it for facial auth, Windows itself does not. Hello still works, just you have to authenticate with a different method if the hardware isn't present.


How little is your time worth that you spend it making pedantic little corrections like this?


Bored at work lol


There are definitely webcams that work with Windows Hello, and those that don't.


Apple has clearly done a lot of work in this space and have decided to retain Touch ID on Macbooks. I think this is fairly instructive.


That was primarily because the Face ID sensor stack is too thick to fit in the laptop lid.


The point being that they think they need those sensors in order to create a secure system.


AFAIK Pixel phones, including the Pixel 9, only use 2D images for face unlock. So it's definitely possible to reach mainstream quality with conventional cameras.

(Unless you'd argue that the face unlock found on Pixels is not passable either)


I don't know how Google does it, but it's possible to extract 3D information from a 2D sensor. You either need variable focus or phase detection in the sensor.


It is possible to infer phase from second order intensity via the Huygens-Steiner theorem for rigid body rotation, FWIU: https://news.ycombinator.com/item?id=42663342 .. https://news.ycombinator.com/item?id=37226121#37226160

Doesn't that mean that any camera can be used to infer phase (and thus depth for face ID, which is a high risk application)?

> variable focus

A light field camera (with "infinite" focus) would also work.


Very cool. Yes, probably? I'll have to think about the relationship between image quality and the fidelity of the derived phase measurement, because it's not obvious how good a camera needs to be to be "good enough" for a secure system.

Light field? I remember Lytro! Such cool technology that never found its niche. https://en.wikipedia.org/wiki/Lytro

Is anybody making a successor product?


I guess the task is to design an experiment to test the error between phase inferred from intensity in a digital camera (by Huygens-Steiner and a barycentric coordinate map) and far more expensive photonic phase sensors.

Is (framerate-1 Hz) a limit, due to the discrete derivative being null for the first n points?

Fortunately this article explained the implications of said breakthrough; "Physicists use a 350-year-old theorem [Huygens-Steiner] to reveal new properties of light waves" https://phys.org/news/2023-08-physicists-year-old-theorem-re... :

> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.

IDK what happened with wave field cameras like the Lytro. They're possibly useful for face ID, too?

"SAR wavefield". There's a thing.

From https://news.ycombinator.com/item?id=32819838 :

> Wave Field recordings are probably [would probably be] the most complete known descriptions of the brain and its nonlinear fields?


> I read somewhere that every augmentation is also an amputation. Progress in tech means we are constantly lobotomising a majority of the population.

Just thought about something:

There are a few sides to this. There is innovation that just makes things easier but doesn't amputate, like typewriters vs Word (took me a while to come up with an example; essentially it's just evolution). Then there are things that are so old it's useless to know them. Like making butter: sure, you can do it if you want to, might be fun, but in the grand scheme of things it's irrelevant. Then there's stuff that is in decline but needed anyway. Like being able to read a book.

Maybe you could express this as a 2D graph, where X is how much people know it and Y is how much people need to know it.


> typing machines vs word

That actually had substantial negative consequences that still go mostly unrecognized. MS Word was an improvement over typewriters - such a big improvement, in fact, that it allowed people to do things they previously wouldn't, including things they'd pay other people to do. This is actually a bigger deal than it sounds.

In short, office productivity tools allowed people to do things they'd otherwise delegate to others. You could write memos and reports yourself, instead of asking your secretary. You could manage your calendar and tasks yourself, instead of having someone else do it for you. You could design your own presentations quickly, instead of asking graphics department for help. And so on, and so on.

What happened then, all those specialized departments got downsized; you now have to write your own memos and manage your own calendar, because there are no secretaries around to do it for you. Same for graphics, same for communication, same for expense reporting, etc. Specialized roles disappeared, and along with them the salaries they commanded - but the work they did did not go away. Instead, it got spread out and distributed among everyone else, in tiny pieces - tiny enough, to not be visible in the books; also tiny enough to not benefit from specialization of labor.

Now apply this pattern to all other categories of software, especially anything that lets you do yourself the things you'd pay others to do before.

And then people are surprised why actual productivity gains didn't follow expectations at scale, despite all the computerization. That's because a chunk of expectations are just an accounting trick. Money saved on salaries gets counted; costs of the same work being less efficient and added to everyone else's workload (including non-linear effect of reducing focus) are not counted.


> but the work they did did not go away

I'm forming an opinion that this exact problem is actually THE problem that people keep ignoring because it is compensated for by the burnout of people who care.

We talk a lot about enshittification. But we also build tools that do the work of a human specialist at (say) 85% of the quality of a human specialist (much faster and much more cheaply; that is the point).

These tools operate with or without time and effort from another non-specialist person. In the case that another human needs to do SOME work they didn't have to previously, this is effectively the definition of overwork in the presence of the same expectations.

This other person must now be the executor of whatever that work is because hiring a specialist in that area does not make financial sense.

And so gradually we erode the quality of all the intersectional work 15% (for example) at a time, while adding a small amount of work to the remaining (fewer) people.

Now maybe we can build a tool that is 99.9% the quality of a human for negligible cost. But it still doesn't take very many multiplications of 99.9% with itself to end up with shit.


Yes, and I feel stupid every time I have to do a task I am not specialised in, purely because I have to educate myself all over again from the basics to get the job done. Like fixing a leaking tap: I know theoretically where the issue might lie, but by God it takes an eternity to fix because I don't have the right tools lying around and I haven't built the dexterity required to do it correctly.


Interesting point, I'll try and plot such a graph!

