joatmon-snoo's comments | Hacker News

The argument for not using electric sharpeners is that they (1) cut down the lifetime of your knife substantially and (2) do a mediocre job of sharpening.

Mechanically, it's just motorized high-abrasive spinning discs at preset angles. So rather than getting a good edge by manually taking a few microns of material off, you get an OK edge by taking 0.2mm off at a time. (If 0.2mm doesn't sound like a lot, think about how many mm wide your knife is.)

---

I'm personally 50-50 on this advice: most people don't sharpen their knives at all, and I think people are better off getting 10 OK years out of a knife than 50 terrible years out of it.

I'm also not willing to learn how to use a whetstone, so I landed in the middle on this: https://worksharptools.com/products/precision-adjust-knife-s...


I still sharpen my knives on a whetstone, but given the general cost trajectory of most manufactured items, I've decided that I'm okay if I wear out my knives. Buying a new chef's knife in 10 years is basically free on a per-day-of-use basis.

(I say that, but I'm still using knives that mostly range from 25 to 50 years old - though some didn't get sharpened enough when they belonged to our parents and grandparents.)


I landed on using a diamond stone with 300 grit and 1000 grit. Unlike whetstones, they never need to be flattened. I just use one of those cheap plastic angle guides; after a bit of practice you learn to hold the angle well enough. Finish with a leather strop and some polishing compound, and I can keep my knives shaving-sharp with only a few minutes' effort before I cook.


There's a lot of tooling built on static binaries:

- Google-Wide Profiling: the core C++ team can collect data on what fraction of fleet-wide CPU is spent in absl::flat_hash_map re-bucketing (you can find papers on this publicly)

- crashdump telemetry

- dapper stack trace -> codesearch

Borg literally had to pin the bash version because letting it float caused bugs. I can't imagine how much harder debugging L7 proxy issues would be if I had to follow a .so rabbit hole.

I can believe shrinking binary size would solve a lot of problems, and I can imagine ways to solve the .so versioning problem, but for every problem you mention I can name multiple other probable causes (e.g., was startup time really execvp time, or was it networked deps like FFs?).


We are missing tooling to partition a huge binary into a few larger shared objects.

As my post https://maskray.me/blog/2023-05-14-relocation-overflow-and-c... discusses (linked by the author - thanks! Though I maintain lld/ELF rather than "wrote" it; it's the engineering work of many folks).

Quoting the relevant paragraphs below (with a small worked example appended at the end):

## Static linking

In this section, we will deviate slightly from the main topic to discuss static linking. By including all dependencies within the executable itself, it can run without relying on external shared objects. This eliminates the potential risks associated with updating dependencies separately.

Certain users prefer static linking or mostly static linking for deployment convenience and performance reasons:

* Link-time optimization is more effective when all dependencies are known. Providing shared object information during executable optimization is possible, but it may not be a worthwhile engineering effort.

* Profiling techniques are more efficient when dealing with a single executable.

* The traditional ELF dynamic linking approach incurs overhead to support [symbol interposition](https://maskray.me/blog/2021-05-16-elf-interposition-and-bsy...).

* Dynamic linking involves PLT and GOT, which can introduce additional overhead. Static linking eliminates this overhead.

* Loading libraries in the dynamic loader has a time complexity `O(|libs|^2*|libname|)`. The existing implementations are designed to handle tens of shared objects, rather than a thousand or more.

Furthermore, the current lack of techniques to partition an executable into a few larger shared objects, as opposed to numerous smaller shared objects, exacerbates the overhead issue.

In scenarios where the distributed program contains a significant amount of code (related: software bloat), employing full or mostly static linking can result in very large executable files. Consequently, certain relocations may be close to the distance limit, and even a minor disruption (e.g. adding a function or introducing a dependency) can trigger relocation overflow linker errors.
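
To make the interposition and PLT/GOT points above concrete, here is a minimal sketch, assuming Linux/glibc (the file name, the wrapped symbol, and the build commands are illustrative only). Any shared object loaded earlier in link-map order can hijack a dynamically resolved libc call; a fully static binary has already bound the call at link time, so neither this machinery nor the PLT/GOT indirection that enables it applies:

    /* interpose.c - build and run roughly like:
         cc -shared -fPIC interpose.c -o interpose.so -ldl
         LD_PRELOAD=./interpose.so ./some_dynamically_linked_binary
       Every dynamically resolved call to puts() in that binary now
       goes through this wrapper before reaching libc. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int puts(const char *s) {
        /* find the next definition of puts in link-map order (libc's) */
        int (*real_puts)(const char *) =
            (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
        fprintf(stderr, "[interposed] ");
        return real_puts(s);
    }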


> We are missing tooling to partition a huge binary into a few larger shared objects

Those who do not understand dynamic linking are doomed to reinvent it.


There’s no way my proxy binary actually requires 25GB of code, or even the 3GB it is. Sounds to me like the answer is a tree shaker.


Google implemented the C++ equivalent of a tree shaker in their build system around 2009.
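
For anyone wondering what a "tree shaker" means at the native-binary level: the standard open-toolchain version of the idea (not claiming this is what Google's build system did) is section-level garbage collection - compile each function into its own section, then let the linker discard the sections nothing references. A minimal sketch with the usual GNU/LLVM flags:

    /* deadcode.c - unused() lands in its own .text.unused section and
       is dropped by the linker because nothing references it.
       Typical build (flag names per GNU/LLVM toolchains):
         cc -ffunction-sections -fdata-sections -c deadcode.c
         cc -Wl,--gc-sections -Wl,--print-gc-sections deadcode.o -o demo
       --print-gc-sections logs each section that got shaken out. */
    #include <stdio.h>

    void unused(void) { puts("never referenced, never kept"); }

    int main(void) {
        puts("only code reachable from main survives --gc-sections");
        return 0;
    }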


The front-end services that need to be "fast" AFAIK probably include nearly all the services you need to avoid hops -- so you can't really shake that much away.


(author here) To be more specific, here's a benchmark that we ran last year, where we compared schema-aligned parsing against constrained decoding (then called "Function Calling (Strict)", the orange ƒ): https://boundaryml.com/blog/sota-function-calling


I wonder what it would look like if you redid the benchmarks, testing against models with reasoning effort set to various values. Maybe structured output is only worse if the model isn't allowed to do reasoning first?


This setting is new and was introduced in response to the first round of Shai-Hulud attacks.


Google never asked a volunteer for a fix.

This is part of Google’s standard disclosure policy: it gets disclosed within 90 days starting from confirmation+contact.

If ffmpeg didn’t want to fix it, they could’ve just let the CVE get opened.


They do.


Full build with all the codecs, or a custom build with a limited vetted set?


Does it matter?

Like, I don't expect Google to deliver patches for FFmpeg beyond bug fixes or features that directly benefit them, but that's the least you can expect.


It matters to Google if they process publicly submitted videos using FFmpeg codecs that can be exploited.

One would expect Google to only use FFmpeg with vetted codecs, and to either reject videos with codecs that rely on untrusted FFmpeg modules or sandbox any such processing, both for increased safety and perhaps to occasionally find new malware "in the wild".


No, this is the unfortunate reality of “ffmpeg is maintained by volunteers” and “CVE discovered on specific untrusted input”.

Google’s AI system is no different from the OSS-Fuzz project of yesteryear: it ensures that the underlying bug is concretely reproducible before filing a report. The 90-day disclosure window is standard disclosure policy and applies equally to hobby projects and Google Chrome.


Yeah, it's actually a great bug report. Reproducible and guaranteed to be an actual problem (regardless of how small the problem is considered by the devs). Just seems irresponsible to encourage people not to file bug reports if it's "insignificant". Why even accept reports then?


“This is broken, here’s how I fixed it”

vs. “this is broken, you have 90 days to fix it”

If you can’t see the difference, you’re part of the existential threat to Free software that comes from the trillion-dollar industries that just take.


> you have 90 days to fix it

Or else what? They release the report? That's standard and ffmpeg is open source anyway, anybody can find the bug on their own. There's no threat here.

If you're mad about companies using your software, then don't release it with a license allowing them to use it. Simple as that. I don't understand how people can complain about companies doing exactly what you allowed them to do.


This is great to see, much appreciated for the disclosure!


We’ve had a lot of success implementing schema-aligned parsing in BAML, a DSL that we’ve built to simplify this problem.

We actually don’t like constrained generation as an approach - among other issues, it limits your ability to use reasoning - and instead the technique we’re using is algorithm-driven, error-tolerant output parsing.

https://boundaryml.com/
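
To give a flavor of what "error-tolerant output parsing" means (a toy sketch only - this is not BAML's actual algorithm, and the function name and sample output are made up): instead of demanding byte-perfect JSON from the model, you tolerate the junk around it - prose preambles, markdown fences, trailing commentary - and recover the payload:

    /* Recover the outermost {...} span from messy model output. */
    #include <stdio.h>
    #include <string.h>

    static const char *extract_json(const char *raw, size_t *len) {
        const char *start = strchr(raw, '{');   /* first opening brace */
        const char *end = strrchr(raw, '}');    /* last closing brace  */
        if (!start || !end || end < start)
            return NULL;
        *len = (size_t)(end - start) + 1;
        return start;
    }

    int main(void) {
        const char *llm_output =
            "Sure! Here is the result:\n```json\n{\"name\": \"Ada\", \"age\": 36}\n```";
        size_t len;
        const char *json = extract_json(llm_output, &len);
        if (json)
            printf("%.*s\n", (int)len, json);   /* {"name": "Ada", "age": 36} */
        return 0;
    }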


Love your work, thanks! The 12-factor agent implementation uses your tools too.


For folks who don't know what Magic Lantern is:

> Magic Lantern is a free software add-on that runs from the SD/CF card and adds a host of new features to Canon EOS cameras that weren't included from the factory by Canon.

It also backports new features to old Canon cameras that aren't supported anymore, and is generally just a really impressive feat of both (1) reverse engineering and (2) keeping old hardware relevant and useful.


More backstory: before the modern generation of digital cameras, Magic Lantern was one of the early ways to "juice" more power out of early generations of Canon cameras, including RAW video recording.

Today, cameras from Blackmagic and editing platforms like DaVinci Resolve handle RAW seamlessly, but it wasn't like this even a few years ago.


Funny, when I saw it uses a .fm TLD, I thought it was some online radio.


They were trendy at the time :D

I think possibly someone thought it sounded a bit like firmware?


Same :) I had in mind Groove Salad from soma.fm


last.fm


"Scrobbles" will always be a funny word to me.


sub.fm


I wish there were similar projects for other camera brands like Fujifilm. Given the abilities of ML on old Canon cameras, we know there is a lot of potential in those old machines across other brands. It is also an "eco"-friendly approach that should be supported.


I just switched from Canon to Fujifilm due to enshittification. Canon started charging $5/mo to get clean video out of their cameras. We're plenty screwed if manufacturers decide that cameras are subscriptions and not glass.


Fujis are great, but the ecosystem is definitely smaller, and I've found some software still doesn't support debayering X-Trans.


Yeah, like Adobe. Whatever method they use has been peak "worm" creation for over 10 years. Capture One and dcraw are head and shoulders better.



It also has a scripting system and is damn fun to mess with.

