The argument against electric sharpeners is that they (1) substantially cut down the lifetime of your knife and (2) do a mediocre job of sharpening anyway.
Mechanically, an electric sharpener is just motorized spinning discs of coarse abrasive at preset angles. So rather than getting a good edge by taking a few microns of material off manually, you get an OK edge by taking 0.2mm off at a time. (If 0.2mm doesn't sound like a lot, think about how many mm wide your knife is.)
---
I'm personally 50-50 on this advice: most people don't sharpen their knives at all, and I think people are better off getting 10 OK years out of a knife than 50 terrible years out of it.
I still sharpen my knives on a whetstone, but given the general cost trajectory of most manufactured items, I've decided I'm okay with wearing out my knives. Buying a new chef's knife every 10 years is basically free on a per-day-of-use basis (a $100 knife used daily works out to under three cents a day).
(I say that, but I'm still using knives that mostly range from 25 to 50 years old; some just didn't get sharpened enough when they belonged to our parents and grandparents.)
I landed on using a diamond stone with 300 grit and 1000 grit. Unlike whetstones, diamond plates never need to be flattened. I just use one of those cheap plastic angle guides; after a bit of practice you learn to hold the angle well enough. I finish with a leather strop and some polishing compound, and I can keep my knives shaving-sharp with only a few minutes' effort before I cook.
There's a lot of tooling built on static binaries:
- Google-wide profiling: the core C++ team can collect data on what share of fleet-wide CPU is spent in absl::flat_hash_map re-bucketing (you can find papers on this publicly)
- crashdump telemetry
- Dapper stack traces -> codesearch
Borg literally had to pin the bash version because letting it float caused bugs. I can't imagine how much harder debugging L7 proxy issues would be if I had to follow a .so rabbit hole.
I can believe shrinking binary size would solve a lot of problems, and I can imagine ways to solve the .so versioning problem, but for every problem you mention I can name multiple other probable causes (e.g. was startup time really execvp time, or was it networked deps like FFs?).
In this section, we take a brief detour from the main topic to discuss static linking.
By including all dependencies within the executable itself, a statically linked program can run without relying on external shared objects.
This eliminates the potential risks associated with updating dependencies separately.
Certain users prefer static linking or mostly static linking for deployment convenience and performance:
* Link-time optimization is more effective when all dependencies are known. Providing shared object information during executable optimization is possible, but it may not be a worthwhile engineering effort.
* Profiling techniques are more effective when dealing with a single executable.
* Dynamic linking routes cross-module calls through the PLT and data accesses through the GOT, adding an extra indirection (and a lazy-binding trip through the resolver on first call). Static linking eliminates this overhead.
* Loading libraries in the dynamic loader has a time complexity of `O(|libs|^2*|libname|)`. The existing implementations are designed to handle tens of shared objects, not a thousand or more (a toy illustration of the quadratic scan follows below).
Furthermore, the current lack of techniques to partition an executable into a few larger shared objects, as opposed to numerous smaller shared objects, exacerbates the overhead issue.
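To make the quadratic lookup cost concrete, here is a minimal Python sketch of the scan, assuming the classic ELF behavior of resolving each undefined symbol by walking the whole search scope in load order (this is nothing like real ld.so code, only the shape of the work):

```python
# Toy model of ELF-style symbol resolution (not a real loader).
# Each library is (defined_symbols, undefined_symbols); every undefined
# symbol is resolved by walking the loaded objects in order, so total
# work grows roughly as |libs|^2, times the cost of a name comparison.

def resolve_all(libs):
    comparisons = 0
    for _, undefined in libs:
        for symbol in undefined:
            for defined, _ in libs:  # walk the search scope front to back
                comparisons += 1     # one (hashed) name comparison per object
                if symbol in defined:
                    break
    return comparisons

# Worst-ish case: each of n libraries imports a symbol defined by the
# last one, so every resolution scans the entire search scope.
for n in (10, 100, 1000):
    libs = [({f"sym_{i}"}, {f"sym_{n - 1}"}) for i in range(n)]
    print(n, resolve_all(libs))  # 100, 10000, 1000000: quadratic growth
```

With tens of shared objects this cost is noise; with a thousand or more it becomes a visible slice of process startup.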
In scenarios where the distributed program contains a significant amount of code (related: software bloat), employing full or mostly static linking can result in very large executable files.
Consequently, certain relocations may be close to the distance limit, and even a minor change (e.g. adding a function or introducing a dependency) can trigger relocation overflow linker errors.
(author here) To be more specific, here's a benchmark that we ran last year, where we compared schema-aligned parsing against constrained decoding (then called "Function Calling (Strict)", the orange ƒ): https://boundaryml.com/blog/sota-function-calling
I wonder what it would look like if you redid the benchmarks, testing against models that have reasoning effort set to various values. Maybe structured output is only worse if the model isn't allowed to do reasoning first?
Like, I don't expect Google to deliver patches for FFmpeg beyond bug fixes or features that directly benefit them, but that's the least you can expect.
It matters to Google if they process publicly submitted videos using FFmpeg codecs that can be exploited.
One would expect Google to only use FFmpeg with vetted codecs and to either reject videos with codecs that have untrusted FFmpeg modules or to sandbox any such processing, both for increased safety and perhaps to occasionally find new malware "in the wild".
No, this is the unfortunate reality of “ffmpeg is maintained by volunteers” and “CVE discovered on specific untrusted input”.
Google’s AI system is no different than the oss-fuzz project of yesteryear: it ensures that the underlying bug is concretely reproducible before filing the bug. The 90-day disclosure window is standard disclosure policy and applies equally to hobby projects and Google Chrome.
Yeah, it's actually a great bug report. Reproducible and guaranteed to be an actual problem (regardless of how small the problem is considered by the devs). Just seems irresponsible to encourage people not to file bug reports if it's "insignificant". Why even accept reports then?
Or else what? They release the report? That's standard and ffmpeg is open source anyway, anybody can find the bug on their own. There's no threat here.
If you're mad about companies using your software, then don't release it with a license allowing them to use it. Simple as that. I don't understand how people can complain about companies doing exactly what you allowed them to do.
We’ve had a lot of success implementing schema-aligned parsing in BAML, a DSL that we’ve built to simplify this problem.
We actually don’t like constrained generation as an approach - among other issues it limits your ability to use reasoning - and instead the technique we’re using is algorithm-driven, error-tolerant output parsing.
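For a rough flavor of what error-tolerant output parsing can look like, here is a minimal Python sketch; the schema, helper names, and repair heuristics are made up for illustration and are nowhere near BAML's actual implementation:

```python
import json
import re

# Hypothetical toy schema: field name -> expected type.
SCHEMA = {"name": str, "age": int, "tags": list}

def extract_candidate(text: str) -> str:
    """Pull the first {...} block out of prose or a markdown code fence."""
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if fenced:
        return fenced.group(1)
    bare = re.search(r"\{.*\}", text, re.DOTALL)
    if bare:
        return bare.group(0)
    raise ValueError("no JSON object found in model output")

def tolerant_load(candidate: str) -> dict:
    """Try strict JSON first, then repair common LLM mistakes and retry."""
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        repaired = re.sub(r",\s*([}\]])", r"\1", candidate)  # trailing commas
        repaired = repaired.replace("'", '"')  # single quotes (toy heuristic)
        return json.loads(repaired)

def align_to_schema(data: dict) -> dict:
    """Coerce parsed fields toward the schema instead of rejecting them."""
    out = {}
    for field, typ in SCHEMA.items():
        value = data.get(field)
        if typ is int and isinstance(value, str):
            value = int(value.strip())  # "36" -> 36
        out[field] = value
    return out

reply = 'Sure! Here it is:\n```json\n{"name": "Ada", "age": "36", "tags": ["math",],}\n```'
print(align_to_schema(tolerant_load(extract_candidate(reply))))
# {'name': 'Ada', 'age': 36, 'tags': ['math']}
```

The point is that the model stays free to emit reasoning around the payload; the parser recovers and aligns whatever it can, rather than constraining the decoder token by token.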
> Magic Lantern is a free software add-on that runs from the SD/CF card and adds a host of new features to Canon EOS cameras that weren't included from the factory by Canon.
It also backports new features to old Canon cameras that aren't supported anymore, and is generally just a really impressive feat of both (1) reverse engineering and (2) keeping old hardware relevant and useful.
More backstory: before the modern generation of digital cameras, Magic Lantern was one of the early ways to "juice" more power out of early generations of Canon cameras, including RAW video recording.
Today, cameras like Blackmagic and editing platforms like DaVinci handle RAW seamlessly, but it wasn't like this even a few years ago.
I wish there were similar projects for other camera brands like Fujifilm. Given what Magic Lantern can do on old Canon cameras, we know there is a lot of potential in those old machines across other brands. It is also an eco-friendly approach that should be supported.
I just switched from Canon to Fujifilm due to enshittification. Canon started charging $5/mo to get clean video out of their cameras. We're plenty screwed if manufacturers decide that cameras are subscriptions and not glass.
---
I'm also 50-50 on the whetstone advice: I'm not willing to learn how to use one, so I landed in the middle on this: https://worksharptools.com/products/precision-adjust-knife-s...