I learnt this the hard way: if you're sending multiple emails with seemingly very important subjects and messages and getting no reply at all, the receiver most likely hasn't received them, rather than completely ghosting you.
Everyone should know this, and should at least try a different channel of communication before taking further action, especially when disclosing a vulnerability.
Small bugs? Maybe. But there's a lot of missing functionality and instability. I'd recommend InFuse to anyone hitting those problems. If it has been running fine for you, then there's no need to switch.
The problem is related to the source codec: depending on it, you'll have a different experience. That's why experiences vary, because there are vast differences in source formats.
A good client handles not just some sources well, but many if not all.
To be a bit picky, there's no such thing as an unprocessed photo. They start with a minimally processed photo and take it from there.
The reason I clicked is that when I saw the title, I was tempted to think they might be referring to analog photos (i.e. film). In that case I think there's a well-defined concept of "unprocessed", since the photo is a physical object.
For a digital photo, you need at least a rescaling to turn it into grayscale, as the author did. But even then, the values your monitor shows are already not linear. And I'm not sure, pedagogically, that it should start there, given that the author mentions the Bayer pattern later. Shouldn't "unprocessed" come with the color information? Because if you start from grayscale, the color information seems to be added by the processing itself (i.e. you're not gradually adding only processing to your "unprocessed" photo).
To be fair, representing an "unprocessed" Bayer pattern is much harder, as the color filters don't map nicely to RGB. If I were to do it, I might just map the sensor RGB directly to RGB (with the default sRGB color space) and add a footnote there.
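A minimal sketch of what I mean (Python/NumPy; the RGGB layout and the function name are illustrative assumptions, not what the author used):

    import numpy as np

    def naive_bayer_to_rgb(raw):
        """Visualize a Bayer mosaic 'as is': drop each sensor sample into the
        R, G or B channel of its filter and leave the other two at zero.
        No demosaicing, no white balance, no color-matrix correction; the
        sensor primaries are simply (and incorrectly) treated as sRGB."""
        h, w = raw.shape
        rgb = np.zeros((h, w, 3), dtype=raw.dtype)
        rgb[0::2, 0::2, 0] = raw[0::2, 0::2]  # red sites (RGGB tile)
        rgb[0::2, 1::2, 1] = raw[0::2, 1::2]  # green sites, even rows
        rgb[1::2, 0::2, 1] = raw[1::2, 0::2]  # green sites, odd rows
        rgb[1::2, 1::2, 2] = raw[1::2, 1::2]  # blue sites
        return rgb

That keeps the "unprocessed" color information visible, at the cost of a footnote admitting the colors are not faithful.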
I think there's a spectrum, and you talk as if there are only two sides.
For me personally, I built my "data centre" as cheaply as possible, but there are a few requirements for which the computers you're using wouldn't cut it: the storage server must use ZFS with ECC memory. I started this around a decade ago and only spent ~$300 at the time (reusing an old PSU and case, I think).
There are many requirements of a data centre that can be relaxed in a home lab setting, such as uptime and performance, but I would never trade data integrity for a tiny bit of savings. Sadly, this is a criterion that many people, including some building very sophisticated home clusters, don't treat as a priority.
Nix is for reproducibility. Nix and Docker are orthogonal: you can create reproducible Docker images via Nix, and you can run Nix inside Docker on systems that don't allow you to create the Nix store.
Some people take Moore's law in a strong sense: the doubling rate is a constant. That version is long dead.
But if we relax it to a slowly varying constant, then it is not dead. That constant has been revised (by consensus) a few times already.
Your mistake is to (1) take that constant literally (i.e. use the strong law) and (2) use the boundary points to find the "average" effect. The latter is a really flawed argument, because it cannot show the law hasn't died recently: you haven't considered its change over time.
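To illustrate the second point with made-up numbers (a toy series, not real transistor counts): a series that doubles every 2 "years" for a while and then every 6 still looks healthy if you only compare its two endpoints.

    import numpy as np

    # Toy series: doubles every 2 "years" until year 40, every 6 afterwards.
    years = np.arange(61)
    doubling_time = np.where(years < 40, 2.0, 6.0)
    log2_count = np.cumsum(1.0 / doubling_time)

    def implied_doubling_time(y0, y1):
        """Average doubling time implied by comparing only two boundary points."""
        return (y1 - y0) / (log2_count[y1] - log2_count[y0])

    print(implied_doubling_time(0, 60))   # ~2.6: the endpoints hide the slowdown
    print(implied_doubling_time(50, 60))  # 6.0: a recent window reveals it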
This is equivalent to inverse variance weighting. For independent random variables, it is the optimal method to combine multiple measurements. He just wrote the formula in a different way and connected it to other kinds of functions.
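For reference, the standard derivation (my notation, not his): for independent X_i with variances \sigma_i^2 and weights constrained to sum to one,

    \min_{w}\ \operatorname{Var}\Big(\sum_i w_i X_i\Big) = \sum_i w_i^2 \sigma_i^2
    \quad \text{subject to} \quad \sum_i w_i = 1 .

A Lagrange multiplier gives 2 w_i \sigma_i^2 = \lambda, i.e. w_i \propto \sigma_i^{-2}, so

    w_i = \frac{\sigma_i^{-2}}{\sum_j \sigma_j^{-2}},
    \qquad
    \operatorname{Var}\Big(\sum_i w_i X_i\Big) = \frac{1}{\sum_j \sigma_j^{-2}} .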
He also frames the goal differently: normally when we (as physicists) talk about combining random variables, we think of them as different measurements of the same thing. But he doesn't even assume that: he's saying that if you want a weighted sum of random variables, not necessarily expected to measure the same thing (e.g. share the same mean), this is still the optimal solution if all you care about is minimal variance. His example is stocks: if all you care about is your "index" being less volatile, inverse variance weighting is also optimal.
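A quick numerical sketch of that stock framing (made-up returns and hypothetical variable names; the assets are independent here, matching the assumption above):

    import numpy as np

    rng = np.random.default_rng(0)
    # Three independent "assets" with very different volatilities (toy data).
    returns = rng.normal(0.0, [0.01, 0.02, 0.05], size=(1000, 3))

    inv_var = 1.0 / returns.var(axis=0)
    w_ivw = inv_var / inv_var.sum()   # inverse variance weights
    w_eq = np.full(3, 1.0 / 3.0)      # equal weights, for comparison

    print((returns @ w_ivw).var())    # lower variance than...
    print((returns @ w_eq).var())     # ...the equal-weighted "index"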
As I'm not a finance person, this is new to me (the math is exactly the same; it's just conceptually different in what you think the X_i are).
I wish he had mentioned inverse variance weighting just to draw the connection, though. Many comments here would be unnecessary if he had.