Rucadi's comments | Hacker News

I also got the same feeling from that. In fact, I would go as far as to say that nixpkgs' and the nix commands' integration with git works quite well and is not an issue.

So when the article says "Package managers keep falling for this. And it keeps not working out," I feel that's untrue.

The biggest issue I have with this is really the "flakes" integration, where the whole recipe folder is copied into the store (which doesn't happen with non-flakes commands), but that's a tooling problem, not an intrinsic problem of using git.


The best part of statement expressions is that a return there returns from the enclosing function itself, not from the statement expression.

I use that with macros to return something akin to std::expected, while keeping the code on the happy path, like with exceptions.
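
A minimal sketch of that pattern (statement expressions are a GCC/Clang extension, this assumes C++23's std::expected, and the TRY macro and function names are just illustrative):

    #include <expected>
    #include <string>
    #include <utility>

    // On error, return from the *enclosing* function; on success, the
    // whole ({ ... }) statement expression yields the unwrapped value.
    #define TRY(expr)                                                   \
        ({                                                              \
            auto _result = (expr);                                      \
            if (!_result)                                               \
                return std::unexpected(std::move(_result).error());     \
            std::move(*_result);                                        \
        })

    std::expected<int, std::string> parse_port(const std::string& s) {
        if (s.empty()) return std::unexpected("empty port");
        return std::stoi(s);
    }

    std::expected<int, std::string> next_port(const std::string& s) {
        int port = TRY(parse_port(s));  // happy path stays linear
        return port + 1;
    }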


This is also somewhat common in C++ with immediately-invoked lambdas.
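
For instance (a minimal sketch; the condition is illustrative): the return inside the lambda exits only the lambda, which makes it handy for initializing a const variable with branching logic:

    #include <cstdlib>

    const char* build_mode = [] {
        // this return exits the lambda, not the enclosing scope
        if (std::getenv("DEBUG") != nullptr) return "debug";
        return "release";
    }();  // <- invoked immediately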


The same pattern can also be useful in Rust for early returning Result<_,_> errors (you cannot `let x = foo()?` inside of a normal block like that).

    let config: Result<i32, i32> = {
        Ok(
            "1234".parse::<i32>().map_err(|_| -1)?
        )
    };
would fail to compile, or worse: would return out of the entire method if the surrounding method had return type Result<_, i32>. On the other hand,

    let config: Result<i32, i32> = (||{
        Ok(
            "1234".parse::<i32>().map_err(|_| -1)?
        )
    })();
runs just fine.

Hopefully try blocks will allow using ? inside of expression blocks in the future, though.
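
On nightly, with the unstable try_blocks feature, that would look roughly like this (a sketch; the syntax may still change before stabilization):

    #![feature(try_blocks)]  // nightly only

    fn main() {
        let config: Result<i32, i32> = try {
            // `?` propagates to the try block, not out of main
            "1234".parse::<i32>().map_err(|_| -1)?
        };
        assert_eq!(config, Ok(1234));
    }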


A blog post about it from a prominent C++er: https://herbsutter.com/2013/04/05/complex-initialization-for...


Yeah, but languages that make you resort to this then don't let you simply return from the block.

And the workarounds often make the pattern a net loss in clarity.


It makes function declarations/instantiations much more grep-able.


auto has a good perk: it prevents uninitialized values (which are a source of bugs).

For example:

    auto a; // will always fail to compile, no matter what flags

whereas

    int a;  // is valid, leaving a uninitialized

It also prevents implicit type conversions: the type you get with auto is exactly the type of the expression on the right.

That's good.
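
A quick sketch of that second point:

    int  truncated = 3.9;  // implicit conversion: truncated == 3
    auto exact     = 3.9;  // exact is a double, the type on the right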


Uninitialized values are not the source of bugs. Leaving them uninitialized is a good way to find logic errors in code (e.g. using a sanitizer).


"bug" can refer to many categories of problems, including logic errors. I've certainly seen uninitialized variables be a source of bugs. Stackoverflow for example is full of discussions about debugging problems caused by uninitialized variables, and the word "bug" is very often used in those contexts.

What do you mean it is not a source of bugs?


> What do you mean it is not a source of bugs?

I think what they mean, and what I also think, is that the bug does not come from the existence of uninitialized variables; it comes from the USE of uninitialized variables. Making the variables initialized does not make the bug go away; at most it silences it. Making the program invalid instead (which is what UB fundamentally is) is far more helpful for making programs have fewer bugs. That the compiler still emits a program is a defect, although an unfixable one.

To my knowledge, C (and derivatives like C++) is the only common language where the question "Is this a program?" has false positives. It is certainly an interesting choice.


I mean that the bug is *not in* the uninitialized variable; the bug is in the program logic, e.g. one of the code paths doesn't initialize the variable.

So I see uninitialized variables as a good way to find such logic errors, and therefore the advice to always initialize variables is bad practice.

Of course, if you already have a good value to initialize the variable with, do it. But if you don't, it's better to leave it uninitialized.

Moreover, this will not cause safety issues in production builds, because you can use `-ftrivial-auto-var-init` to initialize automatic variables to e.g. zeroes (`-fhardened` will do this too).
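
A minimal sketch of the kind of logic error this catches (assuming Clang's MemorySanitizer, i.e. building with -fsanitize=memory; the function and variable names are illustrative):

    #include <cstdio>

    int lookup(bool cached) {
        int value;               // deliberately left uninitialized
        if (cached) value = 42;  // one code path forgets the other case
        return value;            // MSan reports use-of-uninitialized-value
    }                            // when cached == false

    int main() { std::printf("%d\n", lookup(false)); }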


This is indeed exactly correct. On its own this is probably the most important reason for most people to use it, as I think most of the millions of C++ developers in the world (and yes, there are apparently millions) are not messing with compiler flags to get the checks that should probably be there by default anyway. The auto keyword gives you that.


This will end up with people building their pages on top of the Godot engine to avoid HTML scraping, hahaha.


I guess that would kill accessibility as well.


You may jest, but a more practical approach would be to compile a traditional app to WASM, say using Rust + egui (which has a native WASM target).


If you really need a portable binary that uses shared libraries, I would recommend building it with Nix; you get all the dependencies, including the dynamic linker and glibc.


Nix allows you to do this with any language and any required dependency: https://wiki.nixos.org/wiki/Nix-shell_shebang
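
A minimal sketch of such a shebang (here a Python script; the package names are illustrative):

    #!/usr/bin/env nix-shell
    #!nix-shell -i python3 -p python3 python3Packages.requests

    # nix-shell fetches the interpreter and dependencies before running this
    import requests
    print(requests.get("https://example.com").status_code)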


Now you have to set up Nix first and deal with that nightmare of a learning curve. Just to auto-install some dependencies for a script.

Might as well rewrite all your scripts in Rust too while you're layering on unholy amounts of complexity.


It’s like Vim: you learn it once, and you keep using it forever once you’re used to it.

I’m so thankful to see a flake.nix file in every single cool project on code forges.


Yeah, that's a common theme of excuses for both Rust and Nix. It's wrong, though, because almost anyone who can use a computer at all can learn the basics of Vim.

Seeing that flake.nix badge of complexity lets me know a project will be a nightmare to set up and will break every other week. It's usually right next to the Cargo.toml badge with 400 dependencies underneath.


To be honest, I don't know what to say; you can use Nix in many ways, and you don't even need to know the language.

The easiest entry point is to just use it like a package manager: you install Nix (which is just a command...) and then the whole set of packages is available to you, searchable from here: https://search.nixos.org/packages

nix-shell just downloads programs and adds them temporarily to your PATH.
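
For example (package names illustrative):

    $ nix-shell -p jq ripgrep   # fetches jq and ripgrep, opens a shell
    [nix-shell]$ jq --version   # both are on PATH only inside this shell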

I don't feel this is harder than something like "sudo apt install -y xxxxx", but it's certainly more robust and portable, and it doesn't require sudo.

If at some point you want to learn the language in order to create configurations or package software, that may require checking a lot more documentation and examples, but for this use case I think it's pretty straightforward and no harder than any other package manager like aptitude, Homebrew or pacman.


Nix with flakes never randomly breaks; I still have projects from 3 or 4 years ago where I can run `nix build` and get them running. Yes, updating the `flake.lock` may introduce breakages, but that's expected if you're pinning `nixos-unstable` instead of a stable branch.
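
For reference, the pin lives in the flake input (a minimal sketch; the branch name is illustrative):

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
      outputs = { self, nixpkgs }: { };
    }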


I’m not sure what you mean by “a nightmare to set up”. You install Nix on your current OS with the determinate.systems installer, and you enter `nix run github:johndoe/project-containing-a-flake-dot-nix-file` to try out the project and have the full reproducible build taken care of by Nix.

Sure, installing packages the proper way requires a little more setup (Home Manager, most likely, plus understanding where the list of packages lives and which command to run to switch configurations), but it's about as trivial as other complex tasks most of us hackers are capable of (like using `jq` or Vim).


Whoa! This is a revelation. I already loved Nix and used nix-shell extensively, but this is the missing piece: fully reproducible Python scripts without compromise.


Or install direnv, put your dependencies into a shell.nix, set up the bash hook according to the manual, and create the .envrc with the content "use nix". Then type "direnv allow".

Then you can, for example, use other people's Python scripts without modifying their shebangs.
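
A minimal sketch of that setup (package names illustrative):

    # shell.nix
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      packages = [ pkgs.python3 pkgs.python3Packages.requests ];
    }

    # .envrc
    use nix

Entering the directory then loads the environment automatically.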


If the source is known, it is no worse than downloading a program and running it.


It is if the script is written badly, gets truncated while it's being downloaded, and fails to account for this possibility.

Look at Tailscale's installation script: they wrapped everything into a function which is called on the last line, so you either download and execute every line, or it does nothing.
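
A minimal sketch of that pattern (the function name is illustrative):

    #!/bin/sh
    main() {
        echo "installing..."
        # ...the entire installer body lives in here...
    }
    # Nothing runs until this last line. A truncated download leaves
    # main() unterminated, so the shell errors out instead of
    # executing half a script.
    main "$@"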


This "what if it gets truncated in the middle of the download, and the half-run script does something really bad" objection gets brought up every time "curl | bash" is mentioned, and it always feels like "what if a cosmic ray flips a bit in your memory which makes the kernel erase your hard drive". Like, yes, it could happen in the same way getting killed by a falling asteroid could happen, but I'm not losing sleep over it.


Serious question, why or how would a script get truncated when transferred over https?


Just living far from major datacenters is enough. I get truncated downloads pretty regularly, maybe a couple times a month or so. The network isn't really all that reliable when you consistently use it across the globe.

It usually happens on large files though, due to simple statistics, but given enough users it's not hard to imagine it happening with a small script...


That's easily fixed by adding Content-Length headers.


You pull the Ethernet cable out before it finishes, or your Wi-Fi router hiccups.


Wouldn’t the download terminate without emitting the script?


That's quite uncommon. Typically your distribution checks that the downloaded source/binary has the correct checksum, and an experienced maintainer has checked the (sandboxed) installation. Here, someone puts an arbitrary script online that runs with your user's permissions, and you hope that the web page is not hijacked and that some arbitrary dev knows how to write bash scripts.


Yes, just use bridge networking instead of NAT.

