
fast multiplication, for example

I guess they are using it in pulsed mode; continuous mode would draw a bit too much power

how about jshell? it comes with every java distribution

Shameless plug for jshelled (jshell for gradle projects) https://github.com/gravitation1/jshelled

yes I thought this was going to be about jshell too.

jshell is amazing, I don't think enough people know about it!
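For anyone who hasn't tried it: it ships with the JDK (since Java 9), and a minimal session looks like this:

    $ jshell
    jshell> int square(int x) { return x * x; }
    |  created method square(int)
    jshell> square(7)
    $1 ==> 49
    jshell> /exit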


If you look at the numbers, carbon capture will be necessary. Unfortunately, CCS is neither very efficient nor very useful at the moment, but CO2 still has to be removed from the atmosphere; we can only hope that we will be able to do so in the very near future.

"Surely You're Joking, Mr. Feynman!" begins in the '30s and takes us to the '60s IIRC, so one has to take into account what the mainstream was in those times. But he is against hazing, explaining how traumatized European Jews, when hazed, relived the fears they had brought from Europe. But, of course, some things cannot be understood nowadays with the mindset we have now.

I don't mean the misogyny, but the general vibe of how full of himself/sociopathic he was

there are books from the 19th century written by people with much better values


I did. It was a Spring Boot fat jar with an NLP component; I had to deploy it to the biggest instance AWS could offer, and the costs were enormous.

Java bytecode is always dynamically linked.

Still, if I remember correctly, I had to reserve 6 GB of memory so that the JVM could actually start.
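That reservation was just the standard heap flags, for reference (the jar name here is a placeholder):

    # give the application a 6 GB heap up front
    $ java -Xms6g -Xmx6g -jar app.jar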

I've known this joke since the '90s, and one has to assume it goes back even further.

Why do people rebase so often? Shouldn't it be excluded from the usual workflows, since you lose commit history as well?

To get a commit history that makes sense. It’s not supposed to document in what order you did the work, but why and how a change was made. When I’m knee-deep in some rewrite and realize I should have changed something else first, I can just go do that change, then come back and rebase.

And in the feature branches/merge requests, I don’t merge, only rebase. Rebasing should be the default workflow. Merging adds so many problems for no good reason.

There are use cases for merging, but not as the normal workflow.
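For what it's worth, the day-to-day commands are minimal (branch names are placeholders):

    # update a feature branch without a merge commit
    $ git fetch origin
    $ git rebase origin/main
    # fix any conflict, stage the files, then
    $ git rebase --continue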


That is just not true. Merging is so much less work and the branch history clearly indicates when merging has happened.

With rebasing, the branch could have been rebased a million times, and you would have no idea when and where something got broken by hasty conflict resolution.

When conflicts happen, rebasing is equivalent to merging, just at the commit level instead of at the branch level, so in the worst case developers are met with conflict after conflict, which ends up being a confusing mental burden on less experienced devs, and certainly a “trust the process” kind of workflow for experienced ones as well.


The master branch never gets merged, so it is linear. Finding a bug is very simple with bisect. All commits are atomic, so the failing commit clearly shows the bug.

If you want to keep track of which commits belong to a certain PR, you can still have an empty merge commit at the end of the rebase. GitLab will add that for you automatically.
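Both halves of that are one-liners, for the record (the tag and branch names are placeholders):

    # bisect over the linear history
    $ git bisect start
    $ git bisect bad HEAD
    $ git bisect good v1.2.0    # last known good release
    # test, then mark each step with "git bisect good" or "git bisect bad"
    $ git bisect reset

    # force a marker merge commit even when a fast-forward is possible
    $ git merge --no-ff feature-branch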

The “hasty conflict resolution” makes a broken merge waaaay harder to fix than a broken rebase.

And rebasing makes you take care of each conflict one commit at a time, which makes it orders of magnitude easier to get them right, compared to trying to resolve them all in a single merge commit.


Linear history is nice, but it is lacking the conflict resolutions. They are never committed, and neither are the “fix rebase” instances.

Having a “fix broken merge” commit makes it explicit that there was an issue that was fixed.

Rebase sometimes seems like an attempt at saving face.


That’s the whole point. You do it properly, so there IS no conflict.

No. There is a conflict during a rebase, you resolve it, and then it’s like there never was a conflict.

Even if you do it properly, the workflow is erasing history of that conflict existing and needing to be resolved. It leaves no trace of what has been worked on, when, and by whom.


Can you give an example? I think we are talking past each other. This is not my experience at all.

Create new branch A off main.

Do some work on a file, commit 1 to branch A.

Meanwhile, in another branch B created off main, someone else commits changes to the same part of the same file.

That other branch B gets merged to main.

Now, rebase branch A onto main.

The rebase stops at the commit 1 due to a conflict between main and branch A.

Fix the conflict and commit. This erases commit 1 and creates a new commit 1' in which the conflict never existed. History has been rewritten.

The rebase completes successfully; branch A now contains different commits than before, so it will need to be force-pushed to the remote if it already exists there. The protocol has resistance against changing history.

Merge branch A to main.

No commit in main now contains any information that there was a conflict that was fixed.

Had a pull request workflow been used, the “merge main to A” merge commit message would detail which files were conflicting. No such commit is made when using a rebase workflow, chasing those clean fast-forward merges.
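In commands, the scenario above is roughly (branch names as in the example; the file name is a placeholder):

    $ git switch -c A main    # create branch A
    # edit the file, then
    $ git commit -am "commit 1"
    # branch B changes the same lines and is merged to main meanwhile
    $ git rebase main         # stops at commit 1 with a conflict
    # fix the conflict, then
    $ git add the-file && git rebase --continue    # commit 1 becomes 1'
    $ git push --force-with-lease    # remote history has changed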


Do you know what criss-cross merges are and why they're bad?

I’m sure you’re here to educate me, but this is not about criss-cross merges between two different work branches; this is about whether it’s better to rebase a work branch onto the main branch, or to pull the changes from the main branch to the work branch.

I have an early draft of a blog post about them :) As a source control expert who built both these systems and tooling on top of them for many years, I think they're the biggest and most fundamental reason rebases/linear history are better than merges.

> whether it’s better to rebase a work branch onto the main branch, or to pull the changes from the main branch to the work branch.

The problem with this is that the latter has an infinitely higher chance of resulting in criss-cross merges than the former (which is 0).


It's definitely not 0, because rebase-heavy workflows involve the rerere cache, which is a minefield of per-repo hidden merge changes. You get the results of "criss-cross merges" as "ghosts" you can't easily debug, because there aren't good UI tools for the rerere cache. About the best you can do is declare rerere cache bankruptcy and make sure every repo clears its rerere cache.

I know that worst case isn't all that common, or everyone would be scared of rebases, but I've seen it enough that I have a healthy disrespect for rebase-heavy workflows and try to avoid them when given the option/in charge of choosing the tools/workflows/processes.
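To spell out the "bankruptcy" part: the recorded resolutions live in a plain directory, so as far as I know a full wipe is just:

    # throw away all recorded conflict resolutions
    $ rm -rf .git/rr-cache
    # and optionally stop recording new ones
    $ git config rerere.enabled false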


To be honest I've used rebase-heavy workflows for 15 years and never used rerere, so I can't comment on that (been a happy Jujutsu user for a few years — I've always wondered what the constituency for rerere is, and I'm curious if you could tell me!) I definitely agree in general that whenever you have a cache, you have to think about cache invalidation.

rerere is used automatically by git to cache certain merge conflict fixes encountered during a rebase, so that you don't have to reapply them more than once when rebasing the same branch later. In general, when it works, which is most of the time, it's part of what keeps rebases feeling easy and lightweight, despite the final commit output sometimes capturing a fraction of the data of a real merge commit. The rerere cache is in some respects a hidden collection of the rest of a merge commit.
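Concretely, with rerere.enabled on, you can watch it work from the messages git prints:

    $ git config rerere.enabled true
    # the first time you resolve a conflict:
    #   Recorded resolution for 'path/to/file'.
    # the next time the same conflict shows up in a rebase:
    #   Resolved 'path/to/file' using previous resolution.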

In git, the merge (and merge commit) is the primitive, and rebase is a higher-level operation on top of it, with a complex but not generally well understood cache that has only a few CLI commands and just about no UI support anywhere.

Like I said, because the rerere cache is so out-of-sight/out-of-mind, problems with it become weird and hard to debug. The situations I've seen have been truly rebase-heavy workflows with multiple "git flow" long-running branches and even sometimes cherry-picking between them. (Generally the same sorts of things that create "criss-cross merge" scenarios.) Rebased commits start to bring in regressions from other branches. Rebased commits start to break builds randomly. If what is getting rebased is a long-running branch, you probably don't have eyes on every commit, so finding where these hidden merge regressions happen becomes a full branch bisect; you can't just focus on merge commits because you don't have them anymore, and every commit is a candidate for a bad merge in a rebased branch.

Personally, I'd rather have real merge commits where you can trace both parents and the code that came from neither parent (conflict fixes), and not have to worry about ghosts of bad merges showing up in any random commit. Even the worst "criss-cross merge" commits are obvious in a commit log, and the ones I've seen have had enough data to fix surgically, often nearly as soon as they happen. rerere cache problems are things that can go unnoticed for weeks, to everyone's confusion and with potentially a lot of hidden harm. You can't easily see both parents of the merges involved. You might even have multiple repos with competing rerere caches alternating damage.

But also, yes, rerere cache problems are generally so infrequent that when they do happen it might take weeks of research just to figure out what the rerere cache is for, that it might be the cause of some of the "merge ghosts" haunting your codebase, and how to clean it.

Obviously, by the point where you are rebasing git-flow-style long-running branches and using frequent cherry-picks, you're in a rebase-heavy workflow that is painful for other reasons, and maybe that's an even heavier step beyond "rebase heavy" to some. But because the rerere cache is involved to some degree in every rebase, once you stop trusting the rerere cache it can be hard to trust any rebase-heavy workflow again. Like I said, personally I like the integration history/logs/investigatable diffs that real merge commits provide, and prefer tools like `--first-parent` when I need "linear history" views/bisects.
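For anyone unfamiliar, those `--first-parent` views look like this:

    # follow only the main line through merges, hiding branch noise
    $ git log --first-parent main
    # bisect at merge granularity instead of every individual commit
    $ git bisect start --first-parent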


You have to turn rerere on, though, right? I've never done that. I've also never worked with long-running branches — tend to strongly prefer integrating into main and using feature flags if necessary. Jujutsu doesn't have anything like rerere as far as I know.

Hmm, yeah, looks like it is off by default. Probably some git flow automation tool or other sort of bad corporate/consultant-disseminated default config at a past job left the impression that it was on by default. It's the solution to a lot of papercuts when working with long-running branches, as well as the source of the new problems stated above; problems that are visible with merge commits but hidden in rebases.

Your real commit history is irrelevant. I don't care too much about how you came to a particular state.

The overall project history though, the clarity of changes made, and that bisecting reliably works are important to me.

Or another way: the important unit is whatever your unit of code review is. If you're not reviewing and checking individual commits, they're just noise in the history; the commit messages are not clear, and I cannot reliably bisect on them (since nobody is checking that things build).


I write really poopy commit messages. Think "WIP" type nonsense. I branch off of the trunk; even my branch name is poopy, like:

    feature/{first initial} {last initial} DONOTMERGE {yyyy-MM-dd-hh-mm-ss}

Yes, the branch name literally says do not merge.

I commit anything and everything. Build fails? I still commit. If there is a stopping point and I feel like I might want to come back to this point, I commit.

I am violently against any pre-commit hook that runs on all branches. What I do on my machine on my personal branch is none of your business.

I create new branches early and often. I take upstream changes as they land on the trunk.

Anyway, this long-winded tale was to explain why I rebase. My commits aren't worth anything more than stopping points.

At the end, I create a nice branch name and there is usually only one commit before code review.


Isn't your tale more about squashing than rebasing?

Any subsequent commits and the branch are inherently rebased on the squashed commit.

Rebasing is kind of a shorthand for cherry-picking, fixing up, rewording, squashing, dropping, etc., because these things don't make sense in isolation.


I guess my point is that I disagree that rebasing should be shorthand for all these things that aren't rebasing.

Well, rebasing is exactly equivalent to moving the branch and then cherry-picking, and the others are among the commands available in rebase --interactive.

Personally, I squash using git rebase -i.
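i.e. something like:

    # interactively rework the last N commits (pick your own N)
    $ git rebase -i HEAD~5
    # in the editor, change "pick" to "squash" (or "fixup") on the
    # commits you want folded into the one above them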

I don't want to see any irrelevant history several years later, so I enforce linear history on the main branch in all projects that I work on. So far, nobody complained, and I've never seen a legitimate reason to deviate from this principle if you follow a trunk based release model.

why would you lose commit history? You are just picking up a set of commits and reapplying them. Of course you can use rebase for more things, but rebase does not equal losing commit history.

Rebase always rewrites history, losing the original commits and creating new ones. They might have the same changes and the same commit messages, but they are different commits.

I think that only the most absolutely puritan git workflows wouldn’t allow a local rebase.

The sum of the rewritten changes still amounts to the same after a rebase. When would you need access to the pre-rebase history, and to what end?

Well, sometimes you do if you made a mistake, but that's already handled by the reflog.
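Getting back to the pre-rebase state looks like this (the reflog index is a placeholder; check your own reflog):

    $ git reflog                  # find the entry just before the rebase
    $ git reset --hard HEAD@{2}   # whatever index the reflog shows
    # or, since rebase stores the old tip in ORIG_HEAD:
    $ git reset --hard ORIG_HEAD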

Because Gerrit.

But even if I wasn't using Gerrit, sometimes it's the easiest way to fix broken branches or restructure your work in a clearer way.


Really; I keep reading about all the problems people have “every time I rebase”, and I wonder what tomfoolery they're really up to.

Unlike some other common operations that can be easily cargo-culted, rebasing is somewhat hard to do correctly when you don't understand git, so people who don't understand git get antagonistic towards it.

Rebasing is basically working at the meta layer, where you are editing patches instead of the code that is being versioned. And because of that, it requires a good understanding of the VCS.

Too often, merging is only understood as bringing the changes from there to here. It may be useful, especially if you have release-candidate branches and hotfixes and you want to keep a trace of that process. But I much prefer rebasing and/or squashing PRs onto the main branch.


If it is something like a repo for configuration management, I can understand that, because it's often a lot of very small changes, so every second commit would be a merge, and it's just easier to read that way.

... for code, honestly no idea


If only there was a way to ignore merges from git log, or just show the merges…

(Hint: --no-merges, --merges)
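That is:

    $ git log --oneline --no-merges   # hide the merge commits
    $ git log --oneline --merges      # show only the merges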


Hardened images are cool, definitely, but I'm not sure what that actually means. Just systems with the latest patches, or stricter config rules as well? For example: would any of these images have mitigated or even prevented Shai-Hulud [12]?


Docker Hardened Images integrate Socket Firewall, which provides protection from threats like Shai-Hulud during build steps. You can read our partnership announcement over here: https://socket.dev/blog/socket-firewall-now-available-in-doc...


Docker Hardened Images are built from scratch with the minimal packages to run the image. The hardened images didn't contain any compromised packages for Shai-Hulud.

https://www.docker.com/blog/security-that-moves-fast-dockers...

Note: I work at Docker


Yeah, but if you had installed your software with npm, would the postinstall script have been executed?
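As far as I know it would have, unless the build step disables lifecycle scripts explicitly:

    # skip preinstall/postinstall lifecycle scripts at install time
    $ npm ci --ignore-scripts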


Hardened base images don't restrict what you add on top of them. That's where scanners like Docker Scout, Trivy, Grype, and more come in to review the complete image that you have built.


Of course? They are only concerned with the base image. What you do with it is your responsibility.

This would be like expecting AWS to protect your EC2 instance from a postinstall script


The difference is that they’re charging extra for it, so people want to see benefits they could take to their management to justify the extra cost. The NPM stuff has a lot of people’s attention right now so it’s natural to ask whether something would have blocked what your CISO is probably asking about since you have an unlimited number of possible security purchase options. One of the Docker employees mentioned one relevant feature: https://socket.dev/blog/socket-firewall-now-available-in-doc...

Update the analogy to “like EC2 but we handle the base OS patching and container runtime” and you have Fargate.


So what is the takeaway message? Fire only the senior devs because they cost too much and can't use AI?

