Hacker News | noctune's comments

Some patterns must happen to repeat, so I would assume the offset to be larger, no?

You might be interested in https://connectrpc.com/. It's basically what you describe, though it's not clear to me how well supported it is.


Yeah, that one looked good. I don't remember why I didn't use it at the time; maybe I just felt it was easy enough to DIY that I didn't want another dependency (given that I already knew Express and protobuf in isolation). The thing is, Google themselves had to lead the way on this if they wanted protobuf to be as mainstream as JSON.


It doesn't help that URLs are badly designed. It's a mix of left- and rightmost significant notation, so the most significant part is in the middle of the URL and hard to spot for someone non-technical.

Really we should be going to com.ycombinator.news/item?id=45789474 instead.
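The reordering is just a reversal of the dot-separated labels, so the most significant part (the TLD) comes first, Usenet/Java-package style. A tiny illustration (the helper name is made up):

```python
def to_reverse_dns(host: str) -> str:
    """Reverse the dot-separated labels of a hostname so the most
    significant label (the TLD) comes first."""
    return ".".join(reversed(host.split(".")))

print(to_reverse_dns("news.ycombinator.com"))  # → com.ycombinator.news
```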


That's how it was in the good ol' Usenet days! E.g. alt.tv.simpsons. Not sure how URLs ended up being the other way round.


I disagree. We write left to right, so when a URL is essentially two parts ("external" and "internal", i.e. place on the network and location on the server), it makes sense that they are written left to right and separated in the middle.

Plus it would make autocomplete much harder to use: I can type "news.y" and this site is already suggested, or "red" and get Reddit. If you reversed the order, you'd need to type _at least_ "com.yc" to maybe get HN, unless you create your own shortcuts.

Conveniently enough, my browser displays the URL with the protocol omitted (assuming HTTPS), shows only the host and port in black, and de-emphasizes the path+query+fragment.


But the problem is that the domain name itself is not written "left to right".

As far as autocomplete goes, what you're describing is a behavior of one particular implementation. If URLs looked differently, autocomplete would behave differently as well.

I'm also reminded of https://xkcd.com/1172/


Damn, now I want something we'll never have.


I built something similar a few years ago for `sort | uniq -d` using sketches. The downside is you need two passes, but it's still faster overall than sorting: https://github.com/mpdn/sketch-duplicates
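The linked repo has the real implementation; the two-pass idea can be sketched roughly like this (a Bloom-filter-style first pass; all names here are made up, and the second pass does exact counting over candidates only, so sketch false positives get filtered out):

```python
import hashlib

def bloom_positions(item: bytes, m: int, k: int = 3):
    # Derive k bit positions from a hash of the item.
    h = hashlib.sha256(item).digest()
    return [int.from_bytes(h[4 * i:4 * i + 4], "big") % m for i in range(k)]

def probable_duplicates(lines, m=1 << 16):
    """Two passes over `lines`: pass 1 marks items in a Bloom filter and
    collects items whose bits were all already set as candidates; pass 2
    counts candidates exactly, removing any false positives."""
    bits = bytearray(m // 8)
    candidates = set()
    for line in lines:                       # pass 1: sketch
        pos = bloom_positions(line.encode(), m)
        if all(bits[p // 8] >> (p % 8) & 1 for p in pos):
            candidates.add(line)
        for p in pos:
            bits[p // 8] |= 1 << (p % 8)
    counts = {}
    for line in lines:                       # pass 2: exact count
        if line in candidates:
            counts[line] = counts.get(line, 0) + 1
    return sorted(l for l, c in counts.items() if c > 1)

print(probable_duplicates(["a", "b", "a", "c", "b", "a"]))  # → ['a', 'b']
```

The sketch keeps pass-1 memory fixed regardless of input size; only the (hopefully small) candidate set needs exact counting.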


I overall agree with the article; GitOps is great for managing long-lived, shared, stable systems that you need a good audit trail for (like production), but test environments aren't one of those. Ideally, a test environment should be something non-shared that you can spin up and change without asking for permission.


I don't understand why features like S3's "requester pays" aren't more widely used (and available outside AWS). Let the inefficient consumer bear their own cost.

A major downside is that this would exclude people without access to payment networks, but maybe you could still offer a rate-limited free option.
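If one did want a rate-limited free tier, the classic mechanism is a token bucket. A deterministic toy sketch (all names are made up; timestamps are passed in explicitly rather than read from a clock):

```python
class TokenBucket:
    """Toy token bucket: refills at `rate` tokens per second up to `burst`.
    Timestamps are passed in explicitly so the behavior is deterministic."""
    def __init__(self, rate, burst, now=0.0):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = now

    def allow(self, now, cost=1.0):
        # Refill based on elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

b = TokenBucket(rate=1.0, burst=2.0)
print([b.allow(0.0), b.allow(0.0), b.allow(0.0), b.allow(1.5)])
# → [True, True, False, True]
```

The burst allows short spikes; the steady rate caps sustained free usage.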


You can condition IAM on Nitro attestation, so that's doable (if a lot more work than usual).


The XZ utils supply chain attack also used this to sneakily disable Linux Landlock: https://news.ycombinator.com/item?id=39874404


You can use a radix heap rather than a binary heap. I have an implementation here, with benchmarks using pathfinding: https://github.com/mpdn/radix-heap

It has the nice property that the amortized cost of pushing/popping an element is independent of the number of other elements in the heap.
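For readers unfamiliar with the structure, here is a minimal monotone radix heap sketched in Python (not the linked Rust implementation). It requires keys to be pushed in non-decreasing order relative to the last pop, which Dijkstra-style pathfinding satisfies:

```python
class RadixHeap:
    """Monotone min-heap for unsigned integer keys: every key pushed must
    be >= the last key popped. Amortized cost per operation depends on the
    key width, not on the number of stored elements."""
    def __init__(self, bits=32):
        self.buckets = [[] for _ in range(bits + 1)]
        self.last = 0   # last popped key (monotonicity floor)
        self.size = 0

    def _bucket(self, key):
        # Bucket index = position of the highest bit where key differs
        # from `last` (0 when key == last).
        return (key ^ self.last).bit_length()

    def push(self, key, value=None):
        assert key >= self.last, "keys must be pushed in monotone order"
        self.buckets[self._bucket(key)].append((key, value))
        self.size += 1

    def pop(self):
        if self.size == 0:
            raise IndexError("pop from empty heap")
        if not self.buckets[0]:
            # Find the lowest non-empty bucket, advance `last` to its
            # minimum key, and redistribute its items; the minimum lands
            # in bucket 0, the rest in strictly lower buckets.
            i = next(i for i, b in enumerate(self.buckets) if b)
            items, self.buckets[i] = self.buckets[i], []
            self.last = min(k for k, _ in items)
            for k, v in items:
                self.buckets[self._bucket(k)].append((k, v))
        self.size -= 1
        return self.buckets[0].pop()

h = RadixHeap()
for k in [5, 3, 7]:
    h.push(k)
print([h.pop()[0] for _ in range(3)])  # → [3, 5, 7]
```

Each element only ever moves to strictly lower buckets between its push and its pop, which is where the amortized bound comes from.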


S3 recently got conditional writes, so you can do locking entirely in S3. I don't think they are using that here, though; it must be too recent an addition.


I believe S3 can only do create-if-not-exist, which won't help for overwriting a pre-existing branch ref only-if-not-concurrently-updated.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...
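The difference matters for updating a branch ref. A toy in-memory model (not the real S3 API; integer version tokens stand in for ETags) shows why create-if-not-exist alone isn't enough, while compare-and-swap is:

```python
class ConditionalStore:
    """Toy model of conditional writes: each successful put returns a new
    version token; an If-Match-style put succeeds only when the caller
    holds the current token."""
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def put_if_absent(self, key, value):
        # Create-if-not-exist (If-None-Match: *): fails once the key exists,
        # so it can never overwrite a pre-existing ref.
        if key in self.data:
            return None
        self.data[key] = (1, value)
        return 1

    def put_if_match(self, key, expected_version, value):
        # Compare-and-swap (If-Match: <etag>): fails unless the stored
        # version equals the one the caller last read, so a concurrent
        # update is detected instead of silently clobbered.
        if key not in self.data or self.data[key][0] != expected_version:
            return None
        self.data[key] = (expected_version + 1, value)
        return expected_version + 1

store = ConditionalStore()
v = store.put_if_absent("refs/heads/main", "aaa")      # creates, v == 1
print(store.put_if_absent("refs/heads/main", "bbb"))   # → None (already exists)
print(store.put_if_match("refs/heads/main", v, "bbb")) # → 2 (CAS succeeds)
print(store.put_if_match("refs/heads/main", v, "ccc")) # → None (stale version)
```

With only `put_if_absent`, two writers racing to update an existing ref have no way to detect each other; the `put_if_match` loser gets `None` and can re-read and retry.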

