And that kind of NAT effectively doesn't exist in practice, so that's quite beside the point. Such a NAT doesn't scale to more than 24 devices behind it.
See my reply to your sibling commenter. My comment was not about NAT in general, i.e. I was not denying the very real existence of stateless NAT. Rather, I was disputing the usefulness of the NAPT solution proposed above as a solution to public IPv4 address exhaustion.
> proposed above as a solution to public IPv4 address exhaustion.
It was not proposed as a solution (although it would work). I'm pointing out that in networking, many names are conflated or used loosely, against their specific definitions: NAT/Firewall; Router/Access Point/Gateway; etc.
No, it very much does. If you want to join two network segments such that on one side all devices are on 10.1.X.X and on the other all devices are on 10.2.X.X, you'd use a mapping between 10.1.a.b and 10.2.a.b.
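As a minimal sketch of the address arithmetic involved (Python, purely illustrative; a real router rewrites packets in the forwarding path):

    import ipaddress

    # Stateless 1:1 NAT: rewrite the /16 prefix, keep the host bits (the a.b part).
    def map_prefix(addr: str, from_net: str, to_net: str) -> str:
        ip = ipaddress.ip_address(addr)
        src = ipaddress.ip_network(from_net)
        dst = ipaddress.ip_network(to_net)
        host_bits = int(ip) - int(src.network_address)  # the a.b portion
        return str(ipaddress.ip_address(int(dst.network_address) + host_bits))

    print(map_prefix("10.1.42.7", "10.1.0.0/16", "10.2.0.0/16"))  # -> 10.2.42.7

No ports, no connection state: the mapping is a pure function of the address.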
The general context here is about NATting to the public internet at large, not between particular segments. And the parent of my comment was talking specifically about NAPT, which is different from the non-port-based NAT that you're talking about.
Yeah, as a solo dev quite new to frontend, that made me nope out of React almost immediately. Having to choose a bunch of critically important third-party dependencies right out of the gate? With how much of a mess frontend deps seem to be in general? No thanks.
I settled on Svelte with SvelteKit. Other than the stumbling block that was the Svelte 4 -> 5 transition, it's been smooth sailing. Like I said, I'm new here in the frontend world and don't have much to judge by. But it's been such a relief to have most things simply included out of the box.
Even when it's the same router package, these things break backward compatibility so often that different versions of the same package will behave differently.
That router thing seems crazy. I'm all for having options available. But not having, at a minimum, some blessed implementations for basic stuff like routers seems nuts. There is so much ecosystem power in having high-quality, blessed implementations of things. I'm coming from working primarily in Go, where you can use the stdlib for >80% of everything you do (ymmv), so I feel this difference very keenly.
> There is so much ecosystem power in having high-quality, blessed implementations of things.
Indeed. I work mainly in Angular because while it's widely regarded as terrible and slow to adapt, it's predictable in this regard.
Also now with typed forms, signals and standalone components it's not half bad. I prefer Svelte, but when I need Boring Technology™, I have Angular.
90%+ of all web apps are just lists of stuff with some search/filtering anyway, where you can look up the details of a list entry and of course CRUD it via a form. No reason to overthink it.
I know you said you work mainly in Angular, but for others reading this: I don't think this gives modern Angular the credit it deserves. Maybe that was the case in the late 20-teens, but the Angular team has been killing it lately, IMO. There is a negative perception, thanks to the echo chamber that is social media, but meanwhile Angular "just works" for enterprises and startups alike who want to scale.
I think people who are burned out on decision fatigue with things like React should give Angular another try; they might be pleasantly surprised by how capable it is out of the box, and by how much less painful it now is to press against the edges.
Strong disagree. Angular is cursed to the bone. It got a bit better recently, but it's still just making almost everything totally overcomplicated and bloated.
I'd say what you call bloated is in many cases basic functionality that I don't have to go looking for some third party package to fill. There is something to be said for having a straightforward and built-in way to do things, which leads to consistency between Angular projects and makes them easier to understand and onboard to.
IMO, it is only as complicated or as simple as you want to make it these days, and claiming otherwise is likely due to focusing on legacy aspects rather than the current state of the framework.
FWIW, I'm not arguing that it's the "best" or that everyone should use it. Or that it doesn't still have flaws. Just that it is still firmly in the top set of 3-5 frameworks that are viable for making complex web apps and it shouldn't be dismissed out of hand.
Not only did it provide that background pressure, but desktop software is a complex domain. So it often pushes the bounds of Linux software overall. Systemd is the example I have in mind here, but I’m sure there are others too that I’m not thinking of.
The model only sees a stream of tokens, right? So how do you signal a change in authority (i.e. mark the transition between system and user prompt)? Because a stream of tokens inherently has no out-of-band signaling mechanism, you have to encode changes of authority in-band. And since the user can enter whatever they like in that band...
But maybe someone with a deeper understanding can describe how I'm wrong.
When LLMs process tokens, each token is first converted to an embedding vector. (This token-to-vector mapping is learned during training.)
Since a token itself carries no information about whether it has "authority" or not, I'm proposing to inject this information in a reserved number in that embedding vector. This needs to be done both during post-training and inference. Think of it as adding color or flavor to a token, so that it is always very clear to the LLM what comes from the system prompt, what comes from the user, and what is random data.
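Roughly what I have in mind, as a toy sketch (the reserved dimension index, the per-source values, and the vocabulary/model sizes are all made up):

    import torch

    VOCAB, D_MODEL = 50_000, 768
    AUTH_DIM = 0  # hypothetical: reserve one embedding dimension as the authority channel

    # Made-up authority levels; the model would need to be post-trained with
    # this channel present so it learns what the values mean.
    AUTHORITY = {"system": 1.0, "user": 0.0, "tool_output": -1.0}

    embed = torch.nn.Embedding(VOCAB, D_MODEL)

    def embed_with_authority(token_ids: torch.Tensor, source: str) -> torch.Tensor:
        vecs = embed(token_ids)
        # Overwrite the reserved channel: its value is set by the serving
        # stack per source, so no in-band token text can spoof it.
        vecs[..., AUTH_DIM] = AUTHORITY[source]
        return vecs

    system_vecs = embed_with_authority(torch.tensor([101, 2023]), "system")
    user_vecs = embed_with_authority(torch.tensor([1996, 3899]), "user")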
This is really insightful, thanks. I hadn't understood that there was room in the vector space that you could reserve for such purposes.
The response from tempaccsoz5 seems apt then, since this injection is performed/learned during post-training; in order to be watertight, it needs to overfit.
You'd need to run one model per authority ring with some kind of harness. That rapidly becomes incredibly expensive from a hardware standpoint (particularly since realistically these guys would make the harness itself an agent on a model).
I assume "harness" here just means the glue that feeds one model's output into that of another?
Definitely sounds expensive. Would it even be effective, though? The more-privileged rings have to guard against [output from unprivileged rings] rather than [input to unprivileged rings]. Since the former is a function of the latter (in deeply unpredictable ways), it's hard for me to see how this fundamentally plugs the hole.
I'm very open to correction though, because this is not my area.
My instinct was that you would have an outer non-agentic ring that would simply identify passages in the token stream that would initiate tool use, and pass those back to the harness logic and/or user. Basically a dry run. But you might have to run it an arbitrary number of times, as tools might be used to modify/append the context.
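Something like this loop, maybe (everything here is hypothetical: the model handle, the tool-call format, the approval logic):

    # Hypothetical harness: an outer, non-agentic pass flags would-be tool
    # calls before anything executes (the "dry run"), then loops, since an
    # executed tool can modify/append the context.

    def outer_ring_scan(text: str) -> list[str]:
        # Stand-in for a separate model/parser that spots tool-use spans.
        return [line for line in text.splitlines() if line.startswith("TOOL_CALL:")]

    def approve_and_execute(calls: list[str]) -> str:
        # Stand-in for harness logic and/or asking the user.
        return "\n".join(f"DENIED: {c}" for c in calls)

    def harness(inner_model, prompt: str, max_rounds: int = 5) -> str:
        context = prompt
        output = ""
        for _ in range(max_rounds):
            output = inner_model(context)       # unprivileged ring produces text
            pending = outer_ring_scan(output)   # privileged ring inspects output
            if not pending:
                break
            context = output + "\n" + approve_and_execute(pending)
        return output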
The ROM bootloader loads a second stage bootloader (e.g. [1]), verifying that it is signed with keys fused into the MCU. The second stage bootloader in turn verifies application images shipped by the vendor, using a different set of keys.
When the vendor discontinues support for the device, they make available to their customers an optional update to the second stage bootloader that allows any application image to run, not just images signed by the vendor. This updated second stage loader is signed with the keypair whose public half is fused into the MCU, and so it will run as per normal. They ideally make it so this update can only be installed with some sort of local interaction with the device, not automatically over the air.
Devices in the field running ordinary OEM firmware continue to be protected from malicious OTA updates. Customers who wish to unlock their devices also have the means to do so.
This is technically very straightforward to implement, but it needs to be considered when the device is designed. Regulations would be required to make sure that happens.
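A rough sketch of the two-stage verification chain (key handling and image formats are heavily simplified; a real ROM checks a fused hash of the public key, in mask ROM):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def verify(pubkey: Ed25519PublicKey, image: bytes, sig: bytes) -> bool:
        try:
            pubkey.verify(sig, image)
            return True
        except InvalidSignature:
            return False

    def rom_boot(fused_key, stage2, stage2_sig, vendor_key, app, app_sig) -> str:
        # Stage 1: ROM checks the second stage loader against the fused key.
        if not verify(fused_key, stage2, stage2_sig):
            return "halt"
        # Stage 2: the vendor's second stage checks the application image.
        # An end-of-life "unlocked" second stage would simply skip this check.
        if not verify(vendor_key, app, app_sig):
            return "halt"
        return "boot app"

    fused_priv, vendor_priv = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    stage2, app = b"second-stage loader", b"vendor firmware v1.2"
    print(rom_boot(fused_priv.public_key(), stage2, fused_priv.sign(stage2),
                   vendor_priv.public_key(), app, vendor_priv.sign(app)))  # boot app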
That does sound better, but haven't you just made the unlocked second stage bootloader functionally equivalent to the secure boot keys?
Instead of [get released SB keys] -> [boot arbitrary payloads]
It becomes [get unlocked second stage bootloader] -> [boot arbitrary payloads]
Although, I guess that the details matter in terms of the process used to supply OTAs and second stage bootloaders. If changing to the unlocked bootloader requires physical access (or some such thing), then I could see that working.
Secure boot is desirable for a lot of reasons, including design protection (stopping your product from being cloned), supply chain security, preventing malicious updates, etc.
The question is how you can hand control to the user without endangering your legitimate commercial interests as well as the security of the rest of the fleet. Exactly how you tackle that will depend on the product.
There are two parts to "sending encrypted files": the encryption and the sending. An offline tool (e.g. PGP or age) seems necessary only when you want to decouple the two. After all, you can't do the sending with an offline tool (except insofar as you can queue up a message while offline, such as with traditional mail clients).
The question then becomes "Why decouple the sending from the encryption?"
As far as I can see, the main (only?) reason is that the communication channel used for sending doesn't align with your threat model. For instance, maybe there are multiple parties at the other end of the channel, but you only trust one of them. Then you'd need to do something like encrypt the message with that person's key.
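A minimal sketch of just that "encrypt to the one party you trust" part (using PyNaCl sealed boxes; the sending channel stays whatever it already was):

    from nacl.public import PrivateKey, SealedBox

    # The trusted party generates a keypair and shares only the public key.
    recipient_key = PrivateKey.generate()

    # Anyone on the channel can see the ciphertext; only the recipient can decrypt.
    ciphertext = SealedBox(recipient_key.public_key).encrypt(b"the log file contents")

    plaintext = SealedBox(recipient_key).decrypt(ciphertext)
    assert plaintext == b"the log file contents"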
But in the use-case you mentioned (not wanting to publicly post a log file), I don't see why that reason would hold; surely the people who would send you logs can trust Signal every bit as easily as PGP. Share your Signal username over your existing channel (the mailing list), thereby allowing these people to effectively "upgrade" their channel with you.
Sticking to the use case of serving that 0.1% of users, why can’t a service or other encrypted transport be a solution? Why doesn’t Signal fit the bill for instance?
Why not Just(TM) enforce a reproducible build process? That brings some of its own challenges, but would represent a real upgrade over building out some Swiss cheese like this.
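The enforcement check itself is trivial once builds are bit-for-bit reproducible; roughly (the build command and artifact name here are placeholders):

    import hashlib
    import subprocess

    def build_and_hash(workdir: str) -> str:
        # Placeholder build step; a real setup pins the toolchain, flags,
        # timestamps, and build paths so the output is deterministic.
        subprocess.run(["make", "clean", "all"], cwd=workdir, check=True)
        with open(f"{workdir}/firmware.bin", "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Two independent builders should agree bit-for-bit on the artifact.
    assert build_and_hash("builder_a") == build_and_hash("builder_b")

The hard part, of course, is making the build deterministic in the first place, which is where those challenges live.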