
10.0.0.0 is a /8 (24 host bits), 172.16.0.0 is a /12 (so 20), and 192.168.0.0 is a /16. Very little need to spend more than 18 bits of space to map every 'usable' private IPv4 address once per customer. Probably also fewer than 14 bits (16k) of customers to service.
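A quick sanity check with Python's ipaddress module (just a sketch of the arithmetic above):

    import ipaddress

    # RFC 1918 private ranges and their host-bit counts
    nets = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
    for n in map(ipaddress.ip_network, nets):
        print(n, 32 - n.prefixlen, "host bits,", n.num_addresses, "addresses")

    total = sum(ipaddress.ip_network(n).num_addresses for n in nets)
    print(total)  # 17,891,328 -> 25 bits if you mapped the whole space; real deployments use far less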

There are more special-use ranges I didn't know about offhand but found when looking up the 'no DHCP server' autoconfiguration range (IPv4 link-local, 169.254.0.0/16).

https://en.wikipedia.org/wiki/IPv4#Special-use_addresses


That's all true on a statement level, but doesn't make an IPv4:IPv4 NAT solution better than either VRF/encap or IPv6 mapping.

The benefit with VRF/encap is that the IPv4 packets are unmodified.

The benefit with IPv6 mapping is that you don't need to manage IPv4:IPv4 tables and have a clear boundary of concerns & zoning.

In both cases you don't give a rat's ass which prefixes the customer uses; the math/estimation you're doing there just isn't needed.
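For the IPv6-mapping case, RFC 6052-style address embedding is essentially all the 'table' there is. A minimal sketch (the per-customer /96 below is a made-up documentation prefix, not anyone's real allocation):

    import ipaddress

    def embed(v4: str, customer_prefix: str = "2001:db8:64::") -> ipaddress.IPv6Address:
        # Drop the customer's IPv4 address into the low 32 bits of their /96.
        # No IPv4:IPv4 state to manage; overlapping RFC 1918 space across
        # customers stays unambiguous because each customer gets its own prefix.
        return ipaddress.IPv6Address(
            int(ipaddress.IPv6Address(customer_prefix)) | int(ipaddress.IPv4Address(v4)))

    print(embed("192.168.1.10"))  # 2001:db8:64::c0a8:10a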


Or Office Space (warning: rated R for some language and crude situations / conversations): https://www.imdb.com/title/tt0151804

Still, it documents the typical work culture of the US in the late 1990s / early 2000s. It's sad and amazing how much of that remains the same.


It was VERY common in the spinning-rust era to pre-open (Office, etc.) applications in the background. I think the visible launch only allocated window resources and finished the job; all the hit-the-disk work had already been precached into memory while the OS was doing the slow starting-up / logging-into-the-network steps and the user was off getting a coffee or something.
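Roughly this idea, as a toy sketch (the path is hypothetical; the real preloaders were OS/vendor specific):

    import pathlib

    def warm_page_cache(paths):
        # Sequentially read each file and throw the bytes away; the point is
        # that the OS page cache now holds them, so the later "launch" never
        # has to wait on the spinning disk.
        for p in paths:
            if p.is_file():
                with open(p, "rb") as f:
                    while f.read(1 << 20):  # 1 MiB chunks
                        pass

    warm_page_cache(pathlib.Path("/opt/office/bin").glob("*"))  # hypothetical install dir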

So they'll still make me toss out my dang sunscreen.

No, they'd make you take it out if the scanner / person is unable to classify the object.

Education, real education, can be made entertaining. MythBusters and Connections (I believe it was called) both qualify, as do some historical documentaries.

Think of the LLM as a slightly lossy compression algorithm fed by various pattern classifiers that weight and bin inputs and outputs.

The user of the LLM provides a new input, which may or may not closely match the smudged-together training inputs, and gets back an output in the same general pattern as the outputs that would be expected within the training dataset.
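To caricature that view in code (a deliberately crude sketch; real transformers don't work like this, but it captures the 'bin and replay' intuition):

    from collections import defaultdict

    def bucket(text: str) -> frozenset:
        # crude feature binning: word prefixes stand in for pattern classifiers
        return frozenset(w[:4].lower() for w in text.split())

    class LossyMemorizer:
        def __init__(self):
            self.bins = defaultdict(list)

        def train(self, prompt: str, response: str):
            self.bins[bucket(prompt)].append(response)

        def generate(self, prompt: str) -> str:
            # replay the stored response whose bin overlaps the new input most
            key = bucket(prompt)
            best = max(self.bins, key=lambda k: len(k & key), default=None)
            return self.bins[best][0] if best else ""

    m = LossyMemorizer()
    m.train("how do I list files", "use ls")
    print(m.generate("how do I list the files here"))  # replays "use ls"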

We aren't anywhere near general intelligence yet.


Ignoring your last line, which is poorly defined, this view contradicts observable reality. It can’t explain an LLM’s ability to diagnose bugs in code it hasn’t seen before, exhibit a functional understanding of code it hasn’t seen before, explain what it’s seeing and doing to a human user, etc.

Functionally, on many suitably scoped tasks in areas like coding and mathematics, LLMs are already superintelligent relative to most humans - which may be part of why you’re having difficulty recognizing that.


It's roughly the same price (or even more expensive) and doesn't include Outlook... which is THE crack application for all those Windows addicts.

You could absolutely nail the document compatibility aspect and it still wouldn't be enough because of freaking Outlook.


Ten years ago I would have agreed with you, but these days... Outlook has been crapped on so much that Google Workspace is competitive, imo.

Agreed, the 'new' Outlook destroyed everything that was good about Outlook. Which wasn't even all that good, by the way; it was just the best, and that says more about the competition than about Outlook itself.

I got it backwards because I expected the counterfeit part to use a newer-process IC (less silicon area) rather than a possibly more reliable 'vintage' process, perfectly suitable for serial-connection speeds, on some long-stable spin of silicon.

Why assume newer processes for the counterfeit? They'd implement it using the least expensive, most mass-produced chips possible, which are more likely to be cut from wafers hitting the sweet spot of the feature-size / price crossover.


I wonder what sort of training data the AI was fed. It's possible that if whatever was utilized most were put together into a reference cookbook, a human could do most of the work almost as fast using more conventional searches of that data, in an overall more efficient way.

While this is true, why even bother turning on encryption and making things harder for disk data recovery services in that case?

Inform, and empower with real choices. Make it easy for end users to select an alternate key-backup method. Some potential alternatives: allow their bank to offer such a service; allow friends and family to self-host such a service; etc.
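As a toy illustration of the friends-and-family option, an n-of-n XOR split of the recovery key (just a sketch; a real product would want a proper threshold scheme such as Shamir's secret sharing):

    import secrets

    def split(key: bytes, n: int) -> list[bytes]:
        # n-1 random pads plus one share that XORs back to the key;
        # no single holder learns anything about the key on their own.
        pads = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        last = key
        for pad in pads:
            last = bytes(a ^ b for a, b in zip(last, pad))
        return pads + [last]

    def combine(shares: list[bytes]) -> bytes:
        out = bytes(len(shares[0]))
        for s in shares:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    key = secrets.token_bytes(32)
    assert combine(split(key, 3)) == key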


Stolen laptops would be my one reason here to always encrypt, even if MS / Apple has your key and can easily give it to the government. This way someone has to know a user's password / login info to steal their information along with their computer (for the average thief, anyway). They still get the laptop, but they don't get the personal information without the login information.
