Hacker News | DrewADesign's comments

If you had used the search feature you’d realize that many similar comments have already been posted on HN. Vote to close.

If only those who voted to close would bother to check whether the dup/close issue was ACTUALLY a duplicate. If only there were (substantial) penalties for incorrectly dup/closing. The vast majority of dup/closes seem to not actually be dup/closes. I really wish they would get rid of that feature. Would also prevent code rot (references to ancient versions of the software or compiler you're interested in that are no longer relevant, or solutions that have much easier fixes in modern versions of the software). Not missing StackOverflow in the least. It did not age well. (And the whole copyright thing was just toxically stupid).

You’re either arguing about semantics or missed the point they were trying to make. If it doesn’t have to be publicly reachable, why should it be publicly addressable in the first place? I can’t think of any common requirement that will be afforded to users having devices that will never need to be publicly reachable be publicly addressable. Considering most people’s use cases solely involve home networks of devices that they definitely do not want to be publicly reachable, why is needing to explicitly disallow that better for them?

In non-abstract terms, I just don’t see how that works better.


> I can’t think of any common requirement that will be afforded to users having devices that will never need to be publicly reachable be publicly addressable.

Because you do not know ahead of time which devices may have such a need, and by allowing for the possibility you open up more flexibility.

> [Residential customers] don't care about engineering, but they sure do create support tickets about broken P2P applications, such as Xbox/PS gaming applications, broken VoIP in gaming lobbies, failure of SIP client to punch through etc. All these problems don't exist on native routed (and static) IPv6.

> In order for P2P to work as close as possible to routed IPv6 in NATted IPv4, we had to deploy a bunch of workarounds such as EIM-NAT to allow TCP/UDP P2P punching to work both ways, and we had to allow hairpinning on the CGNAT device to allow intra-CGNAT traffic to work between two CGNAT clients; as TURN can only detect the public-facing IP:Port, hairpinning allows 100.64.0.0/10 clients to talk to each other over the CGNATted public IP:Port.

* https://blog.ipspace.net/2025/03/response-end-to-end-connect...

By having (a) a public address, and (b) a CPE that supports PCP/IGD hole punching, you eliminate a whole swath of infrastructure (ICE/TURN/etc) and kludges.

When it was first released, Skype was peer-to-peer, but because of NAT, "super nodes" had to be invented in its architecture so that the clients/peers could have someone to 'bounce' off of to connect. But because of the prevalence of NAT, central servers are now the norm.

A lot of folks on HN complain about centralization and concentration on the Internet, but how can it be otherwise when folks push back against technologies that would allow more peer-to-peer architectures?


It's baffling to argue that NAT is the real driver of centralization for internet technologies.

It surely was a big factor.

When the internet finally became popular, hosting a website on your own machine had already become infeasible.


What do you mean by popular? I hosted a site on a home machine in the early teens. If you don't know how to do that with NAT, you should not have a web server under your control exposed to the internet.

The early teens didn’t have huge proliferation of ISPs using CGNATs.

These days ISPs can’t get hold of new IPv4 blocks, and increasingly don’t provide public IP addresses to residential routers unless customers pay extra for that lowly single IPv4 address.

Hosting a website behind a NAT isn’t as trivial as it used to be, and for many it’s now impossible without IPv6.


> for many it’s now impossible without IPv6.

It's impossible with ipv6 either. ISPs block incoming connections on ipv6 for residential addresses.


And against the ToS of every US residential ISP I’ve looked at.

> It's baffling to argue that NAT is the real driver of centralization for internet technologies.

It doesn't help.


What is then?

Capitalism, essentially. Companies can make more money from centralized control over systems than from truly distributed systems, and customers are suckers for the simplicity of delegating their needs to single providers.

The reason Google bought and destroyed dejanews.com, for example (try visiting that site) was to weaken one of the distributed sources of competition. Similar for RSS.


> by allowing for the possibility you open up more flexibility.

The problem is that flexibility is often the enemy of security, and that’s certainly true here. Corporate networks don’t want to allow even the possibility of devices that are supposed to be private being publicly addressable. Arguing that it’s “simpler” or “more flexible” is like arguing that we don’t need firewalls, for the same reasons. And in fact, that argument used to be made quite regularly. It’s just that no-one who deals with security has ever taken it seriously.


I'd like to know the average number of broadband customers that make support tickets because of NAT. I'll bet it's far less than 1%. And you really think NAT, rather than SV betting huge on cloud services and surveillance capitalism, was the reason that everything is centralized? Come on...

To be fair, your old feature phones don’t do the same thing as a modern smart phone— you just aren’t interested in doing things they aren’t capable of. I have very different use cases.

Digital Ocean didn’t even have an ipv6 address on by default in the droplet I created last week. It’s just a switch to flip, but I’ll bet the support costs were significant from hobbyists/enthusiasts not realizing they also needed to write firewall rules for ipv6, make sure ports weren’t open for databases, and things like that.

My memory of IPv6 is getting waves of support tickets from people who took their (already questionable) practice of blocking ICMP on IPv4, blocked ICMPv6, and then got confused when IPv6 stopped working.

The legacy of the Ping of Death and redirect abuse still looms over people that may not have been born yet :)

It's a "just doesn't work" experience every time I try it, and I don't get any value from it; it's not like there's anything I can connect to on IPv6 that I can't connect to on IPv4.

My ISP has finally mastered providing me with reliable albeit slow DSL. Fiber would change my life, there just isn't any point in asking for IPv6.

Also note those bloated packets are death for many modern applications like VoIP.


Exactly. Spectrum delivers good IPv6 service in my area. I tried it when I upgraded my gateway. All of my devices are assigned 4 IPv6 IPs, hostnames are replaced by auto assigned stuff from the ISP, and lots of random things don’t work.

I went from being pumped to learn more to realizing I’m going to invest a lot of time and could not identify any tangible benefit.


The biggest tangible benefit is you don't need to worry about NAT port mapping any more. Every device can have a public address, and you can have multiple servers exposing services on the same port without a conflict.

(The flip side is having a network-level firewall is more important than ever.)

You also don't have to worry about running a DHCP server anymore, at least on small networks. The simplicity of SLAAC is a breath of fresh air, and removes DHCP as a single point of failure for a network.


So the benefit is that you don't need to worry about NAT for the couple of port-forwarded services you may use (which might well even use UPnP for auto setup), but the tradeoff is you now need to think about full individual firewall protection for every device on your network?

I'll take full security by default and forward a couple of ports, thank you!


Few people care about exposing a server in the first place, even fewer care about multiple servers on a single port.

> All of my devices are assigned 4 IPv6 IPs

Loopback, link local and network assigned. What's the problem? Your ipv4 hosts can already reach themselves through millions of addresses.

> hostnames are replaced by auto assigned stuff from the ISP

Hostnames replaced? IPv6 doesn't do DNS...

> lots of random things don’t work.

Lots of random things also don't work on ipv4. :)


You can maybe connect to everyone over IPv4, but chances are that path is strictly worse (in terms of latency, P2P reachability, congestion, etc.) than a v6 one would be.

For example, two IPv6 peers can often trivially reach each other even behind firewalls (using UDP hole punching). For NAT, having too restrictive a NAT gateway on either side can easily prevent reachability.
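The core of the trick is just that both sides send at the same time, so each outbound packet opens the return path for the other's. Here's a minimal sketch in Python, with two loopback sockets standing in for the peers; no NAT is involved here, and in real life each peer would learn the other's address:port from a rendezvous server, so this only illustrates the simultaneous-send pattern:

```python
import socket

# Two UDP "peers" on loopback. In a real hole punch these would sit
# behind separate NATs/firewalls and learn each other's public
# address:port from a rendezvous server; the send-both-ways step is
# the same.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a_addr, b_addr = a.getsockname(), b.getsockname()

# Each peer sends first (punching the hole), then receives.
a.sendto(b"hello from a", b_addr)
b.sendto(b"hello from b", a_addr)

a.settimeout(1.0)
b.settimeout(1.0)
msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_b.decode())  # hello from a
print(msg_at_a.decode())  # hello from b
```

With actual NATs in the path, the first packet in each direction may be dropped before the mapping exists, which is why real implementations retry the punch a few times.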


I have tailscale on all my mobile/portable devices I use away from home. It punches holes so I don't have to, even makes DNS work for my tailnet in a way I've never been able to get to work the way I want the normal way.

Yes, Tailscale is great, and it does manage to traverse pretty much every firewall or NAT in my experience as well. Quite often, it even does so using IPv6 :)

> those bloated packets are death for many modern applications like VoIP.

Huh? The packet sizes aren’t that much different and VoIP is hardly a taxing application at this point anyway. VoIP needs barely over dial-up level bandwidth.


It's not the bandwidth, it's the latency. Because of the latency you need to pack a small amount of data into each VoIP packet, so the extra header size of IPv6 stings more than it would for ordinary HTTP traffic.

https://www.nojitter.com/telecommunication-technology/ipv6-i...


I have a lot of trouble believing IPv6 matters here. Your link only talks about bandwidth (an extra 8kbps) and doesn’t even mention latency.

Edit: NAT also adds measurable latency. If anything I’d think avoiding NAT might actually make IPv6 lower latency than IPv4 on average.
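The 8 kbps figure checks out with simple arithmetic, assuming G.711 with 20 ms packetization (a common default, not something stated upthread): the audio payload is 160 bytes per packet at 50 packets per second, and the only per-packet difference between stacks is the 20-byte IPv4 header versus the 40-byte IPv6 header.

```python
# Back-of-envelope VoIP overhead, assuming G.711 @ 20 ms packetization.
PAYLOAD = 160        # bytes of audio per packet (64 kbps * 20 ms)
RTP, UDP = 12, 8     # header bytes
IPV4_HDR, IPV6_HDR = 20, 40
PPS = 50             # packets per second (1 s / 20 ms)

def kbps(ip_header):
    packet = PAYLOAD + RTP + UDP + ip_header
    return packet * PPS * 8 / 1000

v4, v6 = kbps(IPV4_HDR), kbps(IPV6_HDR)
print(v4, v6, v6 - v4)  # 80.0 88.0 8.0 -> the "extra 8 kbps"
```

So the delta is a fixed 8 kbps per stream regardless of codec, which is noise on any modern link.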


Last time I looked at Digital Ocean, they had completely missed the purpose of IPv6 and would only assign a droplet a /124, and even then only as a fixed address, as if they were worried we were going to run out of addresses.

But really, what's the point of giving half an internet's worth of addresses to every machine? I never understood that part of IPv6.

I think it would have been better having shorter addresses and not waste so many on every endpoint.


Because 2^128 is too big to be reasonably filled even if you give an IP address to every grain of sand. 64 bits is good enough for network routing, and 64 bits for the host to auto-configure an IP address is a bonus feature. The reason for 64 bits is that it's large enough to pick an ephemeral random number without collisions, and it can fit your 48-bit MAC address if you want a consistent identifier.

With a fixed-size host identifier, compared to IPv4's variable-size host identifier, network renumbering becomes easier. If you separate out the host part of the IP address, a network operator can change IP ranges by simply replacing the top 64 bits with prefix translation, and other computers can still route to a host by its unique bottom 64 bits in the new network.

This is what you do if you start with a clean sheet and design a protocol where you don't need to put address scarcity as the first priority.
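For the curious, the MAC-embedding step (modified EUI-64) is purely mechanical: flip the universal/local bit in the first octet and splice ff:fe into the middle of the 48-bit MAC to get a 64-bit interface identifier. A sketch, using a made-up MAC and the 2001:db8:: documentation prefix:

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64: 48-bit MAC -> 64-bit IPv6 interface identifier."""
    octets = bytes(int(b, 16) for b in mac.split(":"))
    assert len(octets) == 6
    # Flip the universal/local bit (bit 1 of the first octet),
    # then insert 0xff, 0xfe between the OUI and the device part.
    iid = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]
    return ":".join(f"{iid[i]:02x}{iid[i+1]:02x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
# Prepend any /64 prefix to form a full address:
print("2001:db8:1234:5678:" + eui64_interface_id("00:11:22:33:44:55"))
```

Because this makes a device trackable across networks, SLAAC privacy extensions (RFC 4941) generate random interface identifiers instead, which speaks to the sibling comment's concern.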


Thanks for this. It's pointless to argue, but I wonder if shifting from 32 to 64 bits, instead of 128, would have seen faster uptake.

Aside: isn't embedding MAC addrs in one's IP address a bad idea?


Yeah, the current system is really weird, with many address assigning services refusing to create smaller pools. I really hope that's fixed one day. We already got an RFC saying effectively "going back to classful ranges was stupid" https://datatracker.ietf.org/doc/html/rfc6177 (for over a decade...)

Point of fact, it's giving 4 billion internets' worth of addresses to every local subnet.

You will sometimes see admins complain that IPv6 demands you allow ICMPv6 (at least the Packet Too Big messages) through the firewall; they want to block it because they're worried that people on the internet will start doing pingscans of their network. This is because they do not understand what 2^64 is.


And won't that allow pingscans?

Do the math on 2^64 possible host addresses: multiply by the length of an ICMPv6 Echo Request, then divide by available bandwidth to determine how long it would take you to scan a single subnet.

Hint: the ICMPv6 packet is no shorter than 48 bytes and there are 1.8446744e+19 addresses to scan.
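Running that arithmetic, assuming (arbitrarily, for illustration) a 10 Gbit/s link saturated with minimum-size probes:

```python
ADDRESSES = 2**64          # hosts in a single /64 subnet
PACKET_BYTES = 48          # minimum ICMPv6 Echo Request size
LINK_BPS = 10e9            # assumed 10 Gbit/s link, running at wire speed

total_bits = ADDRESSES * PACKET_BYTES * 8
seconds = total_bits / LINK_BPS
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")  # roughly 22,000 years for one /64
```

Even at line rate on a fast link, exhaustively ping-scanning one /64 takes tens of thousands of years, so blanket ICMPv6 blocking buys nothing against scans.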


"Simple" VPS providers like DigitalOcean, etc. really need to get the hell onboard with network virtualization. It's 2026, I don't want to be dealing with individual hosts just being allocated a damned /64 either. Give me a /48, attach it to a virtual network, let me split it into /64's and attach VM's to it - if I want something other than SLACC addresses (or multiple per VM) then I can deal with manually assigning them.

To be fair, the "big" cloud providers can't seem to figure this shit out, either. It's mind-boggling. I won't claim I've gone through the headache of banging out all the configuration to get FRRouting and my RouterOS gear happily doing the EVPN-VXLAN dance, but I'm also not Amazon, Google, or Microsoft...


I use IPv6 on my authoritative DNS servers and that's basically it. To your point keeping it disabled on all my hobby crap keeps everything simple for me. If someone can not reach IPv4 then something is broken on their end.

You’re absolutely right! You astutely observed that 2025 was a year with many LLMs and this was a selection of waypoints, summarized in a helpful timeline.

That’s what most non-tech-person’s year in LLMs looked like.

Hopefully 2026 will be the year where companies realize that implementing intrusive chatbots can’t make better ::waving hands:: ya know… UX or whatever.

For some reason, they think it’s helpful to distractingly pop up chat windows on their site because their customers need textual kindergarten handholding to … I don’t know… find the ideal pocket comb for their unique pocket/hair situation, or have an unlikely question about that aerosol pan release spray that a chatbot could actually answer. Well, my dog also thinks she’s helping me by attacking the vacuum when I’m trying to clean. Both ideas are equally valid.

And spending a bazillion dollars implementing it doesn’t mean your customers won’t hate it. And forcing your customers into pathways they hate because of your sunk costs mindset means it will never stop costing you more money than it makes.

I just hope companies start being honest with themselves about whether or not these things are good, bad, or absolutely abysmal for the customer experience and cut their losses when it makes sense.


They need to be intrusive and shoved in your face. This way, they can say they have a lot of people using them, which is a good and useful metric.

I took the good with the bad: the AI-assisted coding tools are a multiplier; Google AI Overviews in search results are half-baked (at best) and often just factually wrong; AI was put in the Instagram search bar for no practical purpose; etc.

Yeah totally. The point I’m trying to make, however, is that most people don’t code, so they didn’t get the multiplier, and only got the mediocre-to-bad, with a handful of them doing things like generating dumb images for a boost. I think that’s why a lot of people in the software business are utterly bewildered when customers aren’t jumping for joy when they release a new AI “feature.” I think a lot of what gets classified as cynical ceo enshittification is really people ignoring basic good design practices, like making sure you’re effectively helping customers solve an actual problem in a context and with methods they, at least, don’t hate. Especially on the smaller scale, like indie app developers who probably get more out of AI than most, they really think people are going to like new AI features simply because they’re new AI features. They’re very wrong.

As much as I side with you on this one, I really don't think this submission is the right place to rant about it.

this thread is for pro-LLM propaganda only.

do not acknowledge that everyone in the world thinks this shit is a complete and total garbage fire


> For some reason, they think its helpful to distractingly pop up chat windows on their site...

Companies have been doing this "live support" nonsense far longer than LLMs have been popular.


There was also point source pollution before the Industrial Revolution. Useless, forced, irritating chat was nowhere close to as aggressive or pervasive as it is now. It used to be a niche feature of some CRMs and now it’s everywhere.

I’m on LinkedIn Learning digging into something really technical and practical, and it’s constantly pushing the chat flyout with useless pre-populated prompts like “what are the main takeaways from this video.” And they moved their main page search to a little icon on the title bar, so what was the obvious, primary, central search field for years now sneakily sends a prompt to their fucking chatbot.


Maybe there should be some kind of annual ISO privacy certification for companies that resell any customer data in any form. Then make data customers (e.g. marketing agencies, major retailers) and data collectors (e.g. those that collect telemetry data from libraries included in their app, auto manufacturers, wireless providers) civilly liable for any privacy violations dealing with uncertified brokers, making sure there’s an uncapped modifier based on the company’s annual revenue. That seems like it puts the bulk of the compliance responsibility on the parties that can do the most wide-scale damage with unethical and dodgy practices, while leaving some out there for others that need incentive to not ignore the rules.

Haven’t really thought this through and I’m not a policy wonk… just spitballin’.


Bonding and/or insurance.

Make this cost and practices will change.


Yeah good call.

I would hope for something stronger. Put a currency value on some kinds of info. To store my SSN and full name and military ID totals 20 units. Maybe a full name and home address is 15 units. If I agree to give you my info, you agree that I can keep the CEOs home address, stored as safely and hygienically as I can. Part of our contract mandates when we mutually delete. Because of course we trust each other.

Sure, but that will never happen, and we shouldn't let perfect be the enemy of good.

> Maybe there should be some kind of annual ISO privacy certification for companies that resell any customer data in any form

Why is this better than requiring deletion?


For starters, it provides protection and accountability for those who don't have the prior presence of mind to demand deletion.

An act which mandated deletion of data in all cases once business needs are addressed (often 30-90 days for much data) might address your question. But the Delete Act isn't that.


> it provides protection and accountability for those who don't have the prior presence of mind to demand deletion

Perhaps. I just see another compliance-industrial tax on consumers backed up by a nonsense checklist.

> act which mandated deletion in all cases for data once business needs are addressed (often 30--90 days for much data), might address your question

Or opt out by default.

Perhaps California should give counties the power to do that. Then we can watch the experiment for unintended consequences.


I work in a specialty in an industry that requires a fairly stringent annual ISO certification. Even preparing for the audit is a completely worthwhile exercise in seeing things that maybe got swept under the rug or left by the wayside. Customers having clearly defined criteria to prove, in court or even business negotiations, that our lapse was negligent or in bad faith keeps us from straying too far to begin with. Our having clear criteria to show that we followed industry guidelines shuts down customers trying to accuse us of something in bad faith, or even trying to make a mountain out of a molehill to get leverage in a contract negotiation or something.

I’ll bet most of it depends on how good the certification is. My bosses think it’s annoying, and sure not 100% of the requirements make a difference for us, but most do, and from my vantage point, I can see how much of a difference it makes.


This is a family-run business with about 20 employees BTW. Not some red tape behemoth.

> compliance-industrial tax on consumers backed up by a nonsense checklist.

That's...a really weird phrase. Efficient regulation isn't a tax on consumers, it's protection against unchecked immoral corporations.


Can’t even remember if Zuck testified before the Congress or Senate, but his super weird fucking haircut on that day is indelibly etched into my brain. So a stylist is probably a solid strategic choice for him.

Legend has it that Meta was having difficulty making their metaverse Mii characters look human, so Zuck solved that problem by making himself look like a Mii.

That was his biggest win in quite some time if true. Really hit the mark. My guess is he refused to acknowledge his receding hairline so he just kept having hard "bangs" cut further and further back.

Industry-wide? Looks damn near pan-industry to me.

And most normal people are fed up with it. Nobody understands why most of their apps suddenly have chatbots in them now.

When a company whose services I use announces that they're adding AI to them, my first response is always to wonder how I can turn it off.

I don’t even bother looking anymore because it’s rarely possible.

This guy would probably cancel, anyway, because his friend didn't want to pay him for driving them to the restaurant.

I'm giving them the pleasure of being in my incredible presence; why should I do that for free?

I'm reminded of this story in the New Yorker: https://archive.ph/wNydh

I've seen many people muse that we might be in peak dystopia over the years. I wish any of them were right, but none were.
