That particular benefit has no value if you still need to support v4.

It's almost a self-inflicted tragedy of the commons or reverse network-effect.

Adopting IPv6 doesn't alleviate the pain of IPv4 exhaustion if you still need to support dual-stack.


It still helps. I have a 1U in a colo which gives me a /64 for ipv6 and ~5 addresses for ipv4. I just set up a dual stack kubernetes cluster on 6 virtual machines. When I want to ssh into one of the machines, my options are either:

  1. Use IPv6 which works and goes directly to the virtual machine because each virtual machine grabs its own address from one of my 18446744073709551616 addresses.
  2. Use IPv4 and either have to do a jumphost or do port forwarding, giving each virtual machine its own port which forwards to port 22 on the virtual machine.
  3. Use a VPN.
I have all 3 working, but #1 was significantly less setup and works the best.
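
For illustration, options 1 and 2 in ~/.ssh/config look roughly like this (all addresses, names, and ports here are made up; documentation ranges used):

  # Option 1: straight to the VM over IPv6
  Host node1
    HostName 2001:db8:1234:5678::10

  # Option 2a: over IPv4 via a jumphost
  Host node1-v4
    HostName 192.168.122.10        # VM's private v4 address
    ProxyJump me@203.0.113.5       # one of the ~5 public v4 addresses

  # Option 2b: port forwarding on the host instead
  Host node1-fwd
    HostName 203.0.113.5
    Port 2201                      # host forwards 2201 -> VM port 22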

Also being able to generate unique ULA subnets is super nice.
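
(ULAs per RFC 4193 are fd00::/8 plus a 40-bit random Global ID, which is what makes collisions between sites so unlikely. A quick sketch of generating one:)

  import ipaddress, secrets

  # RFC 4193: a ULA /48 = the fd00::/8 prefix + a random 40-bit Global ID
  global_id = secrets.randbits(40)
  ula = ipaddress.IPv6Network(((0xfd << 120) | (global_id << 80), 48))
  print(ula)  # e.g. fd6b:1c29:7a40::/48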


Really, using port 22 at all is ill-advised anyway, because you will get constant nuisance brute-force attacks (accomplishing nothing, because you're using keys or certificates, I hope) that still eat up cycles on the crypto handshake.

By that same logic, using IPv4 is ill-advised because I could easily give the ssh endpoints their own IPv6 addresses, avoiding the need to hide behind non-standard ports. Scanning through 18446744073709551616 addresses is going to be a lot slower than scanning through 65536 ports.

You don't put your server IP in your DNS? You type the IPv6 address every time?

A lot of servers expose something public so they can be found. Otherwise what's the point of being publicly accessible?


You can't just list out all the DNS names. The three ways that names get discovered are:

1. You listen on IPv4, someone probes the whole IPv4 space, and your server announces "Hi, I am web123.example.com" or similar in its response

2. You have HTTPS on the server and the HTTPS address ends up in the certificate transparency logs.

3. You have a public service on that server and announce the address somewhere.

But when you have billions of IP addresses, why does SSH need to listen on the same address as HTTPS or anything you're running publicly? It's also infeasible to probe the entirety of IPv6 space the way you can probe all of IPv4, even though we're only assigning addresses in 3/65535 of it right now.


I've had SSH open on a static v6 address that isn't even SLAAC or temporary (it's not quite my /58's ::1, but not far off), and it's in DNS, and in 8 years I have not seen a single scan or connection attempt over IPv6 (other than my own). This is not to say there is no risk, but it really is a night-and-day difference.

Really? I get somewhere in the region of none to barely any, depending on the server.

I mean, yes, you'll get a constant stream of them on IPv4, but why would you run a server on v4 unless you absolutely needed to? The address space is so small you can scan every IP in 5 minutes per port, and if you have my v4 address you can enumerate every single server I'm running just by scanning 65k ports.

Meanwhile, on v6, even the latter of those takes a thousand years. How would people even find the server?
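
Rough numbers, assuming a masscan-class scanner doing 10 million probes per second:

  probes_per_sec = 10_000_000                    # assumed scanner speed
  year = 365 * 24 * 3600

  v4_one_port  = 2**32 / probes_per_sec          # ~7 minutes
  v4_all_ports = 2**48 / probes_per_sec / year   # ~0.9 years
  v6_one_port  = 2**64 / probes_per_sec / year   # ~58,000 years, one port, ONE /64

And that /64 figure is a single subnet; the full v6 space is another 2^64 times that.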


If you are an ISP running dual stack (IPv4 with NAT, plus IPv6), then the more connections and traffic that happen via IPv6, the better, because that traffic doesn't have to go through the NAT infrastructure, which is more expensive. The cost scales with traffic (each packet needs its header modified) and with the number of parallel open connections (each public v4 address gives you only ~65k port numbers, and each mapping needs to be stored in RAM and databases).
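
Putting rough numbers on it (purely illustrative, not from any real deployment):

  # CGNAT back-of-the-envelope
  public_addrs  = 1024            # say, a /22 of public v4
  usable_ports  = 65536 - 1024    # skip the privileged port range
  sharing_ratio = 64              # subscribers per public address

  ports_per_sub = usable_ports // sharing_ratio  # ~1008 concurrent flows each
  max_nat_state = public_addrs * usable_ports    # ~66 million table entries

Every one of those entries has to live in the NAT box's memory and be consulted to rewrite every packet, which is where the cost comes from.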

NAT-accelerated hardware exists almost everywhere now. But yes, NAT is a pain overall, and CGNAT is even more of a problem.

I was mostly thinking about CGNAT instead of NAT around your home network.

There is a talk by Dmitriy Melnik at RIPE 91 about the costs for ISPs of not adopting IPv6 vs. adopting it (the relevant part starts at 9:55).

https://ripe91.ripe.net/programme/meeting-plan/sessions/37/8...


Not really, this is only true for mobile devices.

MT7621 devices include hardware NAT, and so does anything Qualcomm from the recent past. Most home WiFi 5 and above routers can do hardware NAT just fine; hardware NAT is what allows cheap, old CPUs to be used for CPE. ISP-side hardware is a different story. There are decent routers that can do this which don't cost a lot:

https://www.reddit.com/r/openwrt/comments/1lopamn/current_hi...


> Not really, this is only true for mobile devices.

Tell that to my fixed-line provider, with their CGNAT... And it's just about every provider in Germany pulling that crap. Oh, and a dynamic IPv6 prefix too, because we can't have you running any servers!

Yes, there are plenty of ways to bypass it, but that's what you get when ISPs are still stuck in a 1990s attitude, with dynamic IPv4/IPv6, limited upload (1/3 to 1/5 of your download), etc...


> Adopting IPv6 doesn't alleviate the pain of IPv4 exhaustion if you still need to support dual-stack.

Sure it does: the more server-side stuff has IPv6 the fewer IPv4 addresses you need.

If you have money (or were around early in the IPv4 land grab) you have plenty of IPv4 addresses, so you can give each customer one for NATing. But if you don't have money to spend (many community-based ISPs), you have to start sharing addresses (16:1 to 64:1 is common in MAP-T deployments). You also have to spend CapEx on CG-NAT hardware to handle the traffic loads.

Some of the highest bandwidth loads on the Internet are for video, and Youtube/Google, Netflix, and MetaBook all support IPv6: that's a lot of load that can skip the CG-NAT if the client is given an IPv6 address.

If you can go from 1:1 to 16:1 (or higher) because so few things still use IPv4, every ISP can reduce its legacy addressing needs.


On company/university wifi networks, v6 cuts your v4 DHCP pool address usage by something like 70%, without hurting connectivity to v4 hosts.

You can run a v6-first network with a tiny bit of v4 sprinkled in at the edge where it's needed. The tech to do this is mature and well understood.

> A sharp knife is SAFER than a dull one.

I do a lot of cooking and own quite a few kitchen knives, most of which have bitten me at some point. I understand the idea around sharp knives being safer...but I don't agree.

If a razor-sharp 210mm Japanese carbon steel knife touches your finger, it's split open and might need stitches or glue. A less-sharp knife needs more weight behind it to cut effectively, which could in principle completely sever a finger, but a simple slice is a much more likely scenario than your finger being fully under the knife to the point where it's effectively a digit guillotine.


Knife sharpness safety is a bell curve.

If your knife is sharp enough you will eventually cut the shit out of yourself because it slices so easily. You're essentially waving around an 8-inch razor blade.

If your knife is dull enough you will eventually cut the shit out of yourself because it takes so much effort to cut that a slip becomes a stab. The amount of effort you have to put in to do basic stuff like cutting carrots can be high enough that you give up some control of the blade.

A knife at a good level of sharpness will cut with reasonable effort but not be a giant razor blade. I think for most people this is likely the safest level of sharpness.


Oh man! This brings back memories. I had a new knife set in a new place and was dealing with sub-20°C weather for the first time. The cold would numb my hands, the blade would cut, and I would only notice a few minutes later. I spent those first couple of months constantly putting band-aids on. I blamed it entirely on the winter.

It's been almost 1.5 years since the last cut, and I now realize what was going on.

Edit: Now that I see this thread has gone sort of sharp-vs-dull: I still use the slide sharpener and sharpen the knives regularly. The factory sharpness was just too much for me. I think a knife sharpened to an appropriate level is the way to go, and a dull one is probably as dangerous as an overly sharp one.


> If your knife is dull enough you will eventually cut the shit out of yourself because it takes so much effort to cut that a slip becomes a stab.

Also, a dull knife will, 100%, slip more in use than a sharper knife.


Absolutely.

There's no reason your fingers should be under the blade while it's in motion. That's just poor technique.

If your blade is dull enough you’ll be using excess force to cut. People cut themselves regularly because they are using too much force and the thing they are trying to cut shifts and suddenly they have a finger under the blade. Or they are working with a dull paring knife and having to use too much force and it suddenly cuts and keeps going into their thumb.

Not everyone is a chef. I guess 80% of people in the world have poor technique for cutting stuff but they mostly get away not cutting themselves because they have dull knives.

I recently had to glue my thumb back on after I lopped it off with a Japanese knife while I was dicing vegetables. At my age, I have probably moved that knife millions of times and only cut myself once. Nobody can have a perfect record.

Had a friend do that recently. Knife freshly sharpened, took a dime sized hunk of his thumb right off. They stitched it back on, mostly to protect what was left underneath while it healed.

I agree.

There was a long thread here where people were arguing about this topic.

My take is that people saying sharp knives are safer don’t understand how average people are using knives.

Totally different from a restaurant setting, or a 'self-proclaimed chef' setting, where you either chop loads of stuff fast or you get angry customers, or where you take pride in your chopping and slicing skills.

The worst offenders were people sharpening knives for other people and then being surprised that those people cut themselves with the sharp knives... none of the stories involved a person who was perfectly happy with their dull knife cutting themselves with that dull knife.


> My take is that people saying sharp knives are safer don’t understand how average people are using knives.

Sharp knives are safer.

Bad knife technique is unsafe, regardless of sharpness, but with a dull knife you lack control even with good technique.

> none of the stories involved a person who was perfectly happy with their dull knife cutting themselves with that dull knife.

People that are perfectly happy with dull knives cut themselves with those dull knives all the time. Sometimes, that's the spur for people learning how to use a knife and becoming unhappy with dull knives.


I get where you're coming from, but at the same time, if you use a potentially dangerous tool you should learn how to use it properly.

Just using a claw grip will significantly reduce your chance of injury.

I have seen more injuries from dull knives slipping on vegetable skins than from too-sharp knives.

That said, the mirror shine finish some enthusiasts go for is indeed over the top.


I'm skeptical of the claim too. All the arguments are logical but there doesn't seem to be any evidence. Have there been any studies?

I googled around and the best I could find was https://skeptics.stackexchange.com/questions/23661/are-sharp... (11 years ago) which could only speculate that sharp knives reduce RSI.


You can also use passkeys so you aren't tied to a centralized SSO provider.


... after I sign up for the service with a Google/Microsoft/whatever account, I suppose.


But if you make something that's as smart as a sloppy intern and costs even a small amount less, you're making an unbelievable amount of money off it.


Are you sure that wouldn't just lower the market price for interns, so that the intern still ends up at least as cheap?

Also, an intern gains experience over time, while their wage doesn't automatically rise with it.


An intern needs breaks and vacation days. AGI doesn't.


When our main comparison is cost per work hour, that doesn't matter a whole lot.


You could just disable cdn/caching.


I generally prefer Tailscale and trust them more than Cloudflare not to rug-pull me on pricing, but the two features that push me towards cloudflared are custom domains and client-less access. I could probably set it up with Caddy and some plugins, but then I still need to expose the service and port forward.


I'm definitely not trying to dissuade anyone from using Cloudflare, just making sure people realize the potential privacy implications of doing so. It isn't always obvious, even though some of the features pretty much require it, at least if they're handled entirely on Cloudflare's side. (You could implement similar features split between the endpoint and the coordination server without requiring full TLS stripping. Maybe Tailscale will support some of those as features of the `serve` server?)

> client-less access

JFYI, Tailscale Funnels also work for this, though depending on your use case it may not be ideal. Ultimately, Cloudflare does handle this use case a bit better.


Tailscale funnels do work, but it's public only. No auth.


Yeah, because the auth can't be done on Tailscale's end if they don't terminate the TLS connection. However, it is still possible to use an authentication proxy in this situation. Many homelab and small to medium size company setups use OAuth2 Proxy, often with Dex. If you wanted to get fancier, you could use Tailscale for identity when behind the firewall and OAuth2 Proxy when outside the firewall.
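
As a sketch of that kind of setup (all names and endpoints here are hypothetical, and Dex is assumed to already be serving OIDC):

  # oauth2-proxy sitting in front of the service the funnel exposes
  oauth2-proxy \
    --provider=oidc \
    --oidc-issuer-url=https://dex.example.com \
    --client-id=my-app \
    --client-secret="$DEX_CLIENT_SECRET" \
    --cookie-secret="$(openssl rand -hex 16)" \
    --email-domain='*' \
    --http-address=0.0.0.0:4180 \
    --upstream=http://127.0.0.1:8080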

This may seem like a lot of effort and it is definitely not nothing, but Cloudflare Tunnels also has a decent number of moving parts and frankly their authentication gateway leaves a bit to be desired for home users.


Tailscale 'serve' works well at my startup. You still get SSL and DNS, but unlike funnel it's limited to your Tailscale network.


> I could probably set it up with Caddy and some plugins, but then I still need to expose the service and port forward.

Not so! I have custom domains on my Tailnet with Caddy. You just need to set up Caddy to perform the ACME DNS challenge to get certs for your domain (ironically, I use Cloudflare for DNS because they make it very easy to set this up with just an API token). Beyond a Caddy build that includes your DNS provider's module, no plugins or open ports are needed.
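
The relevant Caddyfile bit looks roughly like this (hypothetical names; the backend address is a made-up tailnet IP):

  myservice.example.com {
    tls {
      dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 100.64.0.10:8080   # the service's Tailscale address
  }

Because the cert comes from a DNS-01 challenge, nothing ever has to listen on a public port.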


That's a fair personal decision, but if I had to put money on it, between a new company that raised 160 million of VC funding this year alone and an established, profitable company with a track record of offering free services for many years, I'd put my money on the latter.


I have the same Z13 gen one, but my M1 is a Pro.

I do absolutely love the Z13 and prefer it most of the time... but I definitely would not call the battery life "all day" or even "almost all day".


Have you configured your tunables with powertop and set amd_pstate=active (and/or set up TLP)? If not, give that a try; it's a game changer.
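
For reference, roughly the moving parts (assuming a Debian-flavored distro with GRUB; adjust for your setup):

  # one-shot: apply powertop's suggested tunables
  sudo powertop --auto-tune

  # persistent: add amd_pstate=active to GRUB_CMDLINE_LINUX_DEFAULT
  # in /etc/default/grub, then:
  sudo update-grub

  # or just let TLP handle power management:
  sudo apt install tlp && sudo systemctl enable --now tlp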

Also, by all day I meant a working day (8+ hours), which is good enough for me to take my laptop off-site and work without a charger. It still falls a good bit short of an Apple Silicon MacBook, of course; nothing can really compete with that until we get a decent Linux-native ARM notebook (unless you count Chromebooks).


> Have you configured your tunables with powertop and set amd_pstate=active (and/or set up TLP)? If not, give that a try; it's a game changer.

The nice thing about Macs is that you don't really have to do that...


Web performance is probably already valued about as efficiently as it needs to be.

The numbers mentioned in the article are...quite egregious.

> Oh, Just 2.4 Megabytes. Out of a chonky 4 MB payload. Assuming they could rebuild the site to hit Alex Russell's target of 450 KB, that's conservatively $435,000,000 per year. Not too bad. And this is likely a profound underestimation of the real gain

This is not a "profound underestimation"; if anything, it's off by several orders of magnitude in the other direction. Kroger is not going to save anywhere even remotely close to $435 million by reducing their javascript bundle size.

Kroger had $3.6-$3.8 billion in allocated capex in 2024. There is no shot javascript bundle size accounts for roughly a tenth of their *total* allocated capex.

I work with a number of companies of similar size, and their entire cloud spend isn't $435,000,000, and bandwidth (or even networking all up) isn't in their top 10 line items.

A leak showed that Walmart spent $580m a year on Azure: https://www.datacenterdynamics.com/en/news/walmart-spent-580...

These numbers are so insanely inflated, I think the author needs to rethink their entire premise.


It's not just their direct cost; it's also the loss of revenue. The author wasn't arguing that they could save $435 million in server costs.

Instead, they were arguing that in addition to saving maybe a million or two in server costs, they would gain an additional $435 million in revenue because fewer people would leave their website.


Bizarre that this had to be spelled out...


You should be able to get this done with OBS.

Set OBS up so you're streaming a window of this application.

Go into OBS settings, go to "Stream", and set it to Custom.

For the server, use:

  srt://127.0.0.1:7777?mode=listener&timeout=50000&transtype=live

Then in VLC, open a network stream for:

  srt://127.0.0.1:7777


This article focuses on the shortcoming that LLMs are often wrong, but fails to consider the value LLMs provide and just how large an opportunity that value represents.

If you look at any company on earth, especially large ones, they all share the same line item as their biggest expense: labor. Any technology that can reduce that cost represents an overwhelmingly huge opportunity.

