
If those other applications use their own local GPS clocks, what is the significance of NIST (and the 5μs inaccuracy) in their scenario?


GPS gets its time from NIST (though during this incident they failed over to another NIST site, so it wasn't impacted).


That is not correct at all. How did you arrive at that conclusion?

GPS has its own independent timescale called GPS Time. GPS Time is generated and maintained by atomic clocks (cesium and rubidium) on board the GPS satellites.


It has its own timescale, but that still traces back to NIST.

In particular, the atomic clocks on board the GPS satellites are not sufficient to maintain a time standard because of relativistic variations and Doppler effects, both of which can be corrected, but only if the exact orbit is known to within exceedingly tight tolerances. Those orbital elements are created by reference to NIST. Essentially, the satellite motions are computed using inverse GPS, and then we use normal GPS based on those values.


> It has its own timescale, but that still traces back to NIST.

GPS gets its time from the US Naval Observatory:

> Former USNO director Gernot M. R. Winkler initiated the "Master clock" service that the USNO still operates,[29][30] and which provides precise time to the GPS satellite constellation run by the United States Space Force. The alternate Master Clock time service continues to operate at Schriever Space Force Base in Colorado.

* https://en.wikipedia.org/wiki/United_States_Naval_Observator...

The USNO does not seem to sync with NIST:

> As a matter of policy, the U.S. Naval Observatory timescale, UTC(USNO), is kept within a close but unspecified tolerance of the international atomic timescale published by the Bureau International des Poids et Mesures (International Bureau of Weights and Measures [BIPM]) in Sevres, France. The world's timing centers, including USNO, submit their clock measurements to BIPM, which then uses them to compute a free-running (unsteered) mean timescale (Echelle Atomique Libre [EAL]). BIPM then applies frequency corrections ("steers") to EAL, based on measurements from primary frequency standards and intended to keep the International System's basic unit of time, the second, constant. The result of these corrections is another timescale, TAI (Temps Atomique International or International Atomic Time). The addition of leap seconds to TAI produces UTC. The world's timing centers have agreed to keep their real-time timescales closely synchronized ("coordinated") with UTC. Hence, all these atomic timescales are called Coordinated Universal Time (UTC), of which USNO's version is UTC(USNO).

* https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-N...

The two organizations do seem to keep an eye on each other:

> The United States Naval Observatory (USNO) and the National Institute of Standards and Technology (NIST) make regular comparisons of their respective time scales. These comparisons are made using GPS common-view measurements from up to approximately 10 GPS satellites. The table below lists recent differences between the two time scales.

* https://www.nist.gov/pml/time-and-frequency-division/time-se...


I think GP might’ve been referring to the part of Jeff’s post that references GPS, which I think may be a slight misunderstanding of the NIST email (saying “people using NIST + GPS for time transfer failed over to other sites” rather than “GPS failed over to another site”).

The GPS satellite clocks are steered to the US Naval Observatory’s UTC as opposed to NIST’s, and GPS fails over to the USNO’s Alternate Master Clock [0] in Colorado.

[0] https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-N...


I find this stuff really interesting, so if anyone's curious, here are a few more tidbits:

GPS system time is currently 18s ahead of UTC, since it doesn't take UTC's leap seconds into account [0].

This (old) paper from USNO [1] goes into more detail about how GPS time is related to USNO's realization of UTC, as well as talking a bit about how TAI is determined (in hindsight! - by collecting data from clocks around the world and then processing it).

[0] https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-N... [1] https://ntrs.nasa.gov/api/citations/19960042620/downloads/19...
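
To make the offset concrete, here's a rough Python sketch of the conversion (assuming the 18s figure above, which changes whenever a new leap second is added):

    from datetime import datetime, timedelta, timezone

    GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)  # GPS time starts here
    LEAP_OFFSET = 18  # seconds GPS time is ahead of UTC, as of this writing

    def gps_seconds_to_utc(gps_seconds: float) -> datetime:
        # GPS time ticks without leap seconds, so subtract the accumulated offset
        return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_OFFSET)

    print(gps_seconds_to_utc(1_400_000_000))  # some moment in mid-2024, in UTC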


> If those other applications use their own local GPS clocks, what is the significance of NIST (and the 5μs inaccuracy) in their scenario?

Verification and traceability is one reason: it's all very well to claim you're within ±x seconds, but your logs may have to say how close you are to the 'legal reality' that is the official time of NIST.

NIST may also send out time via 'private fibre' for certain purposes:

* https://en.wikipedia.org/wiki/White_Rabbit_Project

'Fibre timing' is also important in case of GNSS signal disruption:

* https://www.gpsworld.com/china-finishing-high-precision-grou...


Can someone explain exactly what's happening here? https://github.com/nadimkobeissi/16iax10h-linux-sound-saga/i...

It seems like there's a lot of personal information being asked for / thrown around... including a debit/credit card number?

Is there no better way to handle the bounty payment?


That would cause your active connections to break because the source IP changed entirely. Are you sure the IP changes abruptly, or do they keep it for as long as the session is live? Though keeping the original IP would mean that, for example, if you are sailing around the world, you'd get worse and worse latency as all your data continues going through the original ground station, which may be on the other side of the world at that point.

An interesting problem - I wonder what they truly do here. I suppose people expect interruptions with Starlink so doing an IP swap wouldn't be all that different to losing service due to obstruction for a few minutes.
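
For anyone wondering why the swap is so disruptive: a TCP connection (and the conntrack entry tracking it) is keyed on the source/destination 4-tuple, so a new public IP makes every live flow unrecognizable. A toy Python illustration (all addresses made up):

    # Toy conntrack table keyed on the TCP 4-tuple: (src IP, src port, dst IP, dst port)
    conntrack = {("98.97.10.20", 51234, "93.184.216.34", 443): "ESTABLISHED"}

    # After the source IP changes, the same logical session no longer matches:
    new_key = ("98.97.99.50", 51234, "93.184.216.34", 443)
    print(conntrack.get(new_key, "no match -> packets dropped, connection resets"))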


IP addresses change all the time. Yours changes when you connect to WiFi, when you enter a new country, when your provider gives you a new address. I can't tell if it changes on mobile; it looks like mobile providers hand you off to the next tower, but there must be a limit to how far you can go before routing breaks.

Everything retries because there's no difference between a new address and a bad connection. Most of the time we don't notice because we're not using the device, or because most connections are short-lived.


I'm aware that the public IP changes when a phone (on which one hardly has much control over how things run anyway) switches from cellular to a WiFi network.

Your comments are more practical (and maybe aimed at a layman's use of Starlink) but I am talking about the theory of Starlink supposedly interrupting a perfectly-working connection in order to change your IP, which interrupts everything, by design of TCP/conntrack. Whether that operation is fatal or not due to retries or whatever else is not my point at all.

Also, ISPs at home don't randomly disconnect you to give you a new IP. They may give you a new IP when you disconnect and reconnect for other reasons, but they should never dump your connection on purpose just to give you a new IP for no reason. That's not good design at all, hence the question about how Starlink handles wanting to give you a new IP.


Serious question: If it's an improved 2.5 model, why don't they call it version 2.6? Seems annoying to have to remember if you're using the old 2.5 or the new 2.5. Kind of like when Apple released the third-gen iPad many years ago and simply called it the "new iPad" without a number.


That's why people called the second version of Sonnet v3.5 simply v3.6, and Anthropic acknowledged that by naming the next version v3.7.


Only Anthropic has a slightly understandable version scheme.


It's pretty common to refer to models by the month and year they were released.

For example, the latest Gemini 2.5 Flash is known as "google/gemini-2.5-flash-preview-09-2025" [1].

[1]: https://openrouter.ai/google/gemini-2.5-flash-preview-09-202...


If they're going to include the month and year as part of the version number, they should at least use big endian dates like gemini-2.5-flash-preview-2025-09 instead of 09-2025.
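
A quick way to see why: big-endian date suffixes sort chronologically under a plain string sort, while little-endian ones don't. (Model names below are hypothetical.)

    big_endian = ["gemini-2.5-flash-preview-2024-11", "gemini-2.5-flash-preview-2025-09"]
    little_endian = ["gemini-2.5-flash-preview-11-2024", "gemini-2.5-flash-preview-09-2025"]

    print(sorted(big_endian))     # chronological: 2024-11 before 2025-09
    print(sorted(little_endian))  # wrong: 09-2025 sorts before 11-2024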


Or, you know, just Gemini 2.6 Flash. I don't recall the 2.5 version having a date associated with it when it came out, though maybe they are using dates now. In marketing, at least, it's always known as Gemini 2.5 Flash/Pro.


It had a date, but I also agree this is extremely confusing. Even semver 2.5.1 would be clearer IMO.


It always had dates... They release multiple versions and update regularly. Not sure if this is the first 2.5 Flash update, but pretty sure Pro had a few updates as well...

This is also the case with OpenAI and their models. Pretty standard I guess.

They don't change the versioning, because I guess they don't consider it to be "a new model trained from scratch".


>For example, the latest Gemini 2.5 Flash is known as "google/gemini-2.5-flash-preview-09-2025" [1].

That "example" is the name used in the article under discussion. There's no need to link to openrouter.ai to find the name.


I'm pretty sure Google just does that for preview models and they drop the date from the name when it's released.


If only there were some sort of versioning nomenclature they could use. Maybe even one that is … semantic? Oh how I wish someone would introduce something like this to the software engineering field. /s

In all seriousness though, their version system is awful.


2.5 is not the version number, it's the generation of the underlying model architecture. Think of it like the trim level on a Mazda 3 hatchback. Mazda already has the Mazda 3 Sport in their lineup, then later they release the Mazda 3 Turbo which is much faster. When they release this new version of the vehicle it's not called the Mazda 4... that would be an entirely different vehicle based on a new platform and powertrain etc (if it existed). The new vehicle is just a new trim level / visual refresh of the existing Mazda 3.

That's why Google names it like this, but I agree it's dumb. Semver would be easier.


I’d say it’s more like naming your Operating System off of the kernel version number.


Gonna steal this to help explain to non-tech friends when it comes up again.


Maybe they’re signalling it’s more of a bug fix?


2.5.1 then.

Semantic versioning works for most scenarios.


Would that automatically roll over anyone pinging 2.5 via their API?


If you want rollover then you could specify ^2.5.0 or 2.5.x; if you want to pin, it would be 2.5.0.

This was all solved a long time ago; LLM vendors seem to have unlearnt versioning principles.

This is fairly typical - marketing and business want different things from a version number than what versioning systems are good at.
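
For anyone who hasn't used semver ranges, here's a small self-contained Python sketch of the npm-style semantics I mean (simplified; real resolvers also handle pre-release tags and more):

    def matches(version: str, spec: str) -> bool:
        # "^2.5.0": any 2.x at or above 2.5.0; "2.5.x": any 2.5 patch; else exact
        v = tuple(int(p) for p in version.split("."))
        if spec.startswith("^"):
            base = tuple(int(p) for p in spec[1:].split("."))
            return v[0] == base[0] and v >= base
        if spec.endswith(".x"):
            prefix = tuple(int(p) for p in spec[:-2].split("."))
            return v[:len(prefix)] == prefix
        return v == tuple(int(p) for p in spec.split("."))

    for spec in ("^2.5.0", "2.5.x", "2.5.0"):
        print(spec, [v for v in ("2.5.0", "2.5.1", "2.6.0") if matches(v, spec)])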


I suspect Google doesn't want to have to maintain multiple sub-versions. It's easier to serve one 2x popular model than two models where there's flux between the load on each, since these things have a non-trivial time to load into GPU/TPU memory for serving.


Even if switching quickly was a challenge[1], they are using these models in their own products, not just selling them in a service; the first-party applications could quite easily adapt to this by switching quickly to the available model and freeing up the in-demand one.

This is the entire premise behind the cloud, and the reason it was Amazon that did it first: they had the largest workloads at the time, before Web 2.0 and SaaS were a thing.

Only businesses with large first-party apps succeeded in the cloud provider space; companies like HP and IBM all failed, and their time to failure strongly correlated with the amount of first-party apps they operated. i.e. These apps needed to keep a lot of idle capacity for peak demand anyway, which they could now monetize and co-mingle in the cloud.

LLMs as a service is not any different from S3, launched 20 years ago.

---

[1] It isn't. At the scale they are operating these models it shouldn't matter at all; it is not individual GPUs or machines that make a difference in load handling. Only a few users are going to explicitly pin a specific patch version; for the rest, they can serve whichever one is available immediately or cheaply.


That would be even more confusing because then it is unclear whether 2.6 Flash is better than 2.5 Pro.


Is a 2024 MacBook Pro better than a 2025 MacBook?


Good question


Reminds me of Dragon Drop... https://www.youtube.com/watch?v=DCu1G2rxj5c


Have they fixed the ability to easily transfer your existing Android data to the new Android phone? I find that every time I upgrade, despite choosing the options to transfer apps/settings, that 90% of the apps I open just greet me with the login screen and I have to set everything up completely from scratch. I remember maybe a handful of apps, I think one was Uber, that were able to transfer everything including the login session. That was truly magic. That's how it should be for all apps. I understand banks might have special security requirements and I already know for Google Wallet, your cards need to be reactivated even if they transfer over, but most apps are not banks.


Blame the app developers, not Google. They specifically added a backup/restore mode for device-to-device transfer that bypasses backup blacklists [1]. However, apps can still opt out by registering a backup agent and returning no data.

[1] https://developer.android.com/identity/data/testingbackup


Google actively avoided providing a local, secure, and seamless backup, or even an interface for 3rd-party backup services, to make users more dependent on Google cloud services. Of course many app developers decided the Google cloud is too insecure, since it's not end-to-end encrypted. And Google enables them by not giving users a way to override those stupid decisions. This wouldn't have happened on PCs, where you can mostly just copy over the application's user directory.


>Of course many app developers decided the Google cloud is too insecure, being not end-to-end encrypted

But so far as I can tell D2D transfers don't hit the cloud?

>For a D2D transfer, the Backup Manager Service queries your app for backup data and passes it directly to the Backup Manager Service on the new device, which loads it in to your app.

https://developer.android.com/identity/data/testingbackup

If your app is opting out of backup by implementing a custom backup agent that returns no data, it's pretty clear you're against user backups, period.


Pixel to Pixel has been smooth for me since the Pixel 4. Haven't done cross-manufacturer for a while.


I don't think they're ever gonna fix that.


For the sake of understanding, can you explain why putting CloudFront in front of the buckets helps?


CloudFront lets you serve your S3 bucket both ways:

- signed URLs, in case you want session-based file downloads

- default public files, e.g. for a static site

You can also map a domain (sub-domain) to CloudFront with a CNAME record and serve the files via your own domain.

CloudFront distributions are also CDN-backed, so you serve files from a location close to the user, which speeds up your site.

For low-to-mid-range traffic, CloudFront in front of S3 is cheaper, since CloudFront's network egress costs less. For very large traffic volumes, CloudFront costs can balloon fast - but in those scenarios S3 costs are prohibitive too!
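
As a sketch of the signed-URL case, here's roughly how you'd mint one with botocore's CloudFrontSigner (the key ID, key file, and domain below are placeholders you'd swap for your own):

    from datetime import datetime, timedelta, timezone

    from botocore.signers import CloudFrontSigner
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def rsa_signer(message):
        # Sign with the private key whose public half is registered with CloudFront
        with open("private_key.pem", "rb") as f:
            key = serialization.load_pem_private_key(f.read(), password=None)
        return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

    signer = CloudFrontSigner("KEY_PAIR_ID_PLACEHOLDER", rsa_signer)
    url = signer.generate_presigned_url(
        "https://dexample123.cloudfront.net/private/report.pdf",
        date_less_than=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    print(url)  # time-limited link; the bucket itself stays private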


When Google said GCP is "down", did it affect entire availability zones within a region? For people who designed redundant infrastructure, did your backup AZs/regions keep your systems online?


The outage was global. For my team specifically, a global Identity and Access Management outage meant that our internal service accounts could not refresh their short-lived access tokens and so different parts of our infrastructure began to fail over the course of an hour or so, regardless of what region or zone they were in. Services were up, but they could not access critical GCP services because of auth-related issues which resulted in internal service errors for us.

To give an example, our web servers connect to our GCP CloudSQL database via a Cloud SQL Auth Proxy (there's also a connection pooler in between but that also stayed up). The connection to the proxy was always available, but the proxy wasn't able to renew auth tokens it uses to tunnel to the database, regardless of where the webserver or database happened to be located. To mitigate this in the future we're planning to stop using the auth proxy and connect directly via mutual TLS but now it means we have to manage TLS certificates.
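
For reference, a direct mutual-TLS connection of the kind described would look roughly like this with psycopg2 (host, database, and certificate paths are placeholders, not our actual setup):

    import psycopg2

    conn = psycopg2.connect(
        host="10.0.0.5",             # instance IP, placeholder
        dbname="app",
        user="app_user",
        sslmode="verify-ca",         # verify the server cert against the CA below
        sslrootcert="server-ca.pem",
        sslcert="client-cert.pem",   # client cert/key: the "mutual" part of mTLS
        sslkey="client-key.pem",
    )

No auth tokens to refresh, but as noted, the trade-off is managing those certificate files ourselves.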


so much for System Design interviews and BS gatekeeping...


If the requirement is to check uniqueness, what assumptions could possibly cause a bug? In this case, why does it matter if the uniqueness is tested with a nested for loop or with a map? There are many equivalent ways to check uniqueness, some being faster than others.
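
To make that concrete, here's a minimal sketch of two equivalent checks - a nested loop and a set-based pass - that differ only in speed:

    def all_unique_quadratic(items) -> bool:
        # O(n^2): compare every pair
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return False
        return True

    def all_unique_linear(items) -> bool:
        # O(n): remember what we've already seen
        seen = set()
        for item in items:
            if item in seen:
                return False
            seen.add(item)
        return True

    assert all_unique_quadratic([1, 2, 3]) and all_unique_linear([1, 2, 3])
    assert not all_unique_quadratic([1, 2, 1]) and not all_unique_linear([1, 2, 1])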


[flagged]


Why are you making a new account for each comment? You seem to be deliberately avoiding HN's moderation system


[flagged]


> I don’t want something gathering all my thoughts historically together and tying it to something else; nothing good comes from that; I’m not writing a serial novel.

Yeah but you should want your thoughts on a single post to tie together.

> Many years ago I had a user with thousands of karma points. I used to get really annoyed with other users downvoting my valid and thoughtful comments because it affected my karma. Despite attempts to rally the community around getting rid of downvoting, that never happened.

Sorry you had that reaction. While I get annoyed by downvotes sometimes, I've never cared about losing some points from the mostly useless pile.


[flagged]


You don't have to enter any e-mail address to get an HN account. You can log in from a (Firefox) incognito window and have your cookies deleted the moment the window is closed.

Why are you so afraid to let your ideas and views collect under a single account? Are they that controversial, or are you wary of your own thoughts and don't want to see them again, or are you afraid to own your views as yours?

We're talking (mostly) tech here, and nobody is forced to comment.


[flagged]


This is why I said "We are discussing (mostly) tech here". I don't agree that creating a throwaway for every comment is "superior". It's basically spamming and it's even noted in the guidelines.

Nazi Germany & Jews issue is different. There's an aspect of forcing, and this is unethical and wrong on so many levels, and I'll just leave the subject here.

OTOH, from my perspective if you're afraid that you're writing a sensitive comment, you can create a throwaway. That's justifiable IMHO, but creating three accounts to discuss maps vs. loops, now that's different.

If we're talking about being ridiculous, bringing up the Nazi Germany vs. Jews issue in a technical discussion is more ridiculous than the alleged ridiculousness of me asking the OP about their fears. To close, my questions were not to belittle or shame the OP; they were genuine. I'm not the kind of person who jabs for giggles.


I don't know why you think tech comments are safe. Discussing any topic leaks a ton of side-channel information, even if you somehow believe tech is a totally safe topic.


You assume the homogeneity and conformity of your thinking will save you. There were plenty of Germans that did that also. Germany lost the war.

It’s fine to be fearless. But don’t persecute someone for trying to protect themselves.


You assume that people who say the same thing, think the same way. Your assumptions about me lost you the argument.

As I said, I asked genuine questions. They might be blunt and unpopular questions, but they are questions, and it's totally OP's decision to answer me or not.

I respect them in every case.


phpMyAdmin was (is?) such a great tool and really got me into SQL/MySQL over a decade ago. Not to mention the whole PHP stack was so fun to use and let you iterate quickly and just build stuff with an immediate feedback loop - just reload the page and your updated server-side code is executed.


It's still there and still very good. Our non-profit's web host has an install, and it's pretty good at browsing/searching/dumping the tables.

DBeaver is my standalone app of choice. It's Java-based but has some nice features as well.


It was the first tool I ever used when I was learning how to manage a private Lineage 2 server in like 2005. It had a MySQL backend, and the tutorials I was following had me using phpMyAdmin. I remember fixing a performance problem by changing a setting that (I had no idea what the implications were at the time) got rid of durability entirely on a number of tables, and months later I had to restore from an earlier backup, with data loss, because of data corruption.

You're right though, the interface really wasn't too terrible. It was definitely better than what was available for Postgres even a few years later when I first had contact with it.


I'm surprised that we've had a number of new languages and advancements in the past ten years or so, but none that tried to fill the same space as PHP & co did... unless I missed it.

But then, I suppose PHP itself is good enough and the people using it never felt a need for anything new. Laravel solved the lack of application structure / design handrails, and Facebook solved or worked on the bigger issues around the language - typing and runtime performance.


Symfony is there as well, a framework fit for large enterprise applications, and it works surprisingly well.

Despite that, PHP has its limits, and it's important to know them, and the workarounds for most of them. What is also tricky is the total lack of support from Azure, for example.

