> Voice encryption/scrambling on amateur bands is not allowed; everything else is OK.
It seems like you're saying voice encryption is not permitted, but data encryption is? This is not true in the US. Any encoding used for the purpose of "obscuring meaning" is not permitted on amateur frequencies. Even using code phrases like "the eagle has landed" is arguably not allowed. There are some narrow exceptions for things like satellite control codes, but nothing that applies to hobby mesh nets.
> No amateur station shall transmit: [...] messages encoded for the purpose of obscuring their meaning, except as otherwise provided herein; obscene or indecent words or language; or false or deceptive messages, signals or identification.
No, numbers stations are not permitted on amateur frequencies in the US. There are some notable cases of foreign governments setting these up and interfering with amateur allocations [1], but there's not much the FCC can do about that.
It sounds like GPS, and thus a GPS-based stratum 1 server, depends on these timekeeping facilities, but the failover was successful:
> Jeff finished off the email mentioning the US GPS system failed over successfully to the WWV-Ft. Collins campus. So again, for almost everyone, there was zero issue, and the redundancy designed into the system worked like it's supposed to.
So failures in these systems are potentially correlated.
The author mentions another solution. Apparently he runs his own atomic clock. I didn’t know this was a thing an individual could do.
> But even with multiple time sources, some places need more. I have two Rubidium atomic clocks in my studio, including the one inside a fancy GPS Disciplined Oscillator (GPSDO). That's good for holdover. Even if someone were jamming my signal, or my GPS antenna broke, I could keep my time accurate to nanoseconds for a while, and milliseconds for months. That'd be good enough for me.
The CSACs that I have in a couple devices are 'atomic', and use Rubidium, but they're a bit lower accuracy than Cesium clocks [1] or Hydrogen Masers [2].
There are a few folks on the time-nuts mailing list who own such exotic pieces of hardware, but those are pretty far out of reach for most!
Atomic clocks cover a pretty big range of performance nowadays. You can pick up a used but serviceable rubidium frequency reference for a few hundred dollars, but the difference between it and a top-of-the-line clock is almost as big as the difference between it and a good pendulum clock.
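The holdover numbers in the quote above are easy to sanity-check: to first order, a free-running oscillator's accumulated time error is its fractional frequency offset times the elapsed time. A minimal sketch (the offset figures are assumed ballpark values, not measurements of any particular clock, and the model ignores aging and temperature drift):

```python
def holdover_error_s(frac_offset, elapsed_s):
    """Accumulated time error of a free-running oscillator.

    First-order model only: assumes a constant fractional frequency
    offset and ignores aging and temperature-driven drift.
    """
    return frac_offset * elapsed_s

DAY = 86400
# Assumed ballpark offsets: a holdover-grade rubidium near 1e-11,
# a drifting surplus unit nearer 1e-9.
print(holdover_error_s(1e-11, DAY))       # well under a microsecond per day
print(holdover_error_s(1e-9, 90 * DAY))   # milliseconds over a few months
```

That lines up with the quoted claim of "nanoseconds for a while, and milliseconds for months."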
I self-host an Immich [1] instance to back up photos on my iPhone. It’s OSS and has a level of polish I’ve rarely seen in free software. Really, it’s shockingly good. The iOS app whisks my photos off to my home server several times per day.
What I’m not sure about is how to backup things like iMessages, Notes, and my Contacts. Every time I’ve looked, it appears the only options are random GitHub scripts that have reverse engineered the iMessage database.
The iMessage db is literally just a SQLite db. If you have a Mac you can read the entire thing with a script. It’s really easy, from what I remember from years ago.
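A minimal sketch of reading it directly with Python's sqlite3 (the path `~/Library/Messages/chat.db`, the `message` table, and its `text`/`date`/`is_from_me` columns are reverse-engineered details that vary across macOS versions; recent versions store `date` as nanoseconds since 2001-01-01):

```python
import sqlite3
from datetime import datetime, timezone

APPLE_EPOCH = 978307200  # 2001-01-01 in Unix seconds

def read_messages(db_path, limit=10):
    """Read recent messages from an iMessage-style chat.db.

    Schema assumptions (reverse-engineered, may differ on older
    macOS): `message` has `text`, `date` (ns since 2001), and
    `is_from_me` columns.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT text, date, is_from_me FROM message "
            "ORDER BY date DESC LIMIT ?", (limit,)
        ).fetchall()
    finally:
        con.close()
    out = []
    for text, date, is_from_me in rows:
        ts = datetime.fromtimestamp(date / 1e9 + APPLE_EPOCH, tz=timezone.utc)
        out.append((ts, "me" if is_from_me else "them", text))
    return out
```

On a real Mac you'd point it at `~/Library/Messages/chat.db` (Terminal needs Full Disk Access for that).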
I use Nextcloud for files/contacts/calendar/etc. as well, but for photos I use PhotoPrism [1].
The reason is simple: photos require much more processing and a focus on performance. In addition, photos take up much more space, so while my Nextcloud instance runs on an SSD, the photos reside on an HDD that's asleep most of the time.
Yeah, I had that case in mind actually. It's a perfect illustration of why compression artifacts should be obvious and not just realistic-looking hallucinations.
I don't understand why people downvote questions like this rather than just answer the question. It's a perfectly reasonable question imo given that it's not clear how this feature is being disabled. It appears that most of this is based on reddit speculation and the OEMs don't provide a definitive answer.
Meta: recently it seems like the community has been way too loose with the downvote button, but I'm not sure if I'm just noticing it more because it's getting on my nerves, or if there has actually been a change in behavior.
There has been a change in behavior in the past few years. It used to be that you could only earn the ability to comment (and thus vote) after submitting a certain number of threads. This actually kept the community on the more intelligent, factual, and serious side. Now it's not so serious.
This used to be the only place that I could visit to get away from Reddit behavior. It seems like the more obscure a social gathering is, the less Eternal September it suffers.
> Meta: recently it seems like the community has been way too loose with the downvote button, but I'm not sure if I'm just noticing it more because it's getting on my nerves, or if there has actually been a change in behavior.
The term "orange reddit" feels more and more like reality as time goes on.
> This allows for some interesting new deployment models for DuckDB, for example, we could now put an encrypted DuckDB database file on a Content Delivery Network (CDN). A fleet of DuckDB instances could attach to this file read-only using the decryption key. This elegantly allows efficient distribution of private background data in a similar way like encrypted Parquet files, but of course with many more features like multi-table storage. When using DuckDB with encrypted storage, we can also simplify threat modeling when – for example – using DuckDB on cloud providers. While in the past access to DuckDB storage would have been enough to leak data, we can now relax paranoia regarding storage a little, especially since temporary files and WAL are also encrypted.
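The CDN pattern from the quote looks roughly like this in SQL (file names, URL, and key are placeholders; the `ENCRYPTION_KEY` attach option follows the DuckDB encryption release announcement, so check that your version supports it, and the reader side additionally needs the httpfs extension to attach over HTTP):

```sql
-- writer side: create an encrypted database and populate it
ATTACH 'private.duckdb' AS enc (ENCRYPTION_KEY 'my_secret_key');
CREATE TABLE enc.events AS SELECT * FROM read_parquet('events.parquet');
DETACH enc;

-- reader side: attach the file from the CDN read-only with the key
ATTACH 'https://cdn.example.com/private.duckdb' AS enc
    (ENCRYPTION_KEY 'my_secret_key', READ_ONLY);
SELECT count(*) FROM enc.events;
```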
Seagate has a proprietary version of SMART called FARM. It’s supposed to be more tamper resistant than SMART, but it appears the fraudsters have figured out how to manipulate it too [1].
The best you can do is check FARM if available and perform a long burn-in with something like badblocks. Then compare the SMART data before and after the burn-in. Checking the serial number against the manufacturer's database, if available, is also a good precaution.
These are probably things you should be doing whether or not the drive is allegedly new.
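The before/after comparison is easy to automate. A minimal sketch, assuming you've already dumped SMART attributes (e.g. from `smartctl`) into dicts keyed by attribute name; the attribute names and the 24-hour threshold are illustrative, not canonical:

```python
# Attributes where ANY increase during burn-in is a red flag
CRITICAL = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
            "Offline_Uncorrectable", "UDMA_CRC_Error_Count"}

def compare_smart(before, after):
    """Flag suspicious changes between two SMART raw-value snapshots."""
    warnings = []
    for attr in sorted(CRITICAL):
        delta = after.get(attr, 0) - before.get(attr, 0)
        if delta > 0:
            warnings.append(f"{attr} grew by {delta} during burn-in")
    # An allegedly new drive should also start with ~zero hours
    if before.get("Power_On_Hours", 0) > 24:
        warnings.append("drive already had significant power-on hours")
    return warnings
```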
Scrolling through the comments reading about all the adblockers that folks recommend makes my head spin. Why exactly should I trust any of these to have full access to my browser? Looking through the App Store I see so many that are clearly trying to impersonate the well-known ones by using similar names. It sounds like uBlock Origin Lite is trusted by many, but watch out for Ublock and 1Block, which are also top App Store results. Going off memory, the Chrome store is even worse. The whole situation is extremely sketchy. This is not even to mention supply chain attacks, which could hijack even honest projects.
Personally I’ve settled on blocking at the DNS level with unbound and a blocklist. It’s not perfect but it limits the blast radius.
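For reference, blocking a domain in unbound is one `local-zone:` line per zone (the domains below are arbitrary examples); published blocklists are typically compiled into thousands of these and pulled in with `include:`:

```
server:
    # return NXDOMAIN for the domain and all its subdomains
    local-zone: "ads.example.com." always_nxdomain

    # or resolve it to a sinkhole address instead
    local-zone: "tracker.example.net." redirect
    local-data: "tracker.example.net. A 0.0.0.0"
```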
>Why exactly should I trust any of these to have full access to my browser?
Content blockers on iOS don't have "full access". Most adblocking apps provide both a content blocker and an extension, the latter of which is used to work around stuff that content blockers can't block, or bugs that result from blocking scripts, but they're not needed. You can get 95% of the functionality by just using content blockers.
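For context, an iOS content blocker is just a static JSON rule list handed to WebKit; no extension code runs at match time, which is why the isolation holds. A minimal example (the URL patterns are illustrative):

```json
[
  {
    "trigger": { "url-filter": "ads\\.example\\.com" },
    "action": { "type": "block" }
  },
  {
    "trigger": { "url-filter": ".*", "resource-type": ["popup"] },
    "action": { "type": "block" }
  }
]
```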
I took a second look at ad blockers on the App Store, and many report that they collect various bits of data. Are you saying that there's a special content blocker component to all of these that can't collect data because they're isolated by iOS? I'm not sure how anyone who isn't an iOS developer is supposed to navigate this. To uBlock's credit, their App Store page reports that they collect no data, but is this enforced by iOS? Or just a checkbox that the developer clicked?
>I took a second look at ad blockers on the app store, and many report that they collect various bits of data.
Because the "app privacy" disclosures that Apple requires only contain broad categories describing what data the app could possibly collect. If the app collects analytics in the UI itself (i.e. the part where you select filters or whatever), it has to say the app collects analytics. There's no way to say "we only collect analytics on your usage of the app, not your browsing history".
>Are you saying that there's a special content blocker component to all of these that can't collect data because they're isolated by iOS?
There are dubious results for "uBlock" as well on browser extension stores. If it's not breaking rules (copyright violation, malware) it's precarious for companies to take action. It's obvious to me that uBlock Origin is the "correct" result, but how would a company determine that at scale?
The app was removed a day after your article was posted. The app name, developer, icon, and images are all different. It's absolutely a problem, but it was addressed.
If Apple aggressively took action against this with a high error rate, the headlines would probably be about anti-competition, censorship, and upset developers.
> but how would a company determine that at scale?
Two-way signature validation. Apple distributes unique developer IDs; make the dev sign the app locally before uploading it, like Google does for the Play Store. If those trojan horses still make it through Apple's manual inspection process, then they need to fire everyone working for the App Store and replace them with AI.
> If Apple aggressively took action against this with a high error rate
They need to take action. Apple's entire argument for an App Store monopoly is that they curate apps individually before they're uploaded to ensure a baseline of quality. When they stop vetting apps and allow the App Store to become like every other store, their argument in favor of monopoly control evaporates.
So yes, it would be anti-competitive censorship, but that's nothing Apple hasn't done before. The real issue is that their "premium" store experience is getting shown up by Google Play. At the going rate, there won't be anti-competitive behavior to complain about, since Apple will be forced to accept competing storefronts - and they'll have no one to blame but themselves.
Terminal.shop lets you order coffee over ssh, which is kind of novel and fun. I did it, and the coffee was good! This post reminded me that they've gotten enough questions about security that they've added this to their FAQ:
> is ordering via ssh secure?# you bet it is. arguably more secure than your browser. ssh incorporates encryption and authentication via a process called public key cryptography. if that doesn’t sound secure we don’t know what does. [1]
I think this is wrong, though, for exactly the reasons described in this post. TLS verifies that the hostname matches the cert through the chain of trust, whereas SSH leaves this to the user to do out-of-band, which of course no one does.
But then the author of this article goes on to say (emphasis mine):
> This result represents good news for both the SSL/TLS PKI camps and the SSH non-PKI camps, since SSH advocates can rejoice over the fact that the expensive PKI-based approach is no better than the SSH one, while PKI advocates can rest assured that their solution is no less secure than the SSH one.
Which feels like it comes out of left field. Certainly the chain of trust adds some security, even if it's imperfect. I know many people just click through the warning, but I certainly don't.
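Concretely, this is what the browser-style check looks like; Python's default TLS context performs both the chain validation and the hostname match automatically, which is exactly the step SSH leaves to the user:

```python
import socket
import ssl

def peer_cert_subject(host, port=443):
    """Connect over TLS and return the verified peer cert's subject."""
    # create_default_context() loads the system trust store, requires a
    # valid chain to a trusted root, AND checks the cert against `host`.
    ctx = ssl.create_default_context()
    assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]
```

If either check fails, `wrap_socket` raises before any application data flows, instead of offering a click-through prompt.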
>TLS verifies that the URL matches the cert through the chain of trust,
I think you need to point out that TLS relies on the browser's cert store for that chain of trust. If a bad actor acquires an entity that holds a trusted CA cert, or your cert store is compromised, that chain of trust is almost entirely useless. This has happened on more than one occasion (the Chinese government and Symantec, most recently).
This is typically caught pretty quickly but there's almost nothing a user can do to defend against a chain of trust attack. With SSH, while nobody does it, at least you have the ability to protect yourself.
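The out-of-band check is cheap if you actually do it. An OpenSSH SHA256 fingerprint is just the unpadded base64 of the SHA-256 of the raw key blob (the second base64 field in a `known_hosts` or public-key line), so you can recompute it yourself and compare against a fingerprint obtained over a trusted channel:

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_line):
    """SHA256 fingerprint of an OpenSSH public key line.

    `pubkey_line` looks like "ssh-ed25519 AAAA... user@host"; the result
    matches what `ssh-keygen -lf` prints for the same key.
    """
    blob = base64.b64decode(pubkey_line.split()[1])  # raw key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```

Compare the result against the fingerprint the server operator published (or that `ssh` printed on first connect) before typing "yes".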
In SSH it's a two-way handshake: the client ordering the coffee also presents a key to prove their identity.
In browser land, the client browser doesn't present a cert to prove its identity; it's one-way only.
Certainly TLS supports client certs, and browsers (at least some) technically even implement a version, but the UX is SOOOO horrible that nobody uses it. Some people have tried; the only ones that have ever seen any success with client-side authentication certificates in a web browser are webauthn/passkeys and the US Military (their ID cards have a cert in them).
webauthn/passkeys are not fully baked yet, so time will tell if they will actually be a success, but so far their usage is growing.
I think webauthn/passkeys will be more successful (frankly I think they already have been) because they're not part of TLS. The problem with client certs, and other TLS client auth like TLS-SRP, is that it inherently operates at a different layer than the site itself. This cross-cutting through layers greatly complicates getting the UX right, not just on the browser side (1) but also on the server side (2). Whereas, webauthn is entirely in the application layer, though of course there's also some supporting browser machinery.
(1) = Most browsers defer to the operating system for TLS support, meaning there's not just a layer boundary but a (major) organizational one. A lot of the relevant standards are also stuck in the 1990s and/or focused on narrow uses like the aforementioned U.S. military and so they ossified.
(2) = The granularity of TLS configuration in web servers varies widely among server software and TLS libraries. Requesting client credentials only when needed meant tight, brittle coupling between backend applications and their load balancer configuration, which was also tricky to secure properly.
So true, two-way certs with TLS have crappy implementations everywhere, not just in the browser.
I have 2 problems with webauthn/passkeys:
* You MUST run JavaScript, meaning you are executing random code in the browser, which is arguably unsafe. You can do things to make it safer, but most of them nobody does (never run 3rd-party code, Subresource Integrity, etc.).
* The implementations throughout the stack are not robust. Troubleshooting webauthn/passkey issues is an exercise in wasted time. About the only useful troubleshooting step you can do is delete the user passkey(s) and have them try again, and hope whatever broke doesn't break again.
Here is the relevant Part 97 rule: https://www.ecfr.gov/current/title-47/part-97#p-97.113(a)(4)