> if companies like Apple really wanted to fund this work, I'm pretty sure they could figure something out
A reminder that companies are not a hive mind.
Many people at Apple surely would love to funnel piles of money to open source. Maybe some of them even work in the Finance or Procurement or Legal departments. But the overwhelming majority of Apple’s procurement flow is not donations, and so it is optimized for the shape of the work it encounters.
I bet there are plenty of people working at Chick-fil-A who wish it was open on Sundays. But it’s not ~“blaming the user” to suggest that as it stands, showing up on Sunday is an ineffective way to get chicken nuggets.
The idea that donations are the only way they could fund this work is what I was talking about. I'm sure Apple has various contractors and other forms of employees.
It's like suggesting that Chick-fil-A really does want to open on Sunday, but the only thing stopping them is customers not telling them they want it open on Sunday.
Yeah, here “we’re happy to pay for it” really means “we’re not happy to pay the price you’re charging, but maybe we’d pay if you fundamentally changed your prices or pricing model.”
Where are you seeing that? From what I can tell, the 10k message limit applies to "Mattermost Entry":
> Mattermost Entry gives small, forward-leaning teams a free self-hosted Intelligent Mission Environment to get started on improving their mission-critical secure collaborative workflows. Entry has all features of Enterprise Advanced with the following server-wide limitations and omissions:
What the fuck is this lmao? "a free self-hosted Intelligent Mission Environment to get started on improving their mission-critical secure collaborative workflows".
Sounds like some kind of parody of enterprise software.
Can you give an example of an email provider or technology that’s doing GPG or SMIME at the gateway? I’ve never seen that configuration and it doesn’t seem like it would make sense.
Either it’s just theatre, encrypting emails internally and then stripping it when they’re delivered, or you still need every recipient to be managing their own keys anyways to be able to decrypt/validate what they’re reading.
I will not name it, but I worked on such a product for some time. In fact, it is still being sold, maybe into its third decade already.
> you still need every recipient to be managing their own keys anyways to be able to decrypt/validate what they’re reading.
Nope, that is handled at the gateway on the receiving side.
edit: Again, the major point here is to ensure no plaintext email gets relayed. TLS does not guarantee that plaintext email won't be relayed by a misconfigured relay along its route.
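For the curious, the outbound half of such a gateway is roughly this shape (a minimal sketch assuming python-gnupg; the domain-to-key mapping and names are illustrative, not the actual product):

```python
# Minimal sketch of an outbound gateway filter: encrypt the message body to
# the receiving organization's gateway key before relaying, so no plaintext
# email leaves the boundary. Assumes python-gnupg; all names are illustrative.
import gnupg

gpg = gnupg.GPG(gnupghome="/var/lib/mail-gateway/gnupg")

# Hypothetical mapping of recipient domain -> fingerprint of that domain's
# gateway key, provisioned out of band.
GATEWAY_KEYS = {"example.org": "0123456789ABCDEF0123456789ABCDEF01234567"}

def encrypt_outbound(body: str, rcpt: str) -> str:
    domain = rcpt.rsplit("@", 1)[-1].lower()
    fingerprint = GATEWAY_KEYS.get(domain)
    if fingerprint is None:
        # Policy decision: refuse to relay plaintext rather than fall back.
        raise RuntimeError(f"no gateway key for {domain}")
    result = gpg.encrypt(body, fingerprint, armor=True, always_trust=True)
    if not result.ok:
        raise RuntimeError(f"encryption failed: {result.status}")
    return str(result)
```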
> Use Signal. Or Wire, or WhatsApp, or some other Signal-protocol-based secure messenger.
That's a "great" idea considering the recent legal developments in the EU, which OpenPGP, as bad as it is, doesn't suffer from. It would be great if the author updated his advice into something more future-proof.
There's no future-proof suggestion that's immune to the government declaring it a crime.
If you want a suggestion for secure messaging, it's Signal/WhatsApp. If you want to LARP at security with a handful of other folks, GPG is a fine way to do that.
> If you want a suggestion for secure messaging, it's Signal/WhatsApp. If you want to LARP at security with a handful of other folks, GPG is a fine way to do that.
I want secure messaging, not encrypted SMS.
I want my messages to sync properly between an arbitrary number of devices.
I want my messaging history to not be lost when I lose a device.
I want not losing my messaging history to not be a paid feature.
I want to not depend on a shady crypto company to send a message.
I seriously don't care what messenger you use, as long as it isn't email, which can't be made secure. Pick something open source. It'll be less secure than Signal, but way more secure than email.
Then your next best bet is Matrix.org. Not to the same security standard as Signal, but if you don't have a specific threat against you then it's fine.
Pros of Matrix: it actually has a consistent history (in theory); no vendor lock-in.
Cons of Matrix: encryption breaks constantly. Right now I’m stuck in a fun loop of endlessly changing recovery keys: https://github.com/element-hq/element-web/issues/31392
I’m facing it on Element Desktop, but I’ll try to reproduce it on Element Web. I’ve tried to submit logs from Element Desktop, but it says that `/rageshake` (which I was told to do) is not a command. I’m happy to help with debugging this, but I’m not sure how to submit logs from Desktop.
Something like this happens basically every time I try to use Matrix though. Messages are not decrypting, or not being delivered, or devices can’t be authenticated for some cryptic reason. The reason I even tried to use Element Desktop is that my nheko is seemingly now incapable of sending direct messages (the recipient just gets an infinite “waiting for message”).
Weird. Encryption these days (in Element Web/Desktop and Element X at least) should be pretty robust - although this whole identity reset thing is a known bug on Element Web/Desktop. You can submit debug logs from Settings: Help & About: Submit Debug Logs, and hopefully that might give a hint on what's going wrong.
"If you do decide to opt in to secure backups, you’ll be able to securely back up all of your text messages and the last 45 days’ worth of media for free."
If you have a metric fuckton of messages, that does cost money, sure, but as they say:
"If you want to back up your media history beyond 45 days, as well as your message history, we also offer a paid subscription plan for US$1.99 per month."
"This is the first time we’ve offered a paid feature. The reason we’re doing this is simple: media requires a lot of storage, and storing and transferring large amounts of data is expensive. As a nonprofit that refuses to collect or sell your data, Signal needs to cover those costs differently than other tech organizations that offer similar products but support themselves by selling ads and monetizing data."
If you want Signal to host the encrypted storage, that costs money. If you don't want to pay Signal money, they provide 45 days of backup for free.
If you want to self-host your own backups (at your own cost), that's easy to do.
> You don't have to use it like "encrypted SMS"! You're free.
Using it as something more than encrypted SMS requires persistent message history between devices.
> metric fuckton of messages
“More than 45 days” is a metric fuckton? Seriously?
> If you want Signal to host the encrypted storage, that costs money. If you don't want to pay Signal money, they provide 45 days of backup for free.
I don’t want Signal to store my messages. I want Signal to not lock in my messages on their servers, so I can sync them between my devices and back them up into my own backups.
> If you want to self-host your own backups (at your own cost), that's easy to do.
Except there’s no way to move it between platforms. I have more than one device.
> Are you referring to MobileCoin? That feature isn't in the pipeline for sending messages.
I don’t want a shady crypto company to hold my data hostage, and there’s no way to store it on my own hardware and then move it between platforms. That’s my problem with Signal.
> A Synchronized Start for Linked Devices
It only properly transfers 45 days. You can’t have more than one phone. Phones are special “primary devices” and AFAIK you can’t restore your messages if you lose your phone, even if you have a logged-in Signal Desktop.
Yes, if your only device is a single Android phone you can do that. You can’t, however, use that backup to populate your message history on other platforms.
I’ve already lost message history consistency because one of my devices was offline for too long. The messages are there on my other device, but Signal refuses to let me copy my data from one of my devices to another. Signal is, quite literally, worse at syncing message history than IRC — at least with IRC I can set up a bouncer and have a consistent view of history on all of my devices, but there’re no Signal bouncers.
Look, if defending "message history consistency" is a reason you're choosing some other secure messenger rather than Signal, then I don't think this argument is very productive; use some other secure messenger then. But if "message history consistency" is a reason you're endorsing encrypted email over Signal, you're committing malpractice.
The point is that whatever secure messenger you use, it must plausibly be secure. Email cannot plausibly be made secure. Whatever other benefits you might get from using it --- federation, open source, UX improvements, universality --- come at the cost of grave security flaws.
Most people who use encrypted email are doing so in part because it does not matter if any of their messages are decrypted. They simply aren't interesting or valuable. But in endorsing a secure messenger of any sort, you're influencing the decisions of people whose messages are extremely sensitive, even life-or-death sensitive. For those people, federation or cross-platform support can't trump security, and as practitioners we are obligated to be clear about that.
I’m definitely not “committing malpractice” on account of not being a security practitioner. I’m talking from the perspective of a user.
It’s important to me — as a user — that a communication tool doesn’t lose my data, and Signal already did. Actual practitioners keep recommending Signal and sure, I believe that in a weird scenario where my encryption keys are somehow compromised without also compromising my local message history, Signal’s double ratchet will do wonders — but it doesn’t actually work as a serious communication tool.
It’s also kinda curious that while the “email cannot be made secure” mantra is constantly repeated online, basically every organization that needs secure communication uses email. Openwall are certainly practitioners, and they use PGP-over-email: are they committing malpractice?
Very few organizations need security against state-level or similar threats, or against the infrastructure provider. Most organizations that want secure email don't use any kind of E2EE at all; they just trust Google or Microsoft or whomever.
The few jobs that actually care about this stuff, like journalists, do use signal.
Openwall doesn't get security via PGP; it gets a spam filter.
> but it doesn’t actually work as a serious communication tool.
Say more. Plenty of people use Signal as a serious communication tool.
> Openwall are certainly practitioners, and they use PGP-over-email: are they committing malpractice?
They, and other communities that use GPG-encrypted emails are LARPing, and it’s only fine because their emails don’t actually matter enough for anybody to care about compromising them.
It’s not malpractice to LARP: plenty of people love getting out their physical or digital toys and playing pretend. But if you’re telling other people that your foam shield can protect them from real threats, you are lying.
> Say more. Plenty of people use Signal as a serious communication tool.
I did say more already. Maybe you believe in serious communication tools that can’t synchronize searchable history between devices, but I don’t.
> They, and other communities that use GPG-encrypted emails are LARPing, and it’s only fine because their emails don’t actually matter enough for anybody to care about compromising them.
Are we talking about the same Openwall? Are you aware what Openwall’s oss-security mailing list is? Please, do elaborate how nobody cares about getting access to an unlimited stream of zerodays for basically every Unix-like system.
At this point you're just repeating the argument you made upthread without responding to any of its rebuttals. That's fine; I too am comfortable with the arguments on this thread as they stand. Let's save each other some time and call it here.
I’m very familiar with oss-security, a public mailing list that doesn’t really have anything to do with GPG-encrypted emails. Encrypting emails to a public mailing list, with GPG or otherwise, wouldn’t really make sense.
> Only use these lists to report security issues that are not yet public
> To report a non-public medium or high severity 2) security issue to one of these lists, send e-mail to distros [at] vs [dot] openwall [dot] org or linux [dash] distros [at] vs [dot] openwall [dot] org (choose one of these lists depending on who you want to inform), preferably PGP-encrypted to the key below.
Yes, that would be an example of LARPing security. The obvious indicator is that encrypting your message is entirely optional, per their own instructions. The less obvious bit is that even if you encrypt your message, any reply from someone without GPG configured strips the encryption from the quoted contents.
Nobody decided that it's a crime, and it's unlikely to happen. Question is, what do you do with mandatory snooping of centralized proprietary services that renders them functionally useless aside from "just live with it". I was hoping for actual advice rather than a snarky non-response, yet here we are.
You're asking for a technical solution to a political problem.
The answer is not to live with it, but become politically active to try to support your principles. No software can save you from an authoritarian government - you can let that fantasy die.
I gave you the answer that exists: I'm not aware of any existing or likely-to-exist secure messaging solution that would be a viable recommendation.
The available open-source options come nowhere close to the messaging security that Signal/Whatsapp provide. So you're left with either "find a way to access Signal after they pull out of whatever region has criminalized them operating with a backdoor on comms" or "pick any option that doesn't actually have strong messaging security".
Not the GP, but most of us want to communicate with other people, which means SMS or WhatsApp. No point have perfect one-time-pad encryption and no one to share pads with.
Could you please link the source code for the WhatsApp client, so that we can see the cryptographic keys aren't being stored and later uploaded to Meta's servers, completely defeating the entire point of Signal's E2EE implementation and ratchet protocol?
This may shock you, but plenty of cutting-edge application security analysis doesn't start with source code.
There are many reasons, but one of them is that for the overwhelming majority of humans on the planet, their apps aren't being compiled from source on their device. So since you have to account for the fact that the app in the App Store may not be what's in some git repo, you may as well just start with the compiled/distributed app.
Whether or not other people build from source code has zero relevance to a discussion about the trustworthiness of security promises coming from former PRISM data providers about the closed-source software they distribute. Source availability isn't theater, even when most people never read it, let alone build from it. The existence of surreptitious backdoors and dynamic analysis isn't a knock against source availability.
Signal and WhatsApp do not belong in the same sentence together. One is open source software developed and distributed by a nonprofit foundation with a lengthy history of preserving and advancing accessible, trustworthy, verifiable encrypted calling and messaging going back to TextSecure and RedPhone. The other is a piece of proprietary software developed and distributed by a for-profit corporation whose entire business model is bulk harvesting of user data, with a lengthy history of misleading and manipulating its own users and distributing user data (including message contents) to shady data brokers and intelligence agencies.
To imply these two offer even a semblance of equivalent privacy expectations is misguided, to put it generously.
These are words, but I don't understand how they respond to the preceding comment, which observes that binary legibility is an operational requirement for real security given that almost nobody uses reproducible builds. In reality, people meaningfully depend on work done at the binary level to ensure lack of backdoors, not on work done at the source level.
The preceding comment is saying that source security is insufficient, not that transparency is irrelevant.
Source availability is what makes a chain of trust possible that simply isn't meaningfully possible with closed source software, even with dynamic analysis, decompilation, reverse engineering, runtime network analysis with TLS decryption, etc.
Both you and the preceding commenter are correct that just running binaries signed and distributed by Alphabet (Google) and/or Apple presents room for additional risks beyond those observable in the source code, but the solution to this problem isn't to say "and therefore source availability doesn't matter at all for anyone", it's to choose to build from source or to obtain and install APKs built and signed by the developers, such as via Accrescent or Obtanium (pulls directly from github, gitlab, etc releases).
There's a known-good path. Most people do not take the known-good path. Their choice to do so does not invalidate or eliminate the desirable properties of known-good path (verifiability, trustworthiness).
I genuinely do not understand the argument you and the other user are making. It reads to me like an argument that goes "Yes, there's a known, accurate, and publicly documented recipe to produce a cure for cancer, but it requires prerequisite knowledge to understand that most people lack, and it's burdensome to follow the recipe, so most people just buy their vials from the untrustworthy CancerCureCorporation, who has the ability to give customers a modified formula that keeps them sick rather than giving them the actual cure, and almost nobody makes the cure themselves without going through this untrustworthy but ultimately optional intermediary, so the public documentation of the cure doesn't matter at all, and there's no discernable difference between having the cure recipe and not having the cure recipe."
No, you're completely off the rails from the first sentence. It is absolutely possible --- in some ways more possible[†] --- to make a chain of trust without source availability. Your premise is that "reverse engineering" is somehow incomplete or lossy with respect to uncovering software behavior, and that simply isn't true.
[†] Source is always good to have, but it's insufficient.
Never once anywhere in this thread have I claimed that source code alone is sufficient by itself to establish a chain of trust, merely that it is a necessary prerequisite to establish a chain of trust.
That said, you seem to be refuting even that idea. While your reputation precedes you, and while I haven't been in the field quite as long as you, I do have a few dozen CVEs, I've written surreptitious side channel backdoors and broken production cryptographic schemes in closed-source software doing binary analysis as part of a red team alongside former NCC folks. I don't know a single one of them who would say that lacking access to source code increases your ability to establish a chain of trust.
Can you please explain how lacking access to source code, being ONLY able to perform dynamic analysis, rather than dynamic analysis AND source code analysis, can ever possibly lead to an increase in the maximum possible confidence in the behavior of a given binary? That sounds like a completely absurd claim to me.
I see what's happening. You're working under the misapprehension that static analysis is only possible with source code. That's not true. In fact: a great deal of real-world vulnerability research is performed statically in a binary setting.
There's a lot of background material I'd have to bring in to attempt to bring you up to speed here, but my favorite simple citation here is just: Google [binary lifter].
This assumption about me is not accurate at all, I've done static analysis professionally on CIL, on compiled bytecode, and on source code. Instead of being condescending and patronizing to someone you don't know that you've made factually inaccurate assumptions about, can you please explain how having just a binary and no access to source code gives you more information about, greater confidence in, and a stronger basis for trust in the behavior of a binary than having access to the binary AND the source code used to build it?
I have no idea who you are and can only work from what you write here, and with this comment, what you've written no longer makes sense. The binary (or the lifted IR form of the binary or the control flow graph of the binary or whatever form you're evaluating) is the source of truth about what a program actually does, not the source code.
The source code is just a set of hints about what the binary does. You don't need the hints to discern what a binary is doing.
I'm not refuting that the binary is the source of truth about behavior, I never stated it wasn't, and I don't know where you even got the idea that I wasn't. It's been very frustrating to have to repeatedly do this - you and akerl_ have both been attacking strawman positions I do not hold and never stated, and being condescending and patronizing in the process. Is it possible you're making assumptions about me based on arguments made by other people that sound similar to the ones I'm making? I'd really appreciate not having to keep reminding you that I've never made the claims you're implying I'm making, if that's not too much to ask of you.
At a high level, what I'm fundamentally contending is that WhatsApp is less trustworthy and secure than Signal. I can have a higher degree of confidence in the behavior and trustworthiness of the Signal APK I build from source myself than I can from WhatsApp, which I can't even build a binary of myself. I'd simply be given a copy of it from Google Play or Apple's App Store.
Signal's source code exhibits known trustworthy behavior, i.e. not logging both long-term and ephemeral cryptographic keys and shipping them off to someone else's servers. Sure, Google Play and Apple can modify this source code, add a backdoor, and the binary distributed by Google Play and Apple can have behavior that doesn't match the behavior of the published source code. You can detect this fairly easily, because you have a point of reference to compare to. You know what the compiled bytecode from the source code you've reviewed looks like, because you can build it yourself, no trust required[1]; it's not difficult to see when that differs in another build.
With WhatsApp, you don't even have a point of reference of known good behavior, i.e. not logging both long-term and ephemeral cryptographic keys and shipping them off to someone else's server, in the first place. You can monitor all the disk writes, you can monitor all the network activity. Just because YOU don't observe cryptographic keys being logged, either in-memory, or on disk, or being sent off to some other server, doesn't mean there isn't code present to perform those exact functions under conditions you've never met and never would - it's entirely technically feasible for Google and Apple to be fingerprinting a laundry list of identifiers of known security researchers and be shipping them binaries with behavior that differs from the behavior of ordinary users, or even for them to ship targeted backdoored binaries to specific users at the demand of various intelligence agencies.
The upper limit for the trustworthiness of a Signal APK you build from source yourself is on a completely different planet from the trustworthiness of a WhatsApp APK you only have the option of receiving from Google.
And again, none of this even begins to factor in Meta's extensive track record on deliberately misleading users on privacy and security through deceptive marketing and subverting users' privacy extensively. Onavo wasn't just capturing all traffic, it was literally doing MITM attacks against other companies' analytics servers with forged TLS certificates. Meta was criminally investigated for this and during discovery, it came out that executives understood what was going on, understood how wrong it was, and deliberately continued with the practice anyway. Actual technical analysis of the binaries and source code aside, it's plainly ridiculous to suggest that software made by that same corporation is as trustworthy as Signal. One of these apps is a messenger made by a company with a history of explicitly misleading users with deceptive privacy claims and employing non-trivial technical attacks against their own users to violate their own users' privacy, the other is made by a nonprofit with a track record of being arguably one of the single largest contributors to robust, accessible, audited, verifiable secure cryptography in the history of the field. I contend that suggesting these two applications are equally secure is irrational, impossible to demonstrate or verify, and indefensible.
[1] Except in your compiler, linker, etc... Ken Thompson's 'Reflections on Trusting Trust' still applies here. The argument isn't that source code availability automatically means 100% trustworthy, it means the upper boundary for trustworthiness is higher than without source availability.
It's clear we're not going to agree on the technical discussion, but I do want to reply to the claim that I've been strawmanning you.
I've been largely ignoring your sideline commentary about not trusting Meta and their other work outside of WhatsApp. Mostly because the whole thrust of my argument is that an app's security is confirmed by analyzing what the code does, not by listening to claims from the author.
Beyond that, I've been commenting in good faith about the core thrust of our disagreement, which is whether or not a lack of available source code disqualifies WhatsApp as a viable secure messaging option alongside Signal.
As part of that, I had to respond midway through because you put a statement in quotation marks that was not actually something I'd said.
Sorry, no, I'm not going to pick this apart. You wrote:
Can you please explain how lacking access to source code, being ONLY able to perform dynamic analysis, rather than dynamic analysis AND source code analysis, can ever possibly lead to an increase in the maximum possible confidence in the behavior of a given binary?
This doesn't make sense, because not having source code doesn't limit you to dynamic analysis. I assumed, 2 comments back, you were just misunderstanding SOTA reversing; you got mad at me about that. But the thing you "never stated it wasn't" is right there in the comment history. Acknowledge that and help me understand where the gap was, or this isn't worth all the words you're spending on it.
Great, then it sounds like we agree: your original equivalence of Signal and WhatsApp was misguided, since one offers a verifiable chain of trust that starts with source availability and the other doesn't, to say nothing of the lengthy history of untrustworthiness and extensive, deliberate privacy violations of the company that owns and maintains WhatsApp, right?
No, we don’t agree. There are things that source code is good for, but validating the presence or absence of illicit data-stealing code in apps delivered to consumers is not one of those things. For that, source code can show you obvious malfeasance, but since that’s not enough to rule malfeasance out, you’re stuck going to analysis of the compiled app in both cases.
The population of users who have a verifiable path from an open source repo to an app on their device is a rounding error in the set of humans using messaging apps.
I think we've both made our positions clear. From my perspective, you're continuing to heavily cite user statistics that are irrelevant to the properties of verifiability or trustworthiness of the applications themselves, the goalposts I am discussing keep being moved, and there is a repeated pattern of neglect to address the points I'm raising. Readers can judge for themselves. Curious readers should also read about the history of Meta's Onavo VPN software and resulting lawsuits and settlements in evaluating the credibility of Meta's privacy marketing.
Just to be crystal clear about the goalposts: I said at the start of this chain that if somebody wants secure messaging, they should use Signal or WhatsApp.
You raised concerns about lack of source availability, and I’ve been consistent in my replies that source availability is not the way that somebody who wants secure messaging is going to know they’re getting it. They’re going to get it because they’re using a popular platform with robust primitives, whose compiled/distributed apps receive constant scrutiny from security researchers.
Signal and WhatsApp are that. Concerns about Meta’s other work are just noise, in part because analysis of the WhatsApp distributed binaries doesn’t rely on promises from Meta.
No, because there is no keyring and you have to supply people's public keys each time. It is not suitable for large-scale public key management (with unknown recipients), and it does not support automatic discovery or trust management. age does NOT support signing at all, either.
Would "fetch a short-lived age public key" serve your use case? If so, then an age plugin that build atop the AuxData feature in my Fediverse Public Key Directory spec might be a solution. https://github.com/fedi-e2ee/public-key-directory-specificat...
But either way, you shouldn't have long-lived public keys used for confidentiality. It's a bad design to do that.
> you shouldn't have long-lived public keys used for confidentiality.
This statement is generic and misleading. Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine AND desired.
> Would "fetch a short-lived age public key" serve your use case?
(This is some_furry, I'm currently rate-limited. I thought this warranted a reply, so I switched to this account to break past the limit for a single comment.)
> This statement is generic and misleading.
It may be generic, but it's not misleading.
> Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine.
What exactly do you mean by "long-lived"?
The "lifetime" of a key being years (for a long-lived backup) is less important than how many encryptions are performed with said key.
The thing you don't want is to encrypt 2^50 messages under the same key. Even if it's cryptographically safe to do that, any post-compromise key rotation will be a fucking nightmare.
The primary reason to use short-lived public keys is to limit the blast radius. Consider these two companies:
Alice Corp. uses the same public key for 30+ years.
Bob Ltd. uses a new public key for each quarter over the same time period.
Both parties might retain the secret key indefinitely, so that if Bob Ltd. needs to retrieve a backup from 22 years ago, they still can.
Now consider what happens if both of them lose their currently-in-use secret key due to a Heartbleed-style attack. Alice has 30 years of disaster recovery to contend with, while Bob only has up to 90 days.
Additionally, file encryption, backups, and archives typically use ephemeral symmetric keys at the bottom of the protocol. Even when a password-based key derivation function is used (and passwords are, for whatever reason, reused), the password hashing function usually has a random salt, thereby guaranteeing uniqueness.
The idea that "backups" magically mean "long-lived" keys are on the table, without nuance, is extremely misleading.
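A minimal sketch of the quarterly rotation idea, using PyNaCl's SealedBox as a stand-in for an age recipient (the archival helper and paths are hypothetical, not any real product's API):

```python
# Quarterly recipient rotation: backups are encrypted to the current quarter's
# public key, and old secret keys are archived offline so old backups remain
# recoverable. PyNaCl's SealedBox is used as an illustrative stand-in.
from nacl.public import PrivateKey, PublicKey, SealedBox

def archive_secret_key(quarter: str, secret_key: PrivateKey) -> None:
    # Hypothetical: in reality this goes to offline/cold storage, not a live path.
    with open(f"/secure-archive/{quarter}.key", "wb") as f:
        f.write(bytes(secret_key))

def rotate_quarterly_key(quarter: str) -> PublicKey:
    secret_key = PrivateKey.generate()
    archive_secret_key(quarter, secret_key)
    return secret_key.public_key

def encrypt_backup(blob: bytes, current_public_key: PublicKey) -> bytes:
    # A leak of the live secret key only exposes backups made during the
    # current quarter, not the whole history (the "blast radius" point above).
    return SealedBox(current_public_key).encrypt(blob)

def restore_backup(ciphertext: bytes, archived_secret_key: PrivateKey) -> bytes:
    return SealedBox(archived_secret_key).decrypt(ciphertext)
```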
> > Would "fetch a short-lived age public key" serve your use case?
> Sadly no.
*shrug* Then, ultimately, there is no way to securely satisfy your use case.
You introduced "short-lived" vs "long-lived", not me. Long-lived as wall-clock time (months, years) is the default interpretation in this context.
The Alice / Bob comparison is asymmetric in a misleading way. You state Bob Ltd retains all private keys indefinitely. A Heartbleed-style attack on their key storage infrastructure still compromises 30 years of backups, not 90 days. Rotation only helps if only the current operational key is exposed, which is an optimistic threat model you did not specify.
Additionally, your symmetric key point actually supports what I said. If data is encrypted with ephemeral symmetric keys and the asymmetric key only wraps those, the long-lived asymmetric key's exposure does not enable bulk decryption without obtaining each wrapped key individually.
> "There is no way to securely satisfy your use case"
No need to be so dismissive. Personal backup encryption with a long-lived key, passphrase-protected private key, and offline storage is a legitimate threat model. Real-world systems validate this: SSH host keys, KMS master keys, and yes, even PGP, all use long-lived asymmetric keys for confidentiality in non-ephemeral contexts.
And to add to this, incidentally, age (the tool you mentioned) was designed with long-lived recipient keys as the expected use case. There is no built-in key rotation or expiry mechanism because the authors considered it unnecessary for file encryption. If long-lived keys for confidentiality were inherently problematic, age would be a flawed design (so you might want to take it up with them, too).
In any case, yeah, your point about high-fan-out keys with a large blast radius is correct. That is different from "long-lived keys are bad for confidentiality" (see above regarding age).
> The Alice / Bob comparison is asymmetric in a misleading way. You state Bob Ltd retains all private keys indefinitely. A Heartbleed-style attack on their key storage infrastructure still compromises 30 years of backups, not 90 days.
No. Having 30 years of secret keys at all is not the same as having 30 years of secret keys in memory.
That was just me being goofy in that bit (and only that), but I hope the rest of my message went across. :)
> In fact for file storage why not use an encrypted disk volume so you don't need to use PGP?
Different threat models. Disk encryption (LUKS, VeraCrypt, plain dm-crypt) protects against physical theft. Once mounted, everything is plaintext to any process with access. File-level encryption protects files at rest and in transit: backups to untrusted storage, sharing with specific recipients, storing on systems you do not fully control. You cannot send someone a LUKS volume to decrypt one file, and backups of a mounted encrypted volume are plaintext unless you add another layer.
>You cannot send someone a LUKS volume to decrypt one file, and backups of a mounted encrypted volume are plaintext unless you add another layer.
VeraCrypt, and I'm sure others, allow you to do exactly this. You can create a disk image that lives in a file (like a .iso or .img) and mount/unmount it, share it, etc.
You can still do that with a .dmg, for example. I've done it, it works more or less like a zip.
But even if that was somehow unreasonable or undesired, you can use Filippo's age for that. PGP has no use case that isn't done better by some other tool, with the possible exception of "cosplay as a leet haxor"
We need a keyring at our company, because there's no other medium of communication that reaches both management and technical people at other companies.
And we have massive issues because of the ongoing cry to "shut it all off", followed by no improvement and no alternative: we have to talk with people at other organizations (and every organization runs its own mailserver), and the only really common way of communicating is mail.
And when everyone has a GPG key, you get... what? A keyring.
You could say we do not need GPG because we control the mailserver, but what if a mailserver is compromised and the mails are still sitting in mailboxes?
The public keys are not that public, only known to the parties involved. Still, it's an issue, and we have a keyring.
You need a private PKI, not a keyring. They're subtly different - a PKI can handle key rotation, etc.
Yes, there aren't a lot of good options for that. If you're using something like a Microsoft software stack with Active Directory or similar identity/account management, then there's usually some PKI support in there to anchor to.
Across organisations, there are really very few good solutions. GPG specifically is much too insecure when you need to receive messages from untrusted senders. There's basically S/MIME, which has comparable security issues; then we have AD federation, or Matrix.org with a server per org.
> You could say, we do not need gpg, because we control the mailserver, but what if a mailserver is compromised and the mails are still in mailboxes?
How are you handling the keys? This is only true if users protect their own keypairs with strong passphrases, a YubiKey applet, etc.
What you described IS WHY age is the better option.
GPG's keyring handling has also been a source of exploits. It's much safer to directly specify the recipient rather than rely on things like short key IDs, which can be brute-forced.
Automatic discovery simply isn't secure if you don't have an associated trust anchor. You need something similar to keybase or another form of PKI to do that. GPG's key servers are dangerous.
You technically can sign with age, but otherwise there's minisign and the SSH signature format.
> you have to supply people's public key each time
Keyrings are awful. I want to supply people’s public keys each time. I have never, in my entire time using cryptography, wanted my tool to guess or infer what key to verify with. (Heck, JOSE has a long history of bugs because it infers the key type, which is also a mistake.)
I have an actual commercial use case that receives messages (which are, awkwardly, files sent over various FTP-like protocols, sigh), decrypts and verifies them, and further processes them. This is fully automated and runs as a service. For horrible legacy reasons, the files are in PGP format. I know the public key with which they are signed (provisioned out of band) and I have the private key for decryption (again, provisioned out of band).
This would be approximately two lines of code using any sane crypto library [0], but there really isn’t an amazing GnuPG alternative that’s compatible enough.
But GnuPG has keyrings, and it really wants to use them and to find them in some home directory. And it wants to identify keys by 32-bit truncated hashes. And it wants to use Web of Trust. And it wants to support a zillion awful formats from the nineties using wildly insecure C code. All of this is actively counterproductive. Even ignoring potential implementation bugs, I have far more code to deal with key rings than actual gpg invocation for useful crypto.
[0] I should really not have to even think about the interaction between decryption and verification. Authenticated decryption should be one operation, or possibly two. But if it’s two, it’s one operation to decapsulate a session key and a second operation to perform authenticated decryption using that key.
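For what it's worth, the footnote's "two operation" shape can be sketched roughly like this, assuming the `cryptography` package (an illustration of the desired API surface, not a PGP-compatible drop-in):

```python
# Sketch of "decapsulate a session key, then authenticated decryption":
# X25519 key agreement + HKDF recovers the session key, and a single AEAD call
# either returns the plaintext or raises InvalidTag. Illustrative only; the
# info label and function names are made up.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def decapsulate(my_key: X25519PrivateKey, sender_key: X25519PublicKey) -> bytes:
    shared = my_key.exchange(sender_key)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"file-feed-v1").derive(shared)

def open_message(my_key: X25519PrivateKey, sender_key: X25519PublicKey,
                 nonce: bytes, ciphertext: bytes) -> bytes:
    session_key = decapsulate(my_key, sender_key)
    # Raises InvalidTag on any tampering or wrong key; there is no way to get
    # "decrypted but unverified" plaintext out of this API.
    return ChaCha20Poly1305(session_key).decrypt(nonce, ciphertext, None)
```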
Some years ago I wrote "just a little script" to handle encrypting password-store secrets for multiple recipients. It got quite ugly and much more verbose than planned, and I ended up switching the gpg output parsing to Python for sanity.
I think I used a combination of --keyring <mykeyring> --no-default-keyring.
Never would encourage anyone to do this again.
>And it wants to identify keys by 32-bit truncated hashes.
That's 64 bits these days.
>I should really not have to even think about the interaction between decryption and verification.
Messaging involves two verifications: one to ensure that you are sending the message to who you think you are sending it to, and one to ensure that you know who you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
>> And it wants to identify keys by 32-bit truncated hashes.
> That's 64 bits these days.
The fact that it’s short enough that I even need to think about whether it’s a problem is, frankly, pathetic.
> Messaging involves two verifications: one to ensure that you are sending the message to who you think you are sending it to, and one to ensure that you know who you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
I can’t quite tell what you mean.
One can build protocols that do encrypt-then-sign, encrypt-and-sign, sign-then-encrypt, or something clever that combines encryption and signing. Encrypt-then-sign has a nice security proof, the other two combinations are often somewhat catastrophically wrong, and using a high quality combination can have good performance and nice security proofs.
But all of the above should be the job of the designer of a protocol, not the user of the software. If my peer sends me a message, I should provision keys, and then I should pass those keys to my crypto library along with a message I received (and perhaps whatever session state is needed to detect replays), and my library should either (a) tell me that the message is invalid and not give me a guess as to its contents or (b) tell me it’s valid and give me the contents. I should not need to separately handle decryption and verification, and I should not even be able to do them separately even if I want to.
>The fact that it’s short enough that I even need to think about whether it’s a problem is, frankly, pathetic.
Please resist the temptation to personally attack others.
I think you mean that 64 bits of hash output could be trivially collided using, say, Pollard's rho method. But it turns out that simple collisions are not an issue for such hashes used as identities. The fact that PGP successfully used 32 bits (16 bits of effort for a collision) for so long is actually a great example of the principle.
>...I should not even be able to do them separately even if I want to.
Alas that is not possible. The problem is intrinsic to end to end encrypted messaging. Protocols like PGP combine them into a single key fingerprint so that the user does not have to deal with them separately. You still have to verify the fingerprint for people you are sending to and the fingerprint for the people who send you messages.
My threat model assumes you want an attacker advantage of less than 2^-64 after 2^64 keys exist to be fingerprinted in the first place, and your threat model includes collisions.
If I remember correctly, cloud providers assess multi-user security by assuming 2^40 users which each will have 2^50 keys throughout their service lifetime.
If you round down your assumption to 2^34 users with at most 100 public keys on average (for a total of 2^41 user-keys), you can get away with 2^-41 after 2^41 at about 123 bits, which for simplicity you can round up to the nearest byte and arrive at 128 bits.
The other thing you want to keep in mind is, how large are the keys in scope? If you have 4096-bit RSA keys and your fingerprints are only 64 bits, then by the pigeonhole principle we expect there to be 2^4032 distinct public keys with a given fingerprint. The average distance between fingerprints will be random (but you can approximate it to be an order of magnitude near 2^32).
In all honesty, fingerprints are probably a poor mechanism.
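As a quick sanity check of the birthday-bound arithmetic above (an informal estimate, not a formal proof):

```python
# Informal birthday bound: collision advantage for n fingerprints in a b-bit
# space is roughly n^2 / 2^(b+1); solve for the smallest b meeting a target.
import math

def min_fingerprint_bits(n_keys: float, target_advantage: float) -> int:
    # n^2 / 2^(b+1) <= target  =>  b >= 2*log2(n) - log2(target) - 1
    return math.ceil(2 * math.log2(n_keys) - math.log2(target_advantage) - 1)

print(min_fingerprint_bits(2**41, 2**-41))  # 122, roughly the ~123 bits above
print(min_fingerprint_bits(2**64, 2**-64))  # 191 for the stricter model upthread
```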
OK, to be clear, I am specifically contending that the threat model for a key fingerprint does not include collisions. My proof is empirical: no one has come up with an attack on 64-bit PGP key fingerprints.
Collisions mean that an attacker can generate two or more messaging identities with the same fingerprint. How would that help them in some way?
Why so high? Computers are fast and massively parallel these days. If a cryptosystem fully relies on fingerprints, a second preimage of someone’s fingerprint where the attacker knows the private key for the second preimage (or it’s a cleverly corrupt key pair) catastrophically breaks security for the victim. Let’s make this astronomically unlikely even in the multiple potential victim case.
And it’s not like 256 bit hashes are expensive.
(I’m not holding my breath on fully quantum attacks using Grover’s algorithm, at high throughput, against billions of users, so we can probably wait a while before 256 bits feels uncomfortably short.)
A key fingerprint is a usability feature. It has no other purpose. Otherwise we would just use the public key. Key fingerprints have to be kept as short as possible. So the question is, how short can that be? I would argue that 256 bit key fingerprints are not really usable.
Signal messenger is using 100 bits for their key fingerprint. They combine two to make a 60 digit decimal number. Increasing that to 256 x 2 bits would mean that they would end up with 154 decimal digits. That would be completely unusable.
I was asked about the minimum value, and gave my explanation for why some values could be considered the minimum. By all means, use 256-bit fingerprints.
> I think you mean that 64 bits of hash output could be trivially collided using, say, Pollard's rho method. But it turns out that simple collisions are not an issue for such hashes used as identities.
No. I mean that 64 bits can probably be inexpensively attacked to produce first or second preimages.
It would be nice if a decentralized crypto system had memorable key identifiers and remained secure, but I think that is likely to be a pipe dream. So a tool like gpg shouldn’t even try. Use at least 128 bits and give three choices: identify keys by an actual secure hash or identify them by a name the user assigns or pass them directly. Frankly I’m not sure why identifiers are even useful — see my original complaint about keyrings.
>> ...I should not even be able to do them separately even if I want to.
>Alas that is not possible. The problem is intrinsic to end to end encrypted messaging. Protocols like PGP combine them into a single key fingerprint so that the user does not have to deal with them separately.
Huh? It’s possible. It’s not even hard. It could work like this:
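A minimal illustration of that shape, using PyNaCl's Box as a stand-in (the names are hypothetical; this is not a proposal for gpg itself):

```python
# One combined operation: either the message authenticates under this
# (sender, recipient) key pair and you get the plaintext, or you get an
# exception and no plaintext at all.
from nacl.exceptions import CryptoError
from nacl.public import Box, PrivateKey, PublicKey

def receive(my_key: PrivateKey, peer_key: PublicKey, wire: bytes) -> bytes:
    try:
        return Box(my_key, peer_key).decrypt(wire)  # verify + decrypt together
    except CryptoError:
        raise ValueError("invalid message; no guess at the contents")
```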
>I mean that 64 bits can probably be inexpensively attacked to produce first or second preimages.
Keep in mind that you would have to generate a valid keypair, or something that could be made into a valid keypair, for each iteration. That fact is why PGP got along with 32-bit key IDs for so long. PGP would still be using 32-bit key IDs if someone hadn't figured out how to mess with RSA exponents to greatly speed up the process. Ironically, the method with the slowest keypair generation became the limiting factor.
It isn't like this is a new problem. People have been designing and using key fingerprint schemes for over a quarter of a century now.
How do you know that the recipient key actually belongs to the recipient? How does the recipient know that the sender key actually belongs to you (so it will validate correctly)?
As a followup, is there anything in existence that supports "large-scale public key management (with unknown recipients)"? Or "automatic discovery, trust management"? Even X.509 PKI at its most delusional doesn't claim to be able to do that.
It's not like GPG solves for secure key distribution. GPG keyservers are a mess, and you can't trust their contents anyways unless you have an out of band way to validate the public key. Basically nobody is using web-of-trust for this in the way that GPG envisioned.
This is why basically every modern usage of GPG either doesn't rely on key distribution (because you already know what key you want to trust via a pre-established channel) or devolves to the other party serving up their pubkey over HTTPS on their website.
Yes, I'm not saying that web of trust ever worked. "Pre-established channels" are the other mechanisms I mentioned, like a central authority (HTTPS) or TOFU (just trust the first key you get). All of these have some issues that any alternative must also solve for.
So if we need a pre-established channel anyways, why would people recommending a replacement for GPG workflows need to solve for secure key distribution?
This is a bit like looking at electric cars and saying ~"well you can't claim to be a viable replacement for gas cars until you can solve flight"
> The keys never leave the 1Password store. So you don’t have the keys on the local file system.
Keychain and 1Password are doing variants of the same thing here: both store an encrypted vault and then give you credentials by decrypting the contents of that vault.
The latest version of a bad standard is still bad.
This page is a pretty direct indicator that GPG's foundation is fundamentally broken: you're not going to get to a good outcome trying to renovate the 2nd story.
> The disk is fully encrypted, and applications should be isolated from one another.
For most apps on non-mobile devices, there isn't filesystem isolation between apps. Disk/device-level encryption solves for a totally different threat model; Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
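For example, the `keyring` package fronts exactly these OS secret stores; a tiny sketch (service and account names made up):

```python
# Store and retrieve a secret via the OS secret store instead of a plain file
# in the home directory. `keyring` dispatches to Keychain / Credential
# Manager / Secret Service depending on the platform. Names are illustrative.
import keyring

keyring.set_password("my-backup-tool", "archive-passphrase", "not-a-real-secret")
passphrase = keyring.get_password("my-backup-tool", "archive-passphrase")
```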
> I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
Basically everything in PGP/GPG predates the existence of "corporate security circles".
> For most apps on non-mobile devices, there isn't filesystem isolation between apps.
If there isn't, there should be. At least my Flatpaks are isolated from each other.
> Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
The Linux equivalents are suspicious and stuck in the past, to say the least. Depending on them is extra tedious on top of the tediousness of PGP keyrings, god forbid a combination of the two.
> Basically everything in PGP/GPG predates the existence of "corporate security circles".
Just a joke: if GPG indeed predates and was not inspired by corporate security theatre, then the opposite must be true, that corporate security theatre was inspired by GPG/PGP.