There's a simple zero-knowledge proof to show that you actually have the data: publish a CSV of username + salt + hashed(salt + email) + hashed(salt + phone number), etc.
Users can check their own email/phone/etc to verify that the attacker has the data, without the attacker revealing the data.
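A minimal sketch in Python of what producing and checking such a file could look like (the field layout, function names, and hash choice are illustrative assumptions, not from any actual disclosure):

    import hashlib
    import secrets

    def make_proof_rows(records):
        # records: iterable of (username, email, phone) the attacker claims to hold.
        # Publishes (username, salt, H(salt+email), H(salt+phone)); raw values stay secret.
        rows = []
        for username, email, phone in records:
            salt = secrets.token_hex(16)
            def h(value):
                return hashlib.sha256((salt + value).encode()).hexdigest()
            rows.append((username, salt, h(email), h(phone)))
        return rows

    def verify_own_row(row, my_email, my_phone):
        # A user re-hashes their own email/phone with the published salt
        # and compares against the published digests.
        _username, salt, email_hash, phone_hash = row
        def h(value):
            return hashlib.sha256((salt + value).encode()).hexdigest()
        return h(my_email) == email_hash and h(my_phone) == phone_hash

One caveat: emails and phone numbers are low-entropy, so a fast hash like SHA-256 can be brute-forced offline; a deliberately slow hash (scrypt/argon2) would make the published file less useful to third parties while still letting users verify their own rows.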
Author here, I just noticed this was on HN! There are lots of great points raised here that I didn't cover. I published this with very little feedback, so I'm not surprised I missed things. I'll go through some of the additional points, though:
* Pivoting towards enterprise. Not with speed in mind, but instead security/control/compliance.
We didn't spend too much time here, so I can't definitively say this wouldn't have worked. Cloudflare had (has?) just this vision when they bought S2: https://blog.cloudflare.com/cloudflare-and-remote-browser-is.... There is at least one critical roadblock that I see: wifi and networks can be spotty. If only 80% of a company's employees have good enough internet, what do you do as an administrator? Force them to debug notoriously difficult wifi problems? If you don't, those who don't like the browser will simply not use it, because they're not required to. Given that, I always thought of enterprise as a secondary market. First, make something great, independent of anyone being required to use the product. Then start building out other tools that make it more enticing to businesses. We did sell to companies in multi-seat deals, and were eager to keep pushing in this direction. Note the tagline: "A new browser to work faster".
* CAD/Rendering/Simulation/etc instead of a Browser
The trend is that all of these are moving to the browser. However, maybe they're not moving fast enough, and Mighty was too early. It's also a more crowded market (Citrix, Teradici, now Parsec, etc.) and yet smaller than the browser market (well, by users at least).
* Powering browsers inside of mobile VR/AR
We never tried this. My sense is we’d be too early (at least 2+ years?).
* Accessibility, e.g. for screen readers
This was never a big enough priority but yeah, it seems solvable. It’s more or less another API to implement.
* Loading web pages faster is not going to make you more productive
As I see it, there are two buckets of speed: making fast things faster, and making slow things faster. The two can work together. The real value prop is the second, but the first is where you can bring lots of delight. Still, I think there is some truth to this critique. The loyal paying users we had felt substantially more productive. But could this benefit offset the price + downsides? Knowing what I know now, I don't think so. But there's a lot of context about what's actually possible, what a wide spectrum of people value, etc. that gets me to that conclusion.
* Who really, really, has a slow browser and is willing to pay $35/month because of it?
This was my first thought when we started working on a browser. One thing I learned was to hold back my gut instinct and prove the answer instead of guessing it. The empirical answer: thousands of people that we could find through minimal marketing (just Twitter, basically). Does that mean there are a million+ people out there that also have it? Maybe... it's hard to tell. But my personal hope was that this quantity generalized somewhat to the 2B users of Chrome, so that we could at least make a profitable business. If we got there, we could move into areas where we were solving more problems.
So to directly answer the question: I'm pretty confident this market exists. But not if Mighty also has the downsides it did (doesn't work well in cafes, a variety of bugs, etc.).
The interviewers' questions are incredibly weak. They first state that the attack isn't practical because it could take many hours, even days, but they don't explain why a day-long attack is impractical.
They then bring in the researchers and ask them the same question. The researchers say the attack is very practical because it only takes... a few hours or days to execute. Here's the specific part: https://youtu.be/BiRPr839dSU?t=1476
Instead of chatting more about this discrepancy, they just ignore it and ask the researchers how they feel about their new popularity.
From what I can tell, Intel's advisory simply says that people should understand the attack and mitigate it in software. It's very vague. The specifics (e.g. a list of popular programs that are vulnerable) seem entirely missing.
What you're seeing here is a collision between academic cryptography culture and real world engineering culture. In particular, the word "practical" has very different meanings in those two worlds, hence the discrepancy.
In engineering, the word "practical" has an expansive definition that takes into account end goals, likely costs, rewards and risks of getting there, whether better approaches exist and so on. In academic cryptography the word practical is used far more narrowly and means something like: this algorithm doesn't only exist on a whiteboard, we wrote a toy implementation of it as well.
There are people in this thread telling each other how to disable power scaling and stuff. They're probably people who take the claim of "real and practical" literally without realizing what this does(n't) mean when coming from academics. If you read the paper you'll notice a lot of aspects about the attack that aren't actually practical at all, so to believe this is a threat worth spending time on requires a lot of assumptions about unknown developments that may not hold.
To name just a few aspects of "practicality" that engineers might care about but the paper authors do not:
1. The attack requires DoSing the target server for extended periods, like days at a time, without being detected. Do you have CPU load or bandwidth monitoring in place? Then you're going to detect the attack within minutes of it starting, before it gets anywhere at all, and can simply block the attacking IPs.
2. The attack is only demonstrated against specific crypto libraries and algorithms that you're almost certainly not using. You're asked to assume it can be easily applied against normal algorithms, but their technique relies heavily on the exact mathematics and implementation schemes they're attacking, so it's not entirely obvious how easily it can be adapted. Presumably they chose this obscure target for a reason.
3. The attack was demonstrated on a perfectly unloaded system in which the server does nothing except cryptography and has no other users. Given how sensitive it is to tiny timing fluctuations, it seems like more or less any other activity would raise the noise level so much that days of DoS attacks might turn into months or years (see the back-of-envelope sketch after this list). You're asked to assume this isn't a problem for the attackers, but that seems like a very unsafe assumption.
4. The attack was demonstrated on a machine that's in the same datacenter as the machine being attacked (~600 microseconds of latency to the server). Are your machines in a private colo facility where the owners know who is renting their servers? Well then, the attackers are going to be pretty quickly detected and investigated by the authorities, aren't they, because there are no valid use cases for DoSing a server right next to your own for days at a time with carefully crafted crypto packets.
5. What about the cloud? Pretty easy to get machines there, but you also can't control whereabouts you get placed. I read another paper where researchers tried to do remote timing attacks on machines in AWS. It requires massive amounts of descheduling and rescheduling VMs in the hope that eventually you get lucky and the scheduler places you near enough to the victim. That pattern is extremely distinctive, has no real legitimate use cases, and AWS could very easily detect it and shut it down if this sort of attack ever became an actual problem. But of course, such obvious mitigations don't get mentioned in these papers.
6. Is this really the easiest way to snoop on traffic? Why not just search for a classical vuln in the client or server software itself? It's not like there's a shortage of those. Just weeks ago it turned out Jira was vulnerable because it was shipping a library last updated in 2005. If this attack is the best way to achieve a specific goal it means you're going up against an unusually well hardened target such that all other means of entry like phishing, hacking, government intervention, physical attack etc are less practical than this. Very few organizations will meet that level of security.
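To put rough numbers on point 3 (my illustrative assumptions, not figures from the paper): distinguishing a mean timing difference Δ buried in noise with standard deviation σ needs on the order of (σ/Δ)² samples, because the standard error of the mean only shrinks as σ/√n. So 10x more jitter means roughly 100x more queries:

    import math

    def samples_needed(delta_ns, sigma_ns, z=3.0):
        # Rough count of queries to detect a mean timing difference of
        # delta_ns under noise with std dev sigma_ns at ~z-sigma confidence:
        # need sigma/sqrt(n) < delta/z, i.e. n ~ (z * sigma / delta)^2.
        return math.ceil((z * sigma_ns / delta_ns) ** 2)

    # Hypothetical numbers: a 500 ns signal against 50 us of jitter on a
    # quiet LAN, versus 500 us of jitter once the server has real load.
    print(samples_needed(500, 50_000))    # ~90,000 queries
    print(samples_needed(500, 500_000))   # ~9,000,000 queries (100x more)

That quadratic blow-up is what plausibly turns "days of DoS" against an idle lab machine into months or years against a server doing real work.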
As you can see, once you expand the definition of "practical" to include consideration of everything a real attacker would care about end-to-end, like not being detected, and succeeding against real servers doing actual work that are monitored by humans, the whole thing starts to look very questionable indeed.
Frankly, I find it a bit irresponsible that they've named it Hertzbleed. The original Heartbleed attack was quite practical and let you dump the memory contents of real-world servers at will. People demoed it on random Cloudflare edge nodes and the like. It required an immediate response by many, many people. Now we have a website that looks nearly identical to the Heartbleed website: it has a similar name, a logo, a similar FAQ, talk of "patches" by CPU vendors, etc. But when you read the paper, there's really no similarity between the attacks. It's just another case of academics exaggerating their work for the sake of getting a paper, and it needs to stop.
Wow, amazing response. This was exactly what I was looking for. It's odd I have to get someone from HN to help me understand instead of, say, Intel/AMD. Their recommendations didn't seem to mention any of these important details. Maybe I missed something. Thank you!
My experience has been that large companies won't directly argue with academic research, even when they easily could. Most people will automatically side with academics in any dispute, because they'll intuit that of course the company would say there's no real problem (they're conflicted), whereas the researchers aren't, so the latter must be correct. Many people aren't too savvy about the publish-or-perish problem and don't care about the details. Corporate PR people also hate picking public fights, so they tell staff to just roll with it and engage in damage control. After all, you're arguing with people who can literally spend all day writing clever-sounding papers about why their claimed problem is real, whereas you have customers to satisfy.
Yeah, I'd argue this is "practical" for state-level surveillance. But they are GOING to get you if they want you; the various leaks over the years have shown that.
Heck, isn't spying on keyboards and display signals through a wall still "practical"?
Interestingly, AWS takes no such action against massive scans of infrastructure today. One can acquire millions of cloud servers in search of co-residency without being stopped.
Sure, probably there are no people mounting such attacks today.
My point was more that the moment it becomes known that people are doing that sort of thing, they would implement mitigations. Sucks if you're literally the first victim who detects what happened, but that's not many people, especially because this sort of "flood the server with data and measure timing" attack is so noisy and visible.
What's preventing Apple from stopping OpenHaystack from working? Is this simply a security vulnerability that will get fixed, or is there something inherent that may not be fixable? After all, Apple knows all the IDs of the AirTags they've created.
Woah, that's a weird test case! FWIW, now that you've posted this, this link shows up for that search. But similar searches like "gulp adimzip" show similar issues. Is this simply a bug? Clicking on "Missing: adimzip | Must include: adimzip" now makes Google search for gulp "adimzip".
I had a problem at work which required finding certain patterns in our log files. I figured that someone had built such a thing (at least at a large log-analysis company like Splunk), but I couldn't find anything.
I'm not sure how common of a problem this is for people, but I figured I'd try to build something to see if my solution even made sense. It turns out it works quite well.
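To give a flavor of what I mean by "patterns", here's a toy sketch that masks variable tokens so similar log lines collapse into countable templates (a simplified illustration of the problem space, not the tool's actual implementation):

    import re
    from collections import Counter

    # Heuristic masks: IPs, hex ids, and numbers become placeholders so log
    # lines that differ only in variable fields share one template.
    MASKS = [
        (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
        (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
        (re.compile(r"\b\d+\b"), "<NUM>"),
    ]

    def template(line):
        for pattern, placeholder in MASKS:
            line = pattern.sub(placeholder, line)
        return line.strip()

    def top_patterns(lines, n=10):
        # Count the most common line templates in a log stream.
        return Counter(template(l) for l in lines).most_common(n)

    logs = [
        "conn from 10.0.0.7 took 152 ms",
        "conn from 10.0.0.9 took 87 ms",
        "worker 3 crashed at 0xdeadbeef",
    ]
    for tmpl, count in top_patterns(logs):
        print(count, tmpl)

The two "conn from" lines collapse into one template ("conn from <IP> took <NUM> ms", count 2), and that kind of aggregation is what makes rare or anomalous lines stand out.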
Have you heard of problems this could solve, or projects like this? Please let me know! I put some decent effort into documenting the tool, but tell me if it's unclear.