
>2023: ~12,183 arrests

These numbers are for _all_ arrests under the Malicious Communications Act in that year. So while that category includes arrests for tweets, it also includes all arrests for any offensive communications via an internet-enabled device. So it'd include arrests for domestic abuse where at least one component of the abuse was through WhatsApp. Similarly, it can include just about any arrest where the crime was planned on an internet-enabled device.


Were the rules changed at all between those years though?

Because if not, a more-than-doubling is alarming regardless of how exactly the composition is sliced by online vs WhatsApp or whatever.


Sure, but it’s pretty hard to believe that the domestic violence arrests are increasing exponentially, isn’t it?

ETA:

> So it'd include arrests for domestic abuse where at least one component of the abuse was through WhatsApp.

Are you absolutely sure of this? It sounded good on the first read, but I’m very skeptical now. It seems to me that the arrest is going to be for battery, even if the charges filed later include the WhatsApp messages.


In the UK, during interview, you can only be questioned about offences you've been arrested for. So it's common to get over-arrested, and later charged with the serious crime rather than the minor ones.

The overwhelming majority of people arrested under the communications act aren’t charged under it. They’re either released, or charged under a more serious offence.


The point is, communications should not be surveilled at all by the state. It shouldn't matter that the Internet is sometimes used to commit crimes, the bigger issue is that the vast majority of non-criminal traffic is subject to snooping.


What proof do you have that this is the result of surveillance rather than from responses to complaints?

I don't have proof, but systems are certainly designed to make this possible. And since it's possible, it is safe to assume that it is happening. (The Snowden leaks corroborate massive information sharing between "Big Tech" and the U.S. government, for example.) Hence you should categorically refuse to use anything Meta (that includes, among others, Facebook Messenger and WhatsApp), or Google, or Microsoft.

Do we know that was the case in those instances?

Could be that some guy threatened to kill someone over FB, someone saw that, and reported it.


Were they surveilled? Or simply read on someone's device after they were lawfully arrested, or sent to the police by the victim? You seem to be making a bit of a jump there

I’m not sure why you are being downvoted, this is a critically important point. Context is king, numbers alone are unhelpful


It's not that you can unredact them from scratch (you could never get the blue circle back from this software). It's that you can tell which of the redacted images corresponds to which of the origin images. Investigative teams often find themselves in a situation where they have all four images, but need to work out which redacted files are which of the origins. Take, for example, a case where headed paper is otherwise entirely redacted.

So with this technique, you can definitively say "Redacted-file-A is definitely a redacted version of Origin-file-A". Super useful for identifying forgeries in a stack of otherwise legitimate files.

Also good for saying "the date on origin-file-B is 1993, and the file you've presented as evidence is provably origin-file-B, so you definitely knew of [whatever event] in 1993".
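
To make that concrete, here's a rough sketch of the matching idea in Python. It isn't necessarily how the tool in the article does it; it just assumes the redactions are opaque boxes, that each redacted file has the same dimensions as its original, and that the pixels outside the boxes are untouched:

    # Sketch only: match each redacted file to the original it came from by
    # counting how many pixels are untouched. Assumes opaque redactions and
    # identical dimensions; not necessarily the linked tool's technique.
    import numpy as np
    from PIL import Image

    def load(path):
        return np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)

    def unchanged_fraction(redacted, original):
        # Fraction of pixels that are byte-for-byte identical in both files.
        return float((np.abs(redacted - original).sum(axis=-1) == 0).mean())

    def identify(redacted_paths, original_paths):
        originals = {p: load(p) for p in original_paths}
        for rp in redacted_paths:
            r = load(rp)
            best = max(originals, key=lambda op: unchanged_fraction(r, originals[op]))
            print(f"{rp} is most likely a redacted copy of {best}")

Whichever original leaves the highest fraction of pixels untouched is the likely parent of that redacted file.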


Ok thanks. That sounds reasonable.

>... and therefore you can unredact them

from that readme is just not true then I guess?


I mean, even the word "crop" isn't used correctly, is it?

I think the word should be "redact".


>but it turns out that not burning through VC cash on ping-pong tables and "growth at all costs" actually works.

This is at least a little disingenuous (or ill-thought-out), when you account for the fact that the company is a spin-off/subsidiary of a large & successful Italian agency. While I'm certain these things helped keep the business sustainable, the fact of the matter is that the company was still incubated rather than bootstrapped. The only real difference is that it was incubated by its parent company, rather than by the VC industry.


Define “successful Italian agency” :) If breaking even every year with 20+ employees counts as successful, then yes—successful. But I think you may have a mistaken idea of the level of support and investment that the “incubating” company actually provided. With the limited effort I put into this product over the years before it started to work, all I really needed was any stable job and a few hours each week. You don’t need a particularly favorable setup to pursue a bootstrapped approach... but you do need to be very comfortable with the timeline for seeing results.


Reddit has been an absolute dumpster fire from the get-go. Its Wikipedia page has one of the largest “controversies” sections of any publicly listed company. Many of the controversies are so significant they have their own Wikipedia page.


Not wanting to particularly defend Reddit, but a controversies section on a Wikipedia page is hardly a good metric, in my opinion. Wikipedia is often used to malign various entities (and protect others).


I think this article is "why startups died pre-2020", rather than "why startups die [now]".

Lots of this article relates to the reasons startups died when cash was freely available - both from VCs and from the markets you were trying to find product-market fit in. For example, if you started an online learning company in March 2020, you'd have hit product-market fit right away (along with a thousand competitors), and been lathered with cash from every direction. But three years later, all of those startups were struggling, and I don't know of _any_ that survived. That's not a case of the business owners in 1000 discrete companies giving up. That's the entire world economy reverting to in-person learning, and the disappearance of the ultra-low interest rates for the company to fall back on while it pivots.

In 2025, founders need to be acutely aware of exogenous factors, as they can be business-obliterating events without the safety net of 0-1% interest rates.


Interesting opinion. Perhaps it's because I am a founder pre-2020 and lots of my thinking was shaped around that. What else do you think changed post-2020?


Why do you say pre-2020? 2020-2021 were the easiest times ever for funding, with a lot of money-printing and zero rates (unlike 2018-2019!). Only in 2022 did things start to get worse, grinding to a halt from 2023, at least for everyone outside of the AI bubble. Now everyone I know who is still operating does so simply with the money raised before. Companies are going belly up left and right simply because that money runs out.


The site is back up, but it feels fairly silly that a platform that has inserted itself as a single point of failure has an architecture that's got single points of failure.

The other companies working at that scale have all sensibly split off into geographical regions & product verticals with redundancy & it's rare that "absolutely all of AWS everywhere is offline". This is two total global outages in as many weeks from Cloudflare, and a third "mostly global outage" the week before.


Crop monoculture created the potato famine. We failed to learn the larger lesson. "Hyperscale" is inherently dangerous.


Given that the author describes the company as prompt, communicative and professional, I think it’s fair to assume there was more contact than the four events in the top of the article.


The AI agents don’t appear to know how & where to be economically productive. That still appears to be a uniquely human domain of expertise.

So the human is there to decide which job is economically productive to take on. The AI is there to execute the day-to-day tasks involved in the job.

It’s symbiotic. The human doesn’t labour unnecessarily. The AI has some avenue of productive output & revenue generating opportunity for OpenAI/Anthropic/whoever.


I don't think you could find a single economist who believes humans know how and where to be economically productive


It’s a fundamental principle of modern economics that humans are assumed to act in their own economic interests - for which they need to know how and where to be economically productive.


Humans are assumed to act, and some activities may generate consequences, to which a human may react somehow.

Certainly there is a "survivor bias", but the rationality, long-term viability, and "economic benefit" of those activities and reactions is an open question. Any judgement of "economic benefit" is arbitrary and often made in aggregate after the fact.

If humans knew how to create "economic benefit" in some fundamental and true way, game theory and most regulatory infrastructure would not exist, and I'm saying that as an anarchist.


While I think this is good advice in general, I don’t think your statement that “there is no process to create scalable software” holds true.

The UK government development service reliably implements huge systems over and over again, and those systems go out to tens of millions from day 1. As a rule of thumb, the parts of the UK government's digital suite that suck are the parts the development service hasn't been assigned to yet.

The SWIFT banking org launches reliable features to hundreds of millions of users.

There’s honestly loads of instances of organisations reliably implementing robust and scalable software without starting with tens of users.


The UK government development service, as you call it, is not a service. It's more of a declaration of process that is adhered to across the diverse departments and organisations that make up the government. It's usually small teams that are responsible for exploring what a service is or needs and then implementing it. They are able to deliver decent services because they start small, design and user-test iteratively, and only scale out once there is a really good understanding of what's being delivered. The technology is the easy bit.


The UK Gov has many service and process docs [1]. It started out that way but has grown rapidly and changed, including a library for authentication, frontend templates and libraries, and custom Docker images.

[1]: https://github.com/alphagov


UK GDS is great, but the point there is that they're a crack team of project managers.

People complain about junior developers who pass a hiring screen and then can't write a single line of code. The equivalent exists for both project management and management in general, except it's much harder to spot in advance. Plus there's simply a lot of bad doctrine and "vibes management" going on.

("Vibes management": you give a prompt to your employees vaguely describing a desired outcome and then keep trying to correct it into what you actually wanted)


> and those systems go out to tens of millions from day 1

I like GDS (I even interviewed with them once and saw their dev process etc) but this isn't a great example. Technically GDS services have millions of users across decades, but people e.g. aren't constantly applying for new passports every day.

A much better example I think is Facebook's rollout of Messenger, which scaled to billions of actual users on day 1 with no issues. They did it by shipping the code early in the Facebook app, and getting it to send test messages to other apps until the infra held, and then they released Messenger after that. Great test strategy.
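
The general shape of that approach, sometimes called a dark launch, looks roughly like the sketch below. It's purely illustrative, with made-up names (send_legacy, shadow_send), not Facebook's actual code:

    # Toy "dark launch" sketch: real traffic keeps using the old path, while a
    # small, adjustable fraction of it also exercises the new backend invisibly.
    import logging
    import random

    SHADOW_FRACTION = 0.01  # ramp this up as the new infrastructure holds

    def send_legacy(user, text):
        pass  # stand-in for the existing, user-visible send path

    def shadow_send(user, text):
        pass  # stand-in for the new messaging backend under test

    def send_message(user, text):
        send_legacy(user, text)
        if random.random() < SHADOW_FRACTION:
            try:
                shadow_send(user, text)  # test message the user never sees
            except Exception:
                logging.exception("shadow send failed")  # failures stay invisible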


GDS's budget is about £90 million a year or something. A lot of digital spending still goes through outside contracts, for example PA Consulting at £60 million (over a few years), which does a lot of the gov.uk Home Office stuff, and the fresh grads they hire cost the government more than GDS's most senior staff...


SWIFT? Hold my beer. SWIFT hasn't launched anything substantial since its startup days in the early '70s.

Moreover, their core tech hasn't evolved that far from that era, and the '70s tech bros are still there through their progeny.

Here's an anecdote: The first messaging system built by SWIFT was text-based, somewhat similar to ASN.1.

The next one used XML, as it was the fad of the day. Unfortunately, neither SWIFT nor the banks could handle a 2-3 order-of-magnitude increase in payload size in their ancient systems. Yes, as engineers, you would think compressing the XML would solve the problem, and you would be right. Moreover, XML Infoset already existed, and it defined compression as a function of the XML Schema, so it was somewhat more deterministic, even though not more efficient than LZMA.

But the suits decided differently. At one of the SIBOS conferences they abbreviated the XML tags, and did it literally on paper, without thinking about back-and-forth translation, dupes, etc.

And this is how we landed on the ISO 20022 abbreviations that we all know and love: Ccy for Currency, Pmt for Payment, Dt for Date, etc.
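
For what it's worth, the size trade-off is easy to demonstrate with Python's built-in lzma. The toy message below is fabricated; only the Ccy/Pmt/Dt style of abbreviation comes from ISO 20022:

    # Illustrative only: compare verbose XML, LZMA-compressed XML, and
    # abbreviated-tag XML on a fabricated batch of payment messages.
    import lzma

    verbose = (
        "<Payment><Date>2024-01-15</Date>"
        "<Currency>EUR</Currency><Amount>100.00</Amount></Payment>"
    ) * 1000

    abbreviated = (
        "<Pmt><Dt>2024-01-15</Dt><Ccy>EUR</Ccy><Amt>100.00</Amt></Pmt>"
    ) * 1000

    print("verbose bytes:           ", len(verbose.encode()))
    print("verbose + LZMA bytes:    ", len(lzma.compress(verbose.encode())))
    print("abbreviated bytes:       ", len(abbreviated.encode()))
    print("abbreviated + LZMA bytes:", len(lzma.compress(abbreviated.encode())))

On a repetitive toy payload like this, the compressed verbose version comes out far smaller than the abbreviated-but-uncompressed one.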


Harder to audit when every payload needs to be decompressed to be inspected


Is it? No auditor will read binary, so you already need a preprocessing step to get it to a readable format. And if you're already preprocessing then adding a decompression step is like 2 lines tops.
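
With something like Python's built-in lzma, that step really is about two lines (assuming the payload was LZMA-compressed; compressed_payload below is just a stand-in for whatever the audit tooling actually receives):

    import lzma

    compressed_payload = lzma.compress(b"<Pmt><Ccy>EUR</Ccy></Pmt>")  # stand-in for an incoming message
    readable_xml = lzma.decompress(compressed_payload).decode("utf-8")  # the two lines in question
    print(readable_xml)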


I’d always wanted the World War 2 channel on YouTube to do something like this. They’ve produced incredibly accurate moving borders for every day of WWII for their videos. They’d be a useful historical tool if they were published as an interactive map.

