The old warez cracking scene had an outsize impact on computer security. GRSecurity, Heartbleed vulnerability, most reverse engineering tools for security, etc. etc. etc.
There's so much history here, touching on all sorts of insanity including selling 0-day to the US government that was then used to apprehend high-level Al-Qaida personnel, random warez busts leading to people taking overseas jobs, etc. etc. etc.
If anyone still has old .NFO archives from 1990-2000, I'd be very interested in getting as many as possible.
The privatization of the train system in Germany was a particularly insane disaster that is only now, 30 years later, being undone/repaired.
If you look at an org chart of the DB these days, the most fascinating part is that DB consists of almost 600 separate corporate entities that are all supposed to invoice each other.
Speaking with insiders, it appears that when the privatization happened, the new corporate structure took what was essentially every mid-size branch of the org chart and turned it into a separate corporate entity, with cross-invoicing for what would normally be routine intra-company cooperation. I think the (misguided) goal was to obtain some form of accountability inside a large organisation that had been state-funded and was not good at internal accounting.
This fragmentation led to insane inflexibility, as each of the 600 entities has a separate P&L and is loath to do anything that doesn’t look good on its books.
Add to this a history of incompetent leadership (Mehdorn, who also ran AirBerlin into the ground, and who was also responsible for the disastrous BER airport build-out), repeated rounds of cost-cutting that prioritized “efficiency” over “resiliency of the network” etc. etc.
DB is currently undergoing a massive corporate restructuring to simplify the 600+ entity structure, but there has been a massive loss of expertise, underinvestment in infrastructure, poor IT (if you see a job ad for a Windows NT4 admin, it’s likely DB), etc. etc. — it’ll take a decade or more to dig the org out of the hole it is in.
It was a privatization in name only. The German state has held 100% of the shares since the beginning. As such, DB may no longer have been subject to state-specific rules on hiring etc. - but it instead found itself in an uneasy tension as the sole supplier of services to an entity that was something between a customer and a shareholder.
Which brings up an interesting question: How do you structure something with a large piece of infrastructure like a rail network in a way that could benefit from the market forces of competition and innovation?
> Which brings up an interesting question: How do you structure something with a large piece of infrastructure like a rail network in a way that could benefit from the market forces of competition and innovation?
A rail network is close to a natural monopoly. You can build overlapping rail networks, but it's complex, and interconnecting instead of overlapping would usually offer better transportation outcomes; with relatively little gauge diversity, interconnection is more likely than overlap.
All that to say, you can't really get market forces on the rails. Rails compete with other modes of transit, but roads and oceans and rivers and air aren't driven by market forces either.
Transit by rail does compete in the market for transit across modes. You can have multiple transportation companies running on the same rails, and have some market forces, but capacity constraints make it difficult to have significant competition.
> capacity constraints make it difficult to have significant competition
Thirty years ago, you would be correct. In the modern day, you could tie switch signalling to real-time auctions and let private rail's command centers decide how much to bid and thus whether or not they win the slot for putting their cars onto the shared rails. The public rail owner likely needs to set rules allowing passenger rail to pay a premium to secure slots in advance (say, a week) so that a timetable can be guaranteed to passengers during peak rush hour, but off-peak slots can and should be auctioned to naturally handle the difference between off-peak passenger rail and not-time-sensitive, more-cost-averse freight rail.
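A minimal sketch of what such a slot allocation could look like (all names, prices, and rules here are hypothetical, not any real railway's system): passenger operators can pre-book peak slots at a fixed premium, and every remaining slot goes to the highest real-time bidder.

```python
# Hypothetical sketch of a rail-slot allocation scheme: passenger operators
# may pre-book peak slots at a flat premium; all other slots go to the
# highest real-time bid. Names and rules are illustrative only.

PEAK_PREMIUM = 100  # flat price for guaranteeing a peak slot a week ahead

def allocate_slot(slot, prebookings, bids):
    """Return (winner, price) for one track slot.

    slot        -- dict with 'id' and 'peak' (bool)
    prebookings -- {slot_id: operator} passenger pre-reservations
    bids        -- {operator: amount} real-time bids for this slot
    """
    # Pre-booked peak slots are guaranteed to passenger rail, regardless
    # of how much freight operators would bid right now.
    if slot["peak"] and slot["id"] in prebookings:
        return prebookings[slot["id"]], PEAK_PREMIUM
    if not bids:
        return None, 0
    # Otherwise: highest bidder wins, paying the second-highest bid
    # (a Vickrey-style rule, to discourage strategic underbidding).
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price
```

The second-price rule is one plausible design choice here: it lets freight operators bid their true willingness to pay for off-peak slots without gaming the clearing price.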
You can’t. Every attempt at privatizing rail is a failure with worse performance, higher prices, and an inevitable level of special treatment by the state due to the monopolistic utility-like nature of rail infrastructure. Not everything needs to or should be privatized.
Not that "insight" again. Yes, it was privatized, and yes, it is still completely owned by the state. "Privatization" is a term of art (in German) that refers to the corporate structure, not the ownership. The reverse also exists: there are public corporations in Germany that are fully owned by random private individuals - an e.V. (eingetragener Verein) is a registered association.
I believe modern economists are studying how ownership should be assigned. The thinking is that contracts and rules handle the majority of situations but emergencies and edge cases require an owner who has authority and whose interests align with the thing they control. And you want a mechanism to reassign ownership when the previous owner is incompetent.
In the case of a national train system, you may want to create a national entity to develop, coordinate, and make the physical trains and support technologies. You would create regional or metro entities to control the train network for their local area including the train stations. They coordinate with each other via negotiated contracts. Any edge cases or emergency falls under the purview of the owning entity. For example, the national entity controls the switch from diesel locomotives to the newest engine. The local authority is responsible for repairing the lines after a natural disaster.
If an entity is egregiously incompetent or failing, the national regulatory authority, with support of the majority of all the different train entities, takes control and reforms it.
There's been an ongoing issue with North Korean state agents infiltrating SV companies, and this proposal helps them pass the interview process more easily.
There's multipronged benefit for them: Access to company infrastructure to potentially cause harm or ransom in the future, access to technology / intelligence, but also simply foreign currency.
There's an even bigger number of Indian candidates trying to scam their way into US jobs than North Korean ones.
Especially when the recruiting process of big companies becomes predictable and well documented online, candidates will just perfect the targeting and cheating of that specific system.
What if the future just becomes in-person interviews again, because every remote candidate will either be a deepfaked scammer with a stolen ID, or a cheater with someone nearby whispering AI-generated answers to them?
Employers are already including 'proof-of-life' checks on the low-hanging-fruit freelance sites such as Upwork. One example is literally having to get on a video call and obstruct your face with your hand or something similar, so that an automated impersonation can't pass the check.
Xoogler here (2011-2018). It's heartwarming that a core part of Google culture ("for every problem we have 3 solutions: 2 that are deprecated and 1 that is experimental") is alive and well.
I only remember 2015 TF and I was wondering: why would I use Python to assemble a computational graph when what I really want is to write code and then differentiate through it?
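The "write code, then differentiate through it" style (what eager mode and JAX later delivered) can be illustrated with a toy forward-mode autodiff built on dual numbers - a sketch for intuition, not how any real framework implements it:

```python
# Toy forward-mode autodiff with dual numbers: instead of assembling a
# computational graph and executing it later (the 2015-TF style), you write
# ordinary Python and differentiate through it directly. Illustrative only.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def grad(f):
    """Return df/dx, computed by running f on a dual number."""
    return lambda x: f(Dual(x, 1.0)).deriv

# Ordinary Python code - no separate graph-construction step:
def f(x):
    return 3 * x * x + 2 * x + 1

# grad(f)(2.0) == 14.0, since f'(x) = 6x + 2
```

The derivative falls out of just executing the function; contrast that with graph-mode TF, where you first describe the computation in Python and only then run it in a session.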
The reason I'm negative is the entire article has zero detail on WTF this instruction set is or does. The best you can do is guess from the name of the instruction set.
Compare the linked iPhone article to this blog and you'll quickly see the difference. There's very real discussion in the MTE article of how the instructions work and what they do. This article just says "Memory safety is hard and we'll fix it with these new instructions that fix memory safety!"
So there's a long intellectual history behind these technologies, and Intel had multiple chances to take the lead on this around 2018 - they failed to do so, some of the talent went to Apple, and now Intel has to play catch-up.
I'm pretty certain it'll be the x86 variant of either MTE or MIE.
According to this (https://www.devever.net/~hl/ppcas), the POWER approach is not a true hardware capability architecture (“nothing about these ISA extensions provides any kind of security invariant against a party which can generate arbitrary machine code”). It's just something that helps software store one bit per 128 bits of data on the side (plus some other weirdness about load-with-offset instructions).
(SPARC ADI is similar, machine code is still trusted.)
Probably because it's very likely that both AMD and Intel have had engineers working on this sort of thing for a long time, and they're now deciding to collectively hash out whatever the solution is going to be for both of them.
A lot of these extensions come from Intel/AMD/etc clients first, and because of how long it takes a thing to make it into mainstream chips, it was probably conceived of and worked on at least 5 years ago, often longer.
This particular thing has a long history and depending on where they worked, they know about that history.
However, they are often covered by extra layers of NDA's on top of whatever normal corporate employee NDA you have, so most people won't say a ton about it.
I don't know if it is intended this way, but there's one useful outcome even with the limited amount of detail disclosed:
There are industry partners who work closely with AMD and Intel (with on-site partner engineers etc.), but who are not represented in the x86 ecosystem advisory group, or maybe they have representation, but not at the right level. If these industry partners notice the blog post and they think they have technology in impacted areas, they can approach their contacts, asking how they can get involved.
Yeah it's the most succinct explanation I've seen of weird machines and memory tagging. Definitely bookmarking this one. I wonder if video of the talk that presumably presented this is available.
Is there a comparison of memory tagging designs for different architectures (POWER, SPARC, CHERI/Morello, Arm MTE/eMTE, Apple MIE, x86, RISC-V)? e.g. enforcement role of compiler vs. hardware, opt-in vs mandatory, hardware isolation of memory tags, performance impact, level of OS integration?
The usual story they tell themselves is that the software is used against criminals, child pornography, and terrorism. Which is not wrong: in the majority of jurisdictions, the majority of the use cases are probably exactly that.