
> And when I ask them about it, they answer something like "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.

This I think I can explain, because I'm one of these people.

I'm not a programmer professionally for the most part, but have been programming for decades.

AI coding allows me to build tools that solve real world problems for me much faster.

At the same time, I can still take pride and find intellectual challenges in producing a high quality design and in implementing interesting ideas that improve things in the real world.

As an example, I've been working on an app to rapidly create Anki flashcards from Kindle clippings.

I simply wouldn't have done this over the limited holiday time if not for AI tools, and I do feel that the high-level decisions of how this should work were intellectually interesting.

That said, I do feel for the people who really enjoyed the act of coding line by line. That's just not me.


> coding line by line

This phrase betrays a profoundly different view of coding to that of most people I know who actively enjoy doing it. Even when it comes to the typing it's debatable whether I do that "line by line", but typing out the code is a very small part of the process. The majority of my programming work, even on small personal projects, is coming up with ideas and solving problems rather than writing lines of code. In my case, I prefer to do most of it away from the keyboard.

If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.


It's not the typing, obviously, you're right. I think the parent is talking about it being an "intellectual exercise" to organize their thoughts about what they wanted to see as a result, whereas we who enjoy programming enjoy the exercise of breaking down thoughts into logical and algorithmic segments, such that no edge cases are left behind, and such that we think through the client's requirements much more thoroughly than they thought them through themselves. A physician might take joy in finding and fixing a human or animal malady. A roofer might take joy in replacing a roof tile, or a whole roof. But what job besides coding offers you the chance to read through the entire business structure of... a lawyer, a doctor, a roofing company, a bakery... and then decide how to turn their business into (a) a forward-facing, customer-friendly website, (b) a lean data-gathering machine, and (c) a software suite and hosting infrastructure and custom databases tailored to their exact needs, after you've gleaned those needs from reading all their financials and everything they've ever put out into the world?

The joy of writing code is turning abstract ideas into solid, useful things. Whether you do most of it in your head or not, when you sit down to write you will find you know how you want to treat bills - is it an object under payroll or clients or employees or is it a separate system?

LLMs suck at conceptualizing schema (and so do pseudocoders and vibe coders). Our job is turning business models into schemata and then coding the fuck out of them into something original, beautiful, and useful.

Let them have their fun. They will tire of their plastic toy lawnmowers, and the tools they use won't replace actual thought. The sad thing is: They'll never learn how to think.


> The sad thing is: They'll never learn how to think.

Drawing a sense of superiority out of personal choices or preferences is a really unfortunate human trait; particularly so in this case since it prevents you from seeing developments around you with clarity.


I agree with the person you're answering. LLM-assisted coding is like reading a foreign language with a facing translation: most students who do this will make the mistake of thinking they've translated and understood the original text. They haven't. People are abysmal at maintaining an accurate mental accounting of attribution, authorship, and ownership.

> If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically.

So I take it you don't let coding agents write your boilerplate code? Do you instead spend any amount of time figuring out a nice way to reduce boilerplate so you have less to type? If that is the case, and as intellectually stimulating as that activity may be, it probably doesn't solve any business problems you have.

If there is one piece of wisdom I could impart, it's that you can continue enjoying the same problem solving you are already doing and have the machine automate the monotonous part. The trick is that the machine doesn't absorb abstract ideas by osmosis. You must be a clear communicator capable of articulating complex ideas.

Be the architect, let the construction workers do the building. (And don't get me started, I know some workers are just plain bad at their jobs. But bad workmanship is good enough for the buildings you work in, live in, and frequent in the real world. It's probably good enough for your programming projects.)


From the way you describe it, our process does not sound that different, except that this

> If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.

... is exactly how this often works for me.

If you don't get any value out of this at all, and have worked with SOTA tools, we must simply be working in very different problem domains.

That said I have used this workflow successfully in many different problem domains, from simple CRUD style apps to advanced data processing.

Two recent examples to make it more concrete:

1) Write a function with parameter deckName that uses AnkiConnect to return a list of dataclasses with fields (...) representing all cards in the deck.

Here, it one-shots it perfectly and saves me a lot of time sifting through crufty, incomplete docs.

2) Implement a function that does resampling with trilinear interpolation on 3d instance segmentation. Input is a jnp array and resampling factor, output is another array. Write it in Jax. Ensure that no new instance IDs are created by resampling, i.e. the trilinear weights are used for weighted voting between instance IDs on each output voxel.

This one I actually worked out on paper first, but it was my first time using Jax and I didn't know the API and many of the parallelization tricks yet. The LLM output was close, but too complex.

I worked through it line by line to verify it, and ended up learning a lot about how to parallelize things like this on the GPU.

At the end of the day it came out better than I could have done it myself because of all the tricks it has memorized and because I didn't have to waste time looking up trivial details, which causes a lot of friction for me with this type of coding.
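(For the curious, the "weighted voting between instance IDs" idea can be sketched concretely. This is a rough NumPy version, not the commenter's JAX code, and the names are made up: each output voxel takes the ID with the largest summed trilinear weight among its 8 input-space corner neighbours, so interpolation can never invent a fractional ID.)

```python
import numpy as np


def resample_labels(labels: np.ndarray, factor: float) -> np.ndarray:
    """Resample a 3D instance-label volume by `factor` using trilinear-weighted
    majority voting, so no new IDs are ever created."""
    out_shape = tuple(int(round(s * factor)) for s in labels.shape)
    # position of each output voxel in input coordinates
    grids = np.meshgrid(*[np.arange(n) / factor for n in out_shape], indexing="ij")
    base = [np.floor(g).astype(int) for g in grids]
    frac = [g - b for g, b in zip(grids, base)]
    weight_per_id: dict = {}
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                # clamp the corner indices to the volume bounds
                idx = tuple(np.clip(b + d, 0, s - 1)
                            for b, d, s in zip(base, (dz, dy, dx), labels.shape))
                w = ((frac[0] if dz else 1 - frac[0]) *
                     (frac[1] if dy else 1 - frac[1]) *
                     (frac[2] if dx else 1 - frac[2]))
                ids = labels[idx]
                for uid in np.unique(ids):
                    acc = weight_per_id.setdefault(uid, np.zeros(out_shape))
                    acc += w * (ids == uid)
    order = np.array(sorted(weight_per_id))
    stacked = np.stack([weight_per_id[u] for u in sorted(weight_per_id)])
    # winning ID per output voxel -- always one of the input IDs
    return order[np.argmax(stacked, axis=0)]
```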


>> AI coding allows me to build tools that solve real world problems for me much faster.

If you can't / won't / don't read and write the code yourself, can I ask how you know that the code written for you is working correctly?


I do read it. In my experience the project will quickly turn into crap if you don't. You do need to steer it at a level of granularity that's appropriate for the problem.

Also, as I said, I've been coding for a long time. The ability to read the code relatively quickly is important, and this won't work for early novices.

The time saving comes almost entirely from typing less, Googling around less for documentation or examples, and not having to do long debugging sessions to find brainfart-type errors.

I could imagine that there's a subset of ultra experienced coders who have basically memorized nearly all relevant docs and who don't brainfart anymore... For them this would indeed be useless.


I mean, I'm curious what kind of code it's saving you time on. For me, it's worse than useless, because no prompt I could write would really account for the downwind effects in systems that have (1) multiple databases with custom schema, (2) a back-end layer doing user validations while dispatching data, (3) front-end visual effects / art / animation that the LLM can't see or interpret, all working in harmony. Those may be in 4 different languages, but the LLM really just can't get a handle on what's going on well enough. Just ends up hitting its head on a wall or writing mostly garbage.

I have not memorized all the docs to JS, TS, PHP, Python, SCSS, C++, and flavors of SQL. I have an intuition about what question I need to ask, if I can't figure something out on my own, and occasionally an LLM will surface the answer to that faster than I can find it elsewhere... but they are nowhere near being able to write code that you could confidently deploy in a professional environment.


I'm far more in the anti-AI camp than pro-LLM, but I gave Claude the HTML of our Jira ticket and told it we had a Jenkins pipeline that we wanted to use to update specific fields on the ticket using python. Claude correctly figured out how we were calling python scripts from Jenkins, grabbed a library, and one-shotted the solution in about 45 seconds. I then asked it to add a post pipeline to do something else, which it did, and managed to get it perfectly right.

It was probably 2-3 hours work of screwing around figuring out issue fields, python libraries, etc that was low priority for my team but causing issues on another team who were struggling with some missing information. We never would have actually tasked this out, written a ticket for it, and prioritised it in normal development, but this way it just got done.

I’ve had this experience about 20 times this year for various “little” things that are attention sinks but not hard work - that’s actually quite valuable to us


> It was probably 2-3 hours work of screwing around figuring out issue fields

How do you know AI did the right thing then? Why would this take you 2-3 hours? If you’re using AI to speed up your understanding that makes sense - I do that all the time and find it enormously useful.

But it sounds like you’re letting AI do the thinking and just checking the final result. This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.


> How do you know AI did the right thing then?

Because I tested it, and I read the code. It was only like 40 lines of python.

> Why would this take you 2-3 hours?

It's multiple systems that I am a _user_ of, not a professional developer of. I know how to use Jira, I'm not able to offhand tell you how to update specific fields using python - and then repeat for Jenkins, perforce, slack. Getting credentials in (Claude saw how the credentials were being read in other scripts and mirrored that) is another thing.

> This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.

As I said above, it's 30 lines of code. I did put my name behind it; it's been running on our codebase on every single checkin for 6 months, and has failed 0 times in that time (we have a separate report that we check in a weekly meeting for issues that were being missed by this process). Again, this isn't some massive complicated system - it's just gluing together 3/4 APIs in a tiny script in 1/10 of the time that it took me to do it. Worst case scenario is it does exactly what it did before - nothing.
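(The core of such a glue script really is small. A rough stdlib-only sketch of the Jira half; the REST v2 "edit issue" endpoint is real, while the base URL, field ID, and auth header are placeholders you'd fill in from your own instance.)

```python
import json
import urllib.request


def build_field_update(field_id: str, value) -> bytes:
    """Body for Jira's edit-issue endpoint: PUT /rest/api/2/issue/{key}."""
    return json.dumps({"fields": {field_id: value}}).encode()


def update_issue_field(base_url: str, issue_key: str, field_id: str,
                       value, auth_header: str) -> int:
    req = urllib.request.Request(
        f"{base_url}/rest/api/2/issue/{issue_key}",
        data=build_field_update(field_id, value),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Jira answers 204 No Content on success
```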


Hah, even the concept of putting your name behind something is so great. It's kind of the ultimate protest against LLMs and social media, isn't it?

I've used it for minor shit like that, but then I go back and look at the code it wrote with all its stupid meandering comments and I realize half the code is like this:

    const somecolor = '#ff2222'; /* oh wait, the user asked for it to be yellow. Let's change the code below to increase the green and red */

    /* hold on, I made somecolor a const. I should either rewrite it as a var or wait, even better maybe a scoped variable! */

Hah. Sorry, I'm just making this shit up, but okay. I don't hire coders because I just write it myself. If I did, I would assign them all kinds of annoying small projects. But how the fuck would I deal with it if they were this bad?

If it did save me time, would I want that going into my codebase?


I've not found it to be bad for smaller things, but I've found that once you start iterating, it quickly devolves into absolute nonsense like what you talked about.

> If it did save me time, would I want that going into my codebase?

Depends - and that's the judgement call. I've managed outsourcers in the pre-LLM days who if you leave them unattended will spew out unimaginable amounts of pure and utter garbage that is just as bad as looping an AI agent with "that's great, please make it more verbose and add more design patterns". I don't use it for anything that I don't want to, but for so many things that just require you to write some code that is just getting in the way of solving the problem you want to solve it's been a boon for me.


I've also not had great experiences with giving it tasks that involve understanding how multiple pieces of a medium-large existing code base work together.

If that's most of what you do, I can see how you'd not be that impressed.

I'd say though that even in such an environment, you'll probably still be able to extract tasks that are relatively self contained, to use the LLM as a search engine ("where is the code that does X") or to have it assist with writing tests and docs.


Your conclusion is spot on. Fuzz generators excel at fuzzy tasks.

"Convert the comments in this DOCX file into a markdown table" was an example task that came up with a colleague of mine yesterday. And with that table as a baseline, they wrote a tool to automate the task. It's a perfect example of a tool that isn't fun to write and it isn't a fun problem to solve, but it has an important business function (in the domain of contract negotiation).
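(That DOCX-comments task is a nice example of something tedious by hand but short in code. A rough stdlib-only sketch, not my colleague's actual tool: a .docx is a zip archive, and comments, if present, live in word/comments.xml.)

```python
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used throughout .docx XML parts
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"


def comments_to_markdown(docx_path: str) -> str:
    """Extract reviewer comments from a .docx into a markdown table."""
    rows = ["| author | comment |", "| --- | --- |"]
    with zipfile.ZipFile(docx_path) as z:
        if "word/comments.xml" not in z.namelist():
            return "\n".join(rows)  # document has no comments part
        root = ET.fromstring(z.read("word/comments.xml"))
    for c in root.iter(f"{W}comment"):
        author = c.get(f"{W}author", "")
        text = "".join(t.text or "" for t in c.iter(f"{W}t"))
        rows.append(f"| {author} | {text} |")
    return "\n".join(rows)
```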

I am under the impression that the people you are arguing with see themselves as artisans who meticulously control every bit of minutiae for the good of the business. When a manager does that, it's pessimistically called micromanagement. But when a programmer does that, it's craftsmanship worthy of great praise.


Because it does what I want it to do?

Not sure how this is so hard to understand. If you have closed source software, how do you know it's working?


Same way you test code you wrote by hand: in place and haphazardly, until you have it write unit tests so you can do it more methodically. If it hallucinates a library or function that doesn't exist, it'll fail earlier in the process (at compilation).

I've used Claude to write code, and it is much harder to test that code than it is to test code "haphazardly" as I write it myself. Reason being, I can test mine after each new line I write and make sure that line is doing what I intend it to do. After Claude writes a whole set of functions, it could take hours to test all the potential failure modes.

BTW, if it doesn't take you hours to test the failure modes, you're not thinking of enough failure modes.

The time savings in writing it myself has a lot to do with this. Plus I get to understand exactly why each line was written, with comments I wrote, not having to read its comments and determine why it did something and whether changing that will have other ramifications.

If you're doing anything larger than a sample React site, it's worth taking the time to do it yourself.


Well, you could also generate the tests by CC, check them to make sure they’re legitimate, then let it implement it?

The main key in steering Claude this month (YMMV) is basically giving tasks that are localized, can be tested, and are not too general. Then you kinda connect the dots in your head. Not always, but you can kinda get the gist of what works and what doesn't.


> AI coding allows me to build tools that solve real world problems for me much faster.

But it can't actually generate working code.

I gave it a go over the Christmas holidays, using Copilot to try to write a simple program, and after four very frustrating hours I had six lines of code that didn't work.

The problem was very very simple - write a bit of code to listen for MIDI messages and convert sysex data to control changes, and it simply couldn't even get started.


I'm sure someone is about to jump in and tell you why you're doing it wrong, but I'm in a similar position to you. I spent the last few days using AI to help me pull together evidence for our ISO audit, and while it didn't do a bad job, it was rife with basic errors. Simple things like consistently formatting a markdown document would work 9/10 times, with the odd run ignoring the formatting or deciding to rewrite other bits of the document for no reason.

Yeah, unfortunately the quality of tooling varies heavily, ranging from producing garbage to working code. Claude Code got significantly better in the last couple of months, and it's been noticeable. I've been trying to plug LLMs into my workflow throughout the year, to make sure I don't fall behind the industry. And this last month was when it "clicked". It works in large and small projects as long as you kinda know how to localize the tasks.

I know "try this other tool" is probably an eye-roll-worthy response, but as someone who's not a programmer but is in IT and has to write some scripts every once in a while and has a lot of AI-heavy dev friends - all I've ever heard about Copilot is that it's one of the worst.

I recently used Claude for a personal project and it was a fairly smooth process. Everyone I know who does a lot of programming with AI uses Claude mostly.


You wrote out a fantasy here that says more about what content you seek out than anything else.

Should be on the lookout for major upcoming domestic news they're trying to bury.

It doesn't have to be upcoming. This is still a distraction from Epstein.


Great neologism. We should also have vibeflation, the disconnect between the bullshit inflation figures published by politicians and the real inflation people have been seeing in the past few years.

You can determine statistically whether you have found a block relatively early, and conversely whether other miners are unlikely to find one soon.

So you can get a head start on the next block from the likely new head block you've found.

It only works on average of course, you might be the one wasting resources if someone else published a block while you're withholding yours, but the trick is for you to gain an edge on average.

Now what happens if everyone is doing that calculation? That's where you need to do the game theory analysis (which I haven't and don't claim to understand).


> You can determine statistically whether you have found a block relatively early, and conversely whether other miners are unlikely to find one soon.

Finding a block relatively early doesn't affect the odds of others finding a block soon. The odds are always the same, each hash is an independent event.

I don't see why withholding would get you an edge on average. If the others find a block while you're withholding, you lose your reward. If you find another block before them, you get the rewards of 2 blocks, exactly like if the same happened but you didn't withhold.

The only way for you to have an advantage is if you find a 2nd block at the same time as another one finds one on the other chain. You can then publish a height of 2 vs a height of 1, so you win. But to do that you have to first put your first block reward at high risk by withholding it. I don't think the odds are in your favor here.


Yeah, I was thinking about this wrong. I don't think it works.

Edit: I think the strategy does work, but a little differently: if you withhold a block and someone else finds one while you do so, you can still publish yours and win a race with a certain probability, i.e. the expected loss is not as high as one might think.

Then, if you do that and if you have enough hash power, you can end up mining a private chain ahead of the public one often enough, so that the loss you take is less than the loss others take through the hash power they are wasting because of you doing this.


> This sentence assumes a certain degree of shared prosperity. I think this is increasingly an illusion. IMO, Social media tends to create filter bubbles which create illusions of shared prosperity

I think it's exactly the other way around? Wealth inequality (in the US, as an example) has actually not drastically changed in the past few decades, but I do agree the perception of unfairness has increased a lot.

My hunch is that everyone is now being fed wealth porn on social media and comparing themselves to influencers or actual billionaires who actually do live or pretend to live a .01%er lifestyle.

Life's never been fair; but feeling shortchanged for living a solid middle class lifestyle because Bezos has a big yacht seems new.

Ultimately it all feels depressingly materialistic to me. Go work on something actually meaningful!


Cooperation has been "invented" in evolution many times independently and is long term stable in many species.

If your comment was true that fact wouldn't exist.

We may consider the world we live in today competitive, but at the end of the day, humanity is a globe spanning machine that exists due to cooperative behavior at all scales.

Comments such as yours are really missing the forest for the trees.

I suspect that it's really the fact that cooperation is so powerful and pervasive that makes it normal to the point where any deviation from it feels outrageous.

So you focus on the outrageous due to availability bias (seeing the trees rather than the forest).


You seem to be misunderstanding the GP.

Evolution does not work maximizing individual success.


> Evolution does not work maximizing individual success.

Yes it does. In fact, unless you want to get nit-picky about intra-gene, inter-allele selection, that is _exactly_ what it does.


But it does? What do you think it optimizes other than individual fitness?

I think I understand the GP pretty well. Cheating, or defection in the language of evolutionary theory, is subject to frequency based selection, meaning it is strongly selected against if its frequency is too high in the population. It's not a stable strategy.

It can be a winning strategy for a few individuals in a cooperative environment, yes, but it breaks down at a point because the system collapses if too many do it.

And yet, cooperative systems are common and stable, which is my point.


>What do you think it optimizes other than individual fitness?

Chance to pass genes forward. This is only equivalent to individual fitness for very solitary species and humans aren't.

As an extreme example, take soldier termites - their chance to pass their genes is zero, but the chance for the colony to survive grows. Also gay people exist (they also - usually - don't reproduce, but help others instead).

Humans naturally care about their family and tribe because this increases the chance of their bloodline to survive.


That's a distinction without a difference. Worker ants have high individual fitness if their colony successfully reproduces because they pass their genes forward.

In evolutionary theory this is made clear by using the term "inclusive fitness" - worker ants actually pass their genes on to future generations more effectively by taking the detour, if you will, through the queen.

If you want to be nitpicky and argue we should consider the individual gene the unit of selection, as Dawkins famously argued, I'm not going to disagree, you can see it that way too.

That specific distinction very rarely leads to different predictions though.


A world where everyone is a Giver is not a stable world. Ask Gemini or Claude to explain. Cheating by definition works only in the minority. If everyone is in line to buy tickets, only a few cheaters can get early tickets, and it is a stable strategy. But if everyone is a cheater, everyone is worse off.

As far as DB goes, I'm pretty sure it's mostly an issue of systemic technical and consequently social collapse.

The system runs beyond its limits and consequently the culture collapses because the people inside learn they have no agency.

The German rail network is quite good on paper, with dense and high frequency connections even to relatively remote locations.

But keeping that functional (particularly with constantly rising demand) requires far more investment than it receives.

All the examples of great rail systems (France, Switzerland, Japan) are both simpler in network structure and invest more relative to their passenger load.


The privatization of the train system in Germany was a particularly insane disaster that is only now, 30 years later, being undone/repaired.

If you look at an org chart of the DB these days, the most fascinating part is that DB consists of almost 600 separate corporate entities that are all supposed to invoice each other.

Speaking with insiders, it appears that when the privatization happened, the new corporate structure took what was essentially every mid-size branch of the org chart and created a separate corporate entity, with cross-invoicing for what would normally be intra-company cooperation. I think the (misguided) goal was to obtain some form of accountability inside a large organisation that had been state-funded and not good at internal accounting.

This fragmentation led to insane inflexibility, as each of the 600 entities has a separate PnL and is loath to do anything that doesn't look good on their books.

Add to this a history of incompetent leadership (Mehdorn, who also ran AirBerlin into the ground, and who was also responsible for the disastrous BER airport build-out), repeated rounds of cost-cutting that prioritized “efficiency” over “resiliency of the network” etc. etc.

DB is currently undergoing a massive corporate restructuring to simplify the 600+ entity structure, but there has been a massive loss of expertise, underinvestment in infrastructure, poor IT (if you see a job ad for a Windows NT4 admin, it’s likely DB), etc. etc. — it’ll take a decade or more to dig the org out of the hole it is in.


It was a privatization in name only: the German state has held 100% of its shares from the beginning. As such, it might no longer have been subject to state-specific demands on hiring etc. - but instead found itself in an uneasy tension as the only supplier of services to an entity that was something between a customer and a shareholder.

Which brings up an interesting question: How do you structure something with a large piece of infrastructure like a rail network in a way that could benefit from the market forces of competition and innovation?


> Which brings up an interesting question: How do you structure something with a large piece of infrastructure like a rail network in a way that could benefit from the market forces of competition and innovation?

A rail network is near to a natural monopoly. You can build overlapping rail networks, but it's complex and interconnecting instead of overlapping would usually offer better transportation outcomes and there's a lot less gauge diversity so interconnection is more likely than overlap.

All that to say, you can't really get market forces on the rails. Rails compete with other modes of transit, but roads and oceans and rivers and air aren't driven by market forces either.

Transit by rail does compete in the market for transit across modes. You can have multiple transportation companies running on the same rails, and have some market forces, but capacity constraints make it difficult to have significant competition.


> capacity constraints make it difficult to have significant competition

Thirty years ago, you would be correct. In the modern day, you could tie switch signalling to real-time auctions and let private rail's command centers decide how much to bid and thus whether or not they win the slot for putting their cars onto the shared rails. The public rail owner likely needs to set rules allowing passenger rail to pay a premium to secure slots in advance (say, a week) so that a timetable can be guaranteed to passengers during peak rush hour, but off-peak slots can and should be auctioned to naturally handle the difference between off-peak passenger rail and not-time-sensitive, more-cost-averse freight rail.


You can’t. Every attempt at privatizing rail is a failure with worse performance, higher prices, and an inevitable level of special treatment by the state due to the monopolistic utility-like nature of rail infrastructure. Not everything needs to or should be privatized.

This 100%. It should be seen as critical infrastructure because of everything it can enable when run well.

> It was a privatization in name only.

Not that "insight" again. Yes, it was privatized, and yes, it is still completely owned by the state. "Privatization" is a term of art (in German) that refers to the corporate structure, not the ownership. There are also public corporations in Germany that are fully owned by random people: e.V. = registered association.


I believe modern economists are studying how ownership should be assigned. The thinking is that contracts and rules handle the majority of situations but emergencies and edge cases require an owner who has authority and whose interests align with the thing they control. And you want a mechanism to reassign ownership when the previous owner is incompetent.

In the case of a national train system, you may want to create a national entity to develop, coordinate, and make the physical trains and support technologies. You would create regional or metro entities to control the train network for their local area including the train stations. They coordinate with each other via negotiated contracts. Any edge cases or emergency falls under the purview of the owning entity. For example, the national entity controls the switch from diesel locomotives to the newest engine. The local authority is responsible for repairing the lines after a natural disaster.

If an entity is egregiously incompetent or failing, the national regulatory authority, with support of the majority of all the different train entities, takes control and reforms it.


keep the rails as a state-owned monopoly, let different train operators run on it. Basically we have that for airplanes, and it works well enough.

>invest more relative to their passenger load.

For Switzerland, does this account for the almost-double salaries, or only absolute spending?

If you spend 1€ in Switzerland I imagine you get much less work output than for 1€ in Germany.


Raw investment numbers don't necessarily matter, but the productivity of said number. Even if things are more expensive in Switzerland, if they make efficient use of said investment, then it can work out ok (or even better).

I have no idea if this is actually the case, but you have to take that into account, or Switzerland would not be as successful as it is. Higher incomes have historically been a symptom of productivity (and while median incomes and productivity have decoupled, especially in the anglosphere, they are still usually correlated).


>Higher incomes have historically been a symptom of productivity

If I go to Zürich I get a burger for 30Fr that I can get in Southern Germany for 15€ and in Berlin for 8€. That is with roughly the same quality.

I'd say past productivity leads to network effects and investments in one area that boost local salaries and decouples them quite strongly from current productivity.

My previous company had an extremely unproductive (per dollar) location in Silicon Valley. The people there weren't at fault; you don't magically become more productive because you live next to SF.


That's the crux: we must invest in trains instead of planes.

I have no idea how planes are the dominant form of transport for relatively short routes (like within the bounds of a large country or to an adjacent one) and how even in Europe the train networks can be a bit of a mess.

Like surely it’s easier to run a railway network when compared to the insane complexity to safely operate an airport and all the work that goes into plane maintenance and pilot training and so on.


You need a lot of infrastructure for trains (and a lot of it isn't even used all that much -- it's not like all rails have a train passing by every 5 minutes). You also can't get much use out of your rolling stock because the speeds are fairly slow. You also don't have the same flexibility as planes have regarding routes.

The upshot is that trains are a lot costlier than most people think, and most railway routes require state subsidies (goods transport usually being an exception), whereas air traffic works so well it can be taxed heavily.


Air traffic is not taxed heavily compared to other modes of transport - on the contrary, it is very heavily subsidized (at least in Europe): Regional airports often strongly depend on state subsidies, airlines are exempt from petroleum taxes, flight tickets are VAT-exempt.

In Germany (and also e.g. Switzerland), long-distance trains are expected to run either at cost (or make a profit). Short-distance trains (regional transport) are usually subsidized.


Another factor is that building new rail lines requires eminent domain and acquiring land across multiple jurisdictions etc.

Why not invest in a vast 24/7 high frequency electric bus network instead of the big infrastructure costs of trains?

Sounds neat but what kind of range limits would that impose on each trip? Switching from one means of transportation to another, even if both are buses, increases the total travel time significantly. Not to mention all the hassle involved for passengers.

Trains can be super fast. For example, a TGV from Strasbourg to Marseille takes 5-6h. The same trip by car takes me 8h. A bus is even slower, so I would wildly guess 12h. A plane, by the way, is 1.5 hours.

Why?

Planes are faster, and there is actual competition keeping prices down. There is no competition on railroads, no accountability, no nothing. More importantly, railroads have to be managed centrally to work. And this makes them overwhelmingly complex, resulting in an ever-growing bureaucracy.

Air travel is decentralized, and while individual airports (cue: BER) can get screwed up, it doesn't cascade through the whole system.

We just need to add a bit of carbon pricing to reflect the true price of flights.


No amount of money will overcome the fundamental issue: monopoly.

Airlines are subject to market competition since any competitor around the globe can spot a poorly run route and buy their planes into those slots. If they can execute more efficiently than you, they can afford to lower prices (or increase the level of service) more than you, and thus put you out of business.

Trains do not work this way. No amount of investment can overcome the cushy institutional-rot, laziness, and demotivation that inevitably results from being a monopoly, as most train routes are not subject to competitive forces due to the real world constraints of the infrastructure needed.


France, Italy, Austria (and probably others) don't have a monopoly on long-distance trains. For instance, you can take a DB/Renfe/Trenitalia train on French high-speed lines, or in Austria take a Westbahn train instead of ÖBB.

That said, personally I much prefer the mostly fixed pricing (and no reservation required) of the Swiss network to the dynamic pricing of other countries.


It doesn't really work all that well. The "everything above the rail" model suffers from the fundamental problem that "everything above the rail" is just a minor component in the overall cost of rail.

Yes, that's the same as for roads/highways, those are better publicly managed.

China and Switzerland seem to do fine with trains.

The German rail network went downhill when they decided to socialize the losses and privatize the profits. Failure is blamed on the grunt workers, who, as a result, are absolutely not interested in taking responsibility. The fact that there are rotting railways everywhere, and that DB waits until it gets so bad that cities step in and take over part of the cost, is a wonderful example of this. The new ICE's speed is actually lower than previous generations.

I have seen this systemic problem in other domains I worked in. The problems are very similar, and at the end of the day I can somewhat relate to the workers' attitude of "why should I lean out of the window if I get punished anyway". But in some cases the workers are unfireable, and often it is exactly that attitude that lets management get away with the terrible working conditions (most of the time more psychological than physical abuse), so it feeds on itself.


Just an aside, as a railway-nerd:

> The new ICE's speed is actually lower than previous generations.

While not the fastest ICE, the new ICE-L (assuming you refer to it), with a top speed of 230 km/h, is not actually slower than what it is supposed to replace on most routes: InterCity trains, topping out at 200 km/h.

ICE-L, btw, was planned to be an IC train, but just like before with IC-T/ICE-T (same top speed of 230 km/h) and ICx (ICE 4), DB management has a tendency to decide at the next-to-last minute that new vehicles must earn money and thus get rebranded as ICE, which is both more prestigious and (at least in a fictional world without "Sparpreis") more pricey.

TL;DR: This would be outrageous if ICE-L was to replace ICE 3 (neo; 320km/h +) services - but it is not.


Yeah, I didn't feel like looking up the exact details, so thank you for adding that. I didn't know that it was rebranded like that; I was just baffled at the outcome. Our mechanical engineering professor was responsible for the ICE braking system a long time ago, and those guys were all extremely good.

The other aspect is that there is a whole host of peripheral issues, one of which is track maintenance, meaning that on a lot of segments the ICE will not reach its top speed.


Could be. It used to be that getting phone service in Germany could take up to a month after putting in the order; that was when it was state controlled. After the reforms installations were quicker.

So to me, there doesn’t seem to be a panacea except to hold the services accountable in some way.


That's a different situation / scenario and addressed a different problem.

The government is the most efficient and effective at big capital spending and with what I would call static operations. Competitive private entities are the best at delivering value on the front-end.

Monopolist/cartel private entities combine the rapacious nature of rent seeking with the lazy inefficiency of bureaucracy to create a giant ball of failure. Effective privatization requires either creating a framework for a robust competitive landscape OR tight, effective regulatory control. There's no universal correct answer.

If competition is in place and companies can win or lose, they will move mountains to yield marginal gain. If you let them get fat and lazy, you will need to move a mountain to do anything -- even make more money!


> If competition is in place and companies can win or lose, they will move mountains to yield marginal gain.

... in the short term, happily screwing over society at large and possibly even themselves in the medium to long term. Perverse incentives are everywhere.


> After the reforms installations were quicker.

And everybody has the same "market" price.


Add to that the transport business being marginal to the company, which is mainly an immo speculation company trying to sell the strips of inner-city land it holds.

Immo being real estate (Immobilien), for the curious.

It's a big, systemically failing organization running way beyond its capacity, with failures rippling through and compounding in a tightly coupled rail network.

If they weren't able to announce the train would stop at one station, why do you think they'll be able to do that at another?

I'm pretty sure train conductors aren't allowed to just stop somewhere unscheduled for good reasons, there's always a train behind and in front of them with no buffer.


Train conductors are not controlling the train; that is done from a central (regional) control center that manages all trains of the region. Only there does someone decide where trains go or stop.

Usually the train driver is in radio contact with central control and can request changes to the points, signals etc so they can make unscheduled stops. For example if there's a medical emergency on board and a passenger needs to be transferred to an ambulance.

Of course doing this can have ripple effects on other services, and if a common factor has severely delayed dozens of different trains, the central control room might not have enough staff to deal with dozens of unscheduled stop requests.


Also, everyone complains about punctuality. A train stopping willy-nilly somewhere it's not scheduled can very much cause exactly that in such a tightly scheduled system. So if you can, you avoid it.

Since we don't know "the other side of the story", we can't really tell. All people here see is the "I got kidnapped". If the story was written from the control room person's perspective, they might write a fascinating story about how they single-handedly avoided 17 trains being late by sending one train on a detour.

Would be awesome if there was someone on HN who knows whether DB actually has the capacity to run a scheduling algorithm for their network within a few minutes, repeatedly, for many different trains, at a moment's notice. What kind of infra do they have for that, what do they use? With a large, interconnected network that's already tightly scheduled, that can't be easy.
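To get a feel for why this is hard, here's a deliberately toy sketch of the re-dispatching problem (this is not how DB's systems work; all names, the 5-minute headway, and the greedy strategy are made up for illustration): trains compete for slots on a shared single-track segment, long-distance trains get priority, and a minimum headway must be enforced. Real dispatching has to solve thousands of such conflicts jointly, which is what makes fast rescheduling a serious optimization problem.

```python
from dataclasses import dataclass

@dataclass
class Train:
    name: str
    ready_at: int       # minutes: earliest time it can enter the segment
    long_distance: bool

HEADWAY = 5  # minimum minutes between two trains entering the segment

def dispatch(trains):
    # Greedy: long-distance trains first, then by readiness.
    order = sorted(trains, key=lambda t: (not t.long_distance, t.ready_at))
    schedule, segment_free_at = [], 0
    for tr in order:
        slot = max(tr.ready_at, segment_free_at)   # wait for the track to clear
        schedule.append((tr.name, slot))
        segment_free_at = slot + HEADWAY
    return schedule

# Hypothetical example: a regional train ready at t=0 still departs after
# the prioritized long-distance train that is only ready at t=2.
print(dispatch([Train("RE7", 0, False), Train("ICE 1023", 2, True)]))
```

Even in this one-segment toy, the regional train waits seven minutes despite being ready first; with many interacting segments, greedy choices like this cascade, which is exactly the ripple effect discussed above.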

OP was also unlucky in that he was on a regional train. Long-distance trains are usually prioritized, since a regional train can more easily wait on a lower-speed-limit track somewhere than a fast long-distance train can on a potentially shared single-track bottleneck.


Best example of this is when they hit someone. Train has to stop, control center has no say in it.

(For longer “technical” delays, keep an eye out for emergency vehicles without their sirens on.)


> Train has to stop, control center has no say in it.

And then you have cascading delays across a whole region.


Yes that was my point.

But mapping raw values to screen pixel brightness already entails an implicit transform, so arguably there is no such thing as an unprocessed photo (that you can look at).

Conversely, the output of standard transforms applied to raw Bayer sensor output might reasonably be called the "unprocessed image", since that is the intended output of the measurement device.
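To make the "implicit transform" point concrete, here's a minimal sketch of the standard steps between a Bayer mosaic and something viewable (assuming an RGGB pattern and a crude 2x2-binning demosaic; real camera pipelines are far more elaborate, with white balance, color matrices, etc.):

```python
def raw_to_display(bayer, gamma=2.2):
    """Collapse each 2x2 RGGB cell of a Bayer mosaic into one RGB pixel
    (half resolution), normalize, and gamma-encode for display.
    bayer: list of rows of raw (linear) sensor values."""
    h, w = len(bayer), len(bayer[0])
    peak = max(max(row) for row in bayer)
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = bayer[y][x]                              # red sample
            g = (bayer[y][x + 1] + bayer[y + 1][x]) / 2  # average two greens
            b = bayer[y + 1][x + 1]                      # blue sample
            # Normalizing and gamma-encoding is the implicit transform:
            # without it, linear sensor values look far too dark on screen.
            row.append(tuple((v / peak) ** (1.0 / gamma) for v in (r, g, b)))
        out.append(row)
    return out
```

Even this minimal path makes choices (demosaic strategy, normalization, gamma) that change what you see, which is the sense in which no viewable photo is "unprocessed".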


Would you consider all food in existence to be "processed", because ultimately all food is chopped up by your teeth or broken down by your saliva and stomach acid? If some descriptor applies to every single member of a set, why use the descriptor at all? It carries no semantic value.
