I guess I will never understand the microservices vs. monolith debate. What about just "services"? There are 1001 reasons you might want to peel functionality off into a separate service. Just do that, without turning it into some philosophical debate.
I get that Gas Town is part tongue-in-cheek, a strawman to move the conversation on Agentic AI forward. And for that I give it credit.
But I think there's a real missed opportunity here. I don't think it goes far enough. Who wants some giant, complex system of agents conceived by a human? The agents, their roles and their relationships, could be dynamically configured according to the task.
What good is removing human judgment from the loop, only to constrain the problem by locking in the architecture a priori? It just doesn't make sense. Your entire project hinges on the waterfall-like nature of the agent design! That part feels far too important, but Gas Town doesn't show much curiosity about changing it. These Mayors, and Polecats, and Witnesses, and Deacons... are but one of infinite ways to arrange things. Why should there be just one? Why should there be an up-front design at all? A dynamic, emergent network of agents feels like the real opportunity here.
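To make that concrete, here's a minimal sketch of the difference, in Gleam. Everything here is hypothetical, and `plan_roles` stands in for a real planner model:

```gleam
import gleam/list

// A toy model: an agent is a role plus a prompt, and a network is
// a set of agents plus who-talks-to-whom edges.
pub type Agent {
  Agent(role: String, prompt: String)
}

pub type Network {
  Network(agents: List(Agent), edges: List(#(String, String)))
}

// The Gas Town approach: a roster fixed at design time, identical
// for every task.
pub fn static_network() -> Network {
  Network(
    agents: [
      Agent("mayor", "..."),
      Agent("polecat", "..."),
      Agent("witness", "..."),
    ],
    edges: [#("mayor", "polecat"), #("mayor", "witness")],
  )
}

// The alternative: a planner proposes roles and edges per task, so
// the topology is an output of the system rather than an input to it.
pub fn dynamic_network(
  task: String,
  plan_roles: fn(String) -> List(#(String, String)),
) -> Network {
  let edges = plan_roles(task)
  let roles =
    edges
    |> list.flat_map(fn(edge) { [edge.0, edge.1] })
    |> list.unique
  Network(
    agents: list.map(roles, fn(role) {
      Agent(role, "You are the " <> role <> " for: " <> task)
    }),
    edges: edges,
  )
}
```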
Speaking only of written communication here: I've noticed a distinct trend of people abandoning documentation, comments, release notes, etc. intended for human consumption and devoting their writing efforts instead to skills, prompts, and CLAUDE.md files intended for machines.
While my initial reaction was dystopian horror that we're losing our humanity, I feel slightly different after sitting with it for a while.
Ask yourself: how effective was all that effort, really? Did any humans actually read and internalize what was written, or did it just rot in the company wiki? Were we actually communicating effectively with our peers, or just spending lots of time trying to? Let's not retcon our way into believing the pre-AI days were golden. So much tribal knowledge has been lost, NOT because no one documented it but because no one bothered to read it. Now at least the AI reads it.
For me, the loneliest period of my life was when I was socially active but hanging out with people that I didn't really like or respect. Don't neglect spiritual and mental health as a strong component of loneliness. It's not always about dragging your body from one event to another to maximize the number of people in your life. You have to make sure your mind, body and spirit are present and aligned.
It's remarkable how these papers show a deep understanding of programming 50 years ago. Even with anemic hardware, the limit is always in the programmer's brain, as uncomfortable as that is to admit. Half a century of new tech, AI, the cloud, etc., and we still hit "terminal trauma" fairly quickly in the development cycle, almost like clockwork. All the tools and technical tricks don't seem to matter against our ability to hold the application in our heads.
I don't know what it is about observability that brings out the over-engineering in us. I haven't actually measured it, but I suspect many startups doing things the "modern" way (i.e., logs, distributed traces, metrics, and ci/prod/stage/dev environments going to a dozen different services) generate more metadata than actual data. I mean, we do need to observe our systems, but ultimately that data is a second-class citizen to what we're observing in the first place: the application.
At the most absurd, I've seen observability systems replicated across 3 data centers for an application that hadn't been built yet. But don't worry, by the time it was released, they'd have an observability system so good it couldn't fail (narrator: "It did, in fact, fail.")
In a language that is otherwise as simple as it could possibly get away with (no `if`!), `use <-` initially feels like magic and somewhat out of place.
But take a look at nested callback code, the pyramid of doom, and you see why it's pragmatically necessary. It's a brilliant design that incorporates just enough metaprogramming magic to make it ergonomic. The LSP even lets you convert back and forth between the nested callback style and `use`, so you can strip away the magic with one code action if you need to unravel it.
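A minimal sketch of what that sugar buys you, with hypothetical `open` and `read` helpers standing in for real effectful functions:

```gleam
import gleam/result

// Hypothetical helpers for illustration.
fn open(path: String) -> Result(String, String) {
  Ok("<handle:" <> path <> ">")
}

fn read(handle: String) -> Result(String, String) {
  Ok("contents of " <> handle)
}

// Without `use`: every fallible step nests one callback deeper.
fn load_nested() -> Result(String, String) {
  result.try(open("config.txt"), fn(file) {
    result.try(read(file), fn(data) { Ok(data) })
  })
}

// With `use`: the same code, flattened. `use x <- f(args)` passes the
// rest of the function body to f as its final callback argument.
fn load_flat() -> Result(String, String) {
  use file <- result.try(open("config.txt"))
  use data <- result.try(read(file))
  Ok(data)
}
```

The two forms are equivalent; `use` is pure syntactic sugar, which is why the LSP can convert mechanically between them.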
LLMs tend to rise to the level of complexity of the codebase. They are probabilistic pattern-matching machines, after all. It's rare to have a 15-year-old repo without significant complexity; is it possible that the reason LLMs have trouble with complex codebases is simply that the codebases are complex?
IMO it has nothing to do with LLMs. They just mirror the patterns they see - don't get upset when you don't like your own reflection! Software complexity is still bad. LLMs just shove it back in our face.
Implication: AI is always going to feel more effective on brand-new codebases without any legacy weight, and less effective on "real" apps where the details matter.
The bias is strongly evident: you rarely hear anyone talking about how they vibe-coded a coherent changeset to an existing repo.
I've never thought of open source as something you can make money on directly. It's hard to see how it benefits an IC economically, besides getting some recognition and a sense of pride.
Open source has always felt explicitly like a benefit for companies.
- They get free code; buy vs. build is irrelevant when you can just pip install.
- Systems become largely homogenized, so contributors are replaceable.
- They get an established pool of workers who know the technology already, no training required.
- They get free labor from contributors outside their organization maintaining their dependencies for them in perpetuity.
It's a great deal for employers! Especially if they forbid their employees from contributing back! If you work out the game theory, there's literally no reason for a company to do anything but sit back and siphon the benefits for themselves.
This doesn't really change with LLMs, it just makes the end game much more explicit. The goal was always to capture the intellectual output of open source contributors for private profit. Always. Now that it's actually happening, who's really shocked?
To add: this calculus works fine in early organizations. In larger organizations, the risk of malicious code entering your systems is much greater. So I think FOSS benefits small companies more than large ones, which seems like a good thing.
This. Mammals are (generally) K-selected species, meaning they invest heavily in raising their young. In the absence of natural pressures, mammals reproduce like crazy until they bump up against the environment's carrying capacity. Humans are not unique at all in our tendency to expand! It's just that we have opposable thumbs and language and tools to help us boost the carrying capacity.
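For the curious, this is just the standard logistic-growth picture: a population N grows as dN/dt = rN(1 − N/K), slowing to zero as it approaches the carrying capacity K. Thumbs, language, and tools don't change the dynamic; they just keep raising K.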