I know a guy who retired from the Air Force and got 100% disability that included tinnitus, PTSD, and something about his joints. This person was an aircraft maintainer and never saw combat, although he was deployed a few times. The lady evaluating his case really hooked him up; he brags about it all the time. He gets retirement and disability.
There are subreddits, Discords, and even companies that assist vets in working the system, many of whom never got close to deployment and were never combat arms. If you're persistent you'll get paid. As a combat vet it makes me sick.
Good friend and former colleague has 100% disability and coarsely brags about it.
He has no combat deployments. He has a home gym and rolls BJJ 6 days a week. He has a government- (taxpayer-) paid Bachelor's and Master's in Comp. Sci. and makes six figures working as a civilian DOD employee.
So I’m not sure in what meaningful sense of the term he’s “100% disabled” but he’s enjoying his salary so good for him?
Both this and the earlier post emphasize the lack of combat deployments in the examples. I should think disability would cover any service-related injury.
It does; I'm just emphasizing the lack of material injury. Spending 25 years in the military in an administrative office role and going "my hearing is less good, I have carpal tunnel, I have sleep problems, now give me $4,000" seems rather off when you're otherwise a completely healthy, normal human being.
After all, it's not as if normal people in normal society lack these conditions as they age. Connecting them to the service is spurious and often fraudulent. By all means, let's take care of the folks with serious physical and mental injuries who cannot provide for themselves, but let's be real: our system is heavily gamed and abused.
I am not sure it's "not allowed". The abstract is interesting and thought-provoking.
I would love to read the book, but I personally don't have time for it, so I most likely would not pay for it.
There is a danger in thinking of our "meat machines" in purely mechanical terms, so my first interest is whether whatever model is being proposed can actually be adhered to. Or maybe an "AI Copilot" can implement such a framework and assist us mere humans in attaining our goals.
You hit the exact tension I struggled with while writing this.
To give a bit of context: growing up in Poland, I found that without formalizing my goals, I was paralyzed. I literally couldn't "think" clearly about my future because the variables were too undefined. I wrote this book primarily as my own "antifragility toolbox"—using the language I speak best (math and systems) to debug my own life constraints.
Re: The AI Copilot — that is exactly the dream. A dashboard that monitors inputs/outputs and warns: "Variance Instability Detected" before the biological system actually crashes. I am actually prototyping a small Python script for this right now. If it works, I'll post it here.
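If it helps to picture it, here is a minimal sketch of what I'm prototyping. The assumptions are all mine for now: the input is a single daily metric (hours of sleep, say), and "variance instability" just means the rolling variance drifting well past its long-run baseline; the window size and threshold are placeholders.

    from collections import deque
    from statistics import pvariance

    WINDOW = 14        # days in the rolling window (placeholder)
    THRESHOLD = 2.0    # alert when recent variance > 2x the baseline (placeholder)

    def monitor(samples):
        """samples: iterable of daily readings, e.g. hours of sleep."""
        window = deque(maxlen=WINDOW)
        history = []
        for day, value in enumerate(samples, start=1):
            window.append(value)
            history.append(value)
            if len(history) < 2 * WINDOW:
                continue  # not enough data to form a baseline yet
            recent = pvariance(window)               # variance of the last WINDOW days
            baseline = pvariance(history[:-WINDOW])  # variance of everything before that
            if baseline > 0 and recent > THRESHOLD * baseline:
                print(f"Day {day}: Variance Instability Detected "
                      f"(recent {recent:.2f} vs baseline {baseline:.2f})")

    # Toy example: stable sleep that turns erratic
    monitor([7.5, 7.0, 7.2, 7.4] * 10 + [4.0, 9.5, 3.5, 10.0] * 5)

The real version would need more than one signal and a smarter baseline, but the control loop has the same shape.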
Fair point, and I appreciate the check.
My intent wasn't to treat this as a pure sales channel, but to get feedback on the framework itself. I'm trying to map Control Theory to psychology (specifically Ashby's Law), and I know this community is the best place to find holes in that kind of logic.
That is why I made the first chapter (which defines the core Topology and math) free/open without an email gate. I am genuinely more interested in the critique of the model than the sales.
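For anyone who hasn't run into it: the quantitative form of Ashby's Law of Requisite Variety is usually summarized as H(O) >= H(D) - H(R), i.e. the uncertainty left in the outcomes is bounded below by the variety of the disturbances minus the variety of responses the regulator can deploy. "Only variety can absorb variety" is the part I'm trying to map onto personal goal systems.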
Assembly was a "high level" language when it was new -- it was far more abstract than entering in raw bytes. C was considered high level later on too, even though these days it is seen as "low level" -- everything is relative to what else is out there.
The same pattern held through the early days of "high level" languages that were compiled to assembly, and then the early days of higher level languages that were interpreted.
Read the famous "Story of Mel" [1] about Mel Kaye, who refused to use optimizing assemblers in the late 1950s because "you never know where they are going to put things". Even in the 1980s you used to find people like that.
I don't think that does count against the narrative? The narrative is just that each time we've moved up the abstraction chain in generating code, there have been people who have been skeptical of the new level of abstraction. I would say that it's usually the case that highly skilled operators at the previous level remain more effective than the new adopters of the next level. But what ends up mattering more in the long run is that the higher level of abstraction enables a lot more people to get started and reach a basic level of capability. This is exactly what's happening now! Lots of experienced programmers are not embracing these tools, or are, but are still more effective just writing code. But way more people can get into "vibe coding" with some basic level of success, and that opens up new possibilities.
The narrative is that non-LLM adopters will be left behind, lose their jobs, are Luddites, yadda yadda yadda because they are not moving up the abstraction layers by adopting LLMs to improve their output. There is no point in the timeframe of the story at which Mel would have benefitted from a move to a higher abstraction level by adopting the optimizing compiler because its output will always be drastically inferior to his own using his native expertise.
That's not the narrative in this thread. That's a broader narrative than the one in this thread.
And yes, as I said, the point is not that Mel would benefit, it's that each time a new higher level of abstraction comes onto the scene, it is accessible to more people than the previous level. This was the pattern with machine code to symbolic assembly, it was the pattern with assembly to compiled languages, with higher level languages, and now with "prompting".
The comment I originally replied to implied that this current new abstraction layer is totally different than all the previous ones, and all I said is that I don't think so, I think the comparison is indeed apt. Part of that pattern is that a lot of new people can adopt this new layer of abstraction, even while many people who already know how to program are likely to remain more effective without it.
Overall codebase size vs. context matters less when you set it up as a microservices-style architecture from the start.
I just split it into boundaries that make sense to me, get the LLM to make a quick cheat sheet about the API, and then feed that into adjacent modules. It doesn't need to know everything about all of it to make changes if you've got a grip on the big picture and the boundaries are somewhat sane.
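To make "cheat sheet" concrete, here is roughly the shape of what I mean, sketched for a Python codebase (the module path at the bottom is made up): the public surface of one module, which then gets pasted into the prompt when the LLM works on a neighbouring module.

    import ast
    from pathlib import Path

    def cheat_sheet(path: str) -> str:
        """Public surface of a module: top-level defs/classes plus first docstring line."""
        tree = ast.parse(Path(path).read_text())
        lines = [f"API cheat sheet for {path}:"]
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                kind = "def"
            elif isinstance(node, ast.ClassDef):
                kind = "class"
            else:
                continue
            if node.name.startswith("_"):
                continue  # skip private definitions
            doc = ast.get_docstring(node)
            summary = f" -- {doc.splitlines()[0]}" if doc else ""
            lines.append(f"  {kind} {node.name}{summary}")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(cheat_sheet("billing/service.py"))  # hypothetical module path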
Except it doesn't really work for humans, and it won't work for LLMs either.
If you use too many microservices, you will get global state, race conditions, and much more complex failure models again, and no human/LLM can effectively reason about those. We somewhat have tools to do that in the case of monoliths, but if one gets to this point with microservices, it's game over.
I work with multiple monoliths that span anywhere from 100k to 500k lines of code, in a non-mainstream language (Elixir). Opus 4.5 crushes everything I throw at it: complex bugs, extending existing features, adding new features in a way that matches conventions, refactors, migrations... The only time it struggles is if my instructions are unclear or incomplete. For example, if I ask it to fix a bug but don't specify that such-and-such should continue to work the way it does due to an undocumented business requirement, Opus might mess that up. But I consider that normal because a human developer would also fail at it.
I once had an apartment in an old building. The building had high ceilings and equally high wood-frame windows. The windows were drafty and had visible gaps to the outside. As winter approached and nights grew colder, I set out to cover the windows with plastic film (common here for this purpose).
While preparing one window in the bedroom I discovered a silken patch like a miniature of the one depicted in this article. I used my cleaning rag to wipe it away, thinking any inhabitants had long since moved on. To my surprise, a wisp of tiny spiders scurried away from my swipe, disappearing into crevices, the baseboard, and the carpet. Startled and not seeing any to kill, I bid them farewell, in my mind assuring myself they had moved on.
That same day or the next, a cold wave came through and I lay awake in bed listening to the plastic I had applied rustle from the wind. The window gaps were bigger than I'd thought. Falling into a fitful sleep under not-quite-adequate blankets, I suddenly felt a sharp pain in my lower leg! I jumped out of bed, turned on a light, and found upon examination three red punctures on my left calf. Recalling the spiders from the day before, I shook out my blankets and bed sheets. I checked below the bed. Nothing; I never found the culprits.
After sleeping that night on the couch, I awoke late the next day. I felt feverish and disoriented. The wounds on my calf had become inflamed. The cold in the apartment added to my discomfort.
The next few days were a blur. I missed work and the few social engagements I had planned. Eventually the wounds began to heal, but I was still bone cold and the light from the windows hurt my eyes. Winter had set in, and the plastic I'd applied to the windows had detached from the wind, allowing icy drafts into the apartment. I diligently applied another layer of plastic on the windows, this time using packing tape to secure the corners!
It was a harsh winter, and I repeated this process several times until the windows were opaque and, along with the shades, allowed very little light through.
One day as I sat in the dark slowly eating my meal there was a knock on the door. It was my close friend from work wondering what had happened to me. I must have been a sight judging from his startled appearance.
Summer came and I emerged occasionally to acquire food and other necessities only to scurry back home when the outside became too overwhelming. I eventually found remote work, and here I am today in my cold dark apartment with high ceilings and drafty windows.
Note: if you made it to the end, thanks for indulging me. This is based on a real apartment, windows, and spiders!
At some point the LLM ingested a few open-source NES emulators and many articles on their architecture. So I question the LLM creativity involved with these types of examples. Probably also for DSPs.
Right, the amount of hallucinated response data I see at work using any of these leading models is pretty staggering. So anytime I see one of these "AI created a 100% faithful ___" type posts that don't have detailed testing information, I laugh. Without that, this is v0 and only about 5% of the effort.
> I question the LLM creativity involved with these types of examples.
Indeed, but to be fair I'm not sure anybody claimed much "creativity", only that it worked... but that itself is still problematic. What does it mean to claim it even managed to implement an alternative if we don't have an easy way to verify it?
This is where garbage-collected languages shine.