About my not having a software background: I've been a network/security/systems engineer/architect/consultant for 25 years, but never done dev work. I can read and follow code well enough to debug things, but I've never had the knack for learning languages and writing my own. Never really had to, but I wanted to.
This now lets me apply my IT and business experience toward making bespoke code for my own uses, such as firewall config parsers specialized for wacky vendor CLIs, and filling gaps in automation where there are no good vendor solutions for a given task. I started building my MCP server to enable me to use agents to interact with the outside world: invoking automation for firewalls, switches, routers, servers, even home automation ideally. I've been successful so far in doing so, still without having to know any code.
I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines (using ruff) to keep it from going too wonky, and I'm actually working it up to a state where I can call it a 1.0. I plan to run a full audit cycle against it: security audits, performance testing, and whatever else I can to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or not once I put it out there.
Even NOT being a developer, I understand the need to apply best practices, and after watching a lot of really terrible developers adjacent to me make a living over the years, I think I can offer a thing or two on avoiding that.
I started using claude-code, but found it pretty useless without any ability to talk to other chats. Claude recommended I make my own MCP server, so I did. I built a wrapper script that invokes anthropic's sandbox-runtime toolkit to run claude-code in a project under tmux, and my MCP server lets desktop talk to tmux. Later I built in my own filesystem tools, and now it just spawns konsole sessions for itself, invoking workers that read tasks it drops into my filesystem; it points claude-code at a task and runs until the code is committed, then I have the PM in desktop verify it and do the final push/PR/merge. I use an approval system in a GUI that tells me when claude is trying to use something, and I can set an approve-for period to let it do its thang.
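To give a rough idea of the shape of this (a simplified sketch, not the actual server; it assumes the official mcp Python SDK and the plain tmux CLI, and the tool/pane names are made up), the core is just an MCP tool that sends keys into a tmux pane and reads the pane contents back:

    # Simplified sketch of a "desktop talks to tmux" MCP tool. Assumes the
    # official mcp Python SDK (FastMCP) and the tmux CLI; names are illustrative.
    import subprocess
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("tmux-bridge")

    @mcp.tool()
    def run_in_pane(target: str, command: str) -> str:
        """Send a shell command to a tmux pane and return the pane's visible text."""
        subprocess.run(["tmux", "send-keys", "-t", target, command, "Enter"], check=True)
        captured = subprocess.run(["tmux", "capture-pane", "-t", target, "-p"],
                                  capture_output=True, text=True, check=True)
        return captured.stdout

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default, so a desktop client can spawn it

The task files and approval layer sit on top of that, but the bridge itself is about that small.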
Now I've been using it to build on my MCP server, which I now call endpoint-mcp-server (coming soon to a github near you). I've modularized it with plugins, added lots more features, and built a more versatile Qt6 GUI with advanced workspace panels and widgets.
At least I was until Claude started crapping the bed lately.
My normal day job is IT consulting, mostly network/security, so I'm using it largely to connect to my workers, sandboxed or not, to write me scripts, modify configurations, and so on. I also built an ansible/terraform integration into my MCP server so I can start tasking them with direct automation through it as well.
The whole thing I needed was to let AI reach out and touch things, to be my hands essentially. That's why I built my tmux/worker system, and why I built out an xdg-portal integration as a PoC to let it take screenshots and, soon, interact with my desktop.
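For the screenshot piece, the xdg-portal side boils down to one D-Bus call, with the file URI coming back asynchronously on a Response signal. A rough sketch, not the actual integration, assuming dbus-python and PyGObject are available:

    # Ask xdg-desktop-portal for a screenshot over D-Bus. The portal replies via
    # a Response signal on a Request object rather than as a direct return value.
    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    portal = bus.get_object("org.freedesktop.portal.Desktop",
                            "/org/freedesktop/portal/desktop")
    screenshot = dbus.Interface(portal, "org.freedesktop.portal.Screenshot")
    loop = GLib.MainLoop()

    def on_response(response, results):
        if response == 0:               # 0 = success
            print("screenshot saved at", results["uri"])
        loop.quit()

    bus.add_signal_receiver(on_response, signal_name="Response",
                            dbus_interface="org.freedesktop.portal.Request")
    # empty parent_window handle, non-interactive capture
    screenshot.Screenshot("", {"interactive": dbus.Boolean(False, variant_level=1)})
    loop.run()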
I could let it just start logging into devices and modifying configs, but it's pretty dumb about stuff like modifying fortigate configurations: at times what it thinks it should do doesn't match what the CLI actually lets it do, so I have to proof much of it. That's why I'm building it to run ansible/terraform jobs instead, using the frameworks the vendors provide for direct configuration, so config changes are as atomic as the vendor implementations allow.
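For illustration, the kind of wrapper behind an "ansible job" tool can be as simple as running the playbook in check mode first so the diff gets proofed before anything is applied. A sketch, not the actual integration; the playbook and inventory paths are placeholders:

    # Run an ansible playbook in check mode (dry run) or for real, and hand the
    # output back so a human (or the PM session) can review the diff first.
    import subprocess

    def run_playbook(playbook: str, inventory: str, check_only: bool = True) -> str:
        cmd = ["ansible-playbook", "-i", inventory, playbook, "--diff"]
        if check_only:
            cmd.append("--check")  # show what would change without touching devices
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout + result.stderr

    # Preview first; apply with check_only=False only after the diff is approved.
    print(run_playbook("fortigate_policy.yml", "inventory/lab.ini"))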
My use is considerably simpler than GP's, but I use it anytime I get bogged down in the details and lose my way: I just have Claude handle that bit of code and move on. It's also good for any block of code that breaks often as the program evolves; Claude has much better foresight than I do, so I replace that code with a prompt.
I enjoy programming, but it's not my main interest and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.
Lovely! Edit: you might want to consider adding a limiter to the output. It makes a lovely crackling effect, but it's easy to flatline the output until it fizzles out completely, and it'd be more interesting to hear the denser textures.
I added a limiter to the output and an envelope to my synth object, but after a bunch of experimenting with different settings for each, the flatlining effect was still there. Throttling beyond the note+release's duration did fix it, but that also removed the layering effect of overlapping chords; I thought some of those were interesting and wanted to keep them. The last thing I tried was setting the attack to 0.01, and I think that fixed the flatlining issue?
> My wife double checked because she still "doesn't trust AI", but all her verification almost 100% matched Claude's conclusions
She's right not to trust it for something like this. The "almost 100%" is the problem (also consider that you're sending personal data to anthropic without permission), especially when it might mean discarding someone's resume, something that could have a significant impact on a person's life.
This is a pretty wild claim, so I think it is fair to be critical of the examples given:
- Driftless sounds like it might be better as a claude code skill or hook
- Deploycast is an LLM summarization service
- Triage also seems like it might be more effective inside CC as a skill or hook
In other words, all these projects are tooling around LLM API calls.
> What was valuable was the commitment. The grit. The planning, the technical prowess, the unwavering ability to think night and day about a product, a problem space, incessantly obsessing, unsatisfied until you had some semblance of a working solution. It took hustle, brain power, studying, iteration, failures.
That isn't going to go away. Here's another idea: a discussion tool for audio workflows. Pre-LLMs the difficult part of something like this was never code generation.
You really know what a good interface should be like; this is really inspiring. So is the design of everything I've seen on your website!
I won't pile on to what everyone else has said about the book connections / AI part of this (though I agree that part isn't really the interesting or useful thing about your project), but I think a walk-through of how you approach UI design would be very interesting!
Same thing happens to me in long enough sessions in xterm. Anecdotally it's pretty much guaranteed if I continue a session close to the point of context compacting, or if the context suddenly expands with some tool call.
Edit: for a while I thought this was by design since it was a very visceral / graphical way to feel that you're hitting the edge of context and should probably end the session.
If I get to the flicker point I generally start a new session. The flicker point always happens, though, from what I've observed.
> if you do like to discover new music, self-hosting just isn't an option
Sure it is. Music discovery via algorithmic services is not the only way. There's radio, talking to people who have similar interests, reading interviews with musicians who talk about other music they like, browsing selections at the library, reading books about music or musicians, even just reading the liner notes for an album, noticing some players you like and finding other things they've worked on, and on and on and on. It doesn't have to be high effort; it's not instant, but it works great.
What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.