(Don't take this as an attack on you personally, just on the sentiment you're expressing in your comment)
The attitude you present here has become my litmus test for who has actually given the agents a thorough shake rather than just a cursory glance. These agents are tools, not magic (even though they appear to be magic when they are really humming). They require operator skill to corral them. They need tweaking and iteration, often from people already skilled in the kinds of software they are trying to write. It's only then that you get the true potential, and it's only then you realize just how much more productive you can be _because of them_, not _in spite of them_. The tools are imperfect, and there are a lot of rough edges that a skilled operator can sand down into huge boons for themselves, rather than getting cut and saying "the tools suck".
It's very much like Google. Anyone can google anything. But at a certain point, you need to understand _how to google_ to get good results rather than just slapping in any random phrase and hoping the top 3 results are magically going to answer you. And sometimes you need to combine the information from multiple results to get what you're going for.
> It's very much like Google. Anyone can google anything. But at a certain point, you need to understand _how to google_ to get good results rather than just slapping in any random phrase and hoping the top 3 results are magically going to answer you. And sometimes you need to combine the information from multiple results to get what you're going for.
Lmao, and just as with Google, they'll eventually optimize it for ad delivery and it will become shit.
So now you need to test them regularly. And order new ones when they're not holding a charge any more. Then power down the server, unplug it, pull the UPS out, swap batteries, etc.
Then even when I think I've got the UPS automatic shutdown scripts and drivers finally working just right under linux, a routine version upgrade breaks it all for some reason and I'm spending another 30 minutes reading through obscure docs and running tests until it works again.
Not sure what to say then. I run NixOS on ~15 different VMs / mini PCs, a total of I guess 6 physical machines. Never had to deal with a UPS battery dying, and haven't had to do anything to address NUT breaking. I broadcast NUT via a Synology NAS though, so the only direct client of the UPS status is the NAS. I've never once had an issue in the ~5 years I've had it set up like this.
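For anyone curious, the client side of that is tiny. On each machine it's roughly this (hostname and credentials are made up, and paths vary by distro, so treat it as a sketch):

    # /etc/nut/nut.conf
    MODE=netclient

    # /etc/nut/upsmon.conf
    MONITOR ups@nas.local 1 monuser secretpass slave
    SHUTDOWNCMD "/sbin/shutdown -h +0"

The NAS runs the actual UPS driver and serves the status over the network; everything else just watches it and shuts itself down when the battery gets low.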
My home server doesn't need to be high availability, and the BIOS is set to restore whatever state it was in prior to power loss. I don't have a UPS. However, we were recently hit with a telco outage while visiting family out of town. As far as I can tell there wasn't a power outage, but it took a hard reboot of the modem to get connectivity back. Frustrating, because it meant no checking home automation/security and of course no access to the servers. I'm not at a point where my homelab is important enough that I would invest in a redundant WAN though.
I've also worked in environments where the most pragmatic solution was to issue a reboot periodically and accept the minute or two of (external) downtime. Our problem is probably down to T-Mobile's lousy consumer hardware.
As another commenter said (but got downvoted to oblivion for some reason), it's not really about uptime for the homelab, it's about graceful shutdown/restart. And there are well-defined protocols for it (look up Network UPS Tools, aka NUT).
> it's not really about uptime for the homelab, it's about graceful shutdown/restart.
These are different requirements. The issue I described was not a power outage and having a well managed UPS wouldn't have made a difference. Nothing shut down, but we lost 5G in the area and T-Mobile's modem is janky. My point is that it's another edge case that you need to consider when self hosting, because all the remote management and PDUs in the world can't save you if you can't log into the system.
Of course, there's always the option where all you need is a smart plug and a script/Home Assistant routine which pings every now and again. There are enterprise versions of this, but simple and cheap works for me.
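A minimal sketch of that script, assuming a Tasmota-flashed plug feeding the modem (the IPs and the three-strikes threshold are made up, adjust to taste):

    import subprocess
    import time
    import urllib.request

    PLUG = "http://192.168.1.50"  # hypothetical Tasmota smart plug on the modem
    TARGET = "1.1.1.1"            # something on the far side of the modem

    def online() -> bool:
        # single ICMP echo with a 5 second timeout
        return subprocess.run(
            ["ping", "-c", "1", "-W", "5", TARGET],
            capture_output=True,
        ).returncode == 0

    while True:
        # require several consecutive failures before power cycling
        failures = 0
        for _ in range(3):
            if not online():
                failures += 1
            time.sleep(10)
        if failures == 3:
            # hard power cycle via Tasmota's HTTP API
            urllib.request.urlopen(PLUG + "/cm?cmnd=Power%20Off")
            time.sleep(30)
            urllib.request.urlopen(PLUG + "/cm?cmnd=Power%20On")
            time.sleep(300)  # give the modem time to boot before checking again
        else:
            time.sleep(60)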
Power outages here tend to last an hour or more. A UPS doesn't last forever, and depending on how much home compute you have, might not last long enough for anything more than a brief outage. A UPS doesn't magically solve things. Maybe you need a home generator to handle extended outages...
How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
I host a lot of stuff, but nextcloud to me is photo sync, not business. I can wait til I'm home to turn the server back on. It's not a bottomless pit for me, but I don't really care if it has downtime.
Fairly frequently, 6kVA UPSs come up for sale locally to me, for dirt cheap (<$400). Yes, they're used, and yes, they'll need ~$500 worth of batteries immediately, but they will run a "normal" homelab for multiple hours. Mine will keep my 2.5kW rack running for at least 15 minutes - if your load is more like 250W (much more "normal" imo) that'll translate to around 2 hours of runtime.
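Back-of-envelope, assuming runtime scales roughly linearly with load (it doesn't quite, but it's close enough at these numbers):

    2,500 W x 0.25 h ~= 625 Wh usable capacity
    625 Wh / 250 W   ~= 2.5 h, call it ~2 h after inverter overhead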
Is it perfect? No, but it's more than enough to cover most brief outages, and also more than enough to allow you to shut down everything you're running gracefully, after you used it for a couple hours.
Major caveat, you'll need a 240V supply, and these guys are 6U, so not exactly tiny. If you're willing to spend a bit more money though, a smaller UPS with external battery packs is the easy plug-and-play option.
> How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
At the end of the day, it's very hard to argue you need perfect uptime in an extended outage (and I say this as someone with a 10kW generator and said 6kVA UPS). I need power to run my sump pumps, but that's about it - if power's been out for 12-18 hours, you better believe I'm shutting down the rack, because it's costing me a crap ton of money to keep running on fossil fuels. And in the two instances of extended power outages I've dealt with, I haven't missed it - believe it or not, there's usually more important things to worry about than your Nextcloud uptime when your power's been out for 48 hours. Like "huh, that ice-covered tree limb is really starting to get close to my roof."
This is a great example of how the homelab bottomless pit becomes normalized.
Rewiring the house for 240V supply and spending $400+500 to refurbish a second-hand UPS to keep the 2500W rack running for 15 minutes?
And then there's the electricity costs of running a 2.5kW load, and then cooling costs associated with getting that much heat out of the house constantly. That's like a space heater and a half running constantly.
Late reply I know, but I wanted to clear up that I don't want to normalize a 2.5kW homelab. Usually when talking to people about it I refer to it as "insane." But having an absolutely insane amount of compute and RAM is fun (and I personally find it genuinely useful for learning, in particular in terms of engineering for massive concurrency), and I can afford the hydro, so whatever. To match the raw compute and RAM with current gen hardware, you only need maybe 500W; you'll just be spending a shitload of money up front, instead of over time on hydro. (To match my current lab's utilized performance, I'd need at least 2 servers, one with a ~Threadripper 7955WX and 256GB of DDR5, and another with an Epyc 9475F and 1TB of DDR5. That would put me somewhere in the neighborhood of $35k? Ish?) Costs me about $115/month to run the rack right now (cheaper than my hot tub), and cooling is free in the winter (6~7 months of the year), so the break even is loooooong term. And realistically, $100ish a month isn't crazy, considering I self host basically everything; the only services I pay for are my VPS to run my mail server, and AWS S3 Glacier for backup-of-last-resort.
Again, not trying to normalize 2500W, most people don’t need that (and I don’t really either), but I do make good use of it.
As for “rewiring the house for 240V”, every house* in Canada and the US is delivered “split-phase” 240V (i.e. 240V with a centre tapped neutral, providing 120V between either end of the 240V phase and neutral or 240V from phase to phase), and many appliances are 240V (dryers, water heaters, stove/ranges/ovens, air conditioners). If you have a space free in your breaker panel, adding a 240V 30A circuit should cost less than $1k if you pay an electrician, and can be DIY’d for like $150 max unless you have an ancient panel that requires rare/specialty breakers or the run is very long. It’s far from the most expensive part of a homelab unless you’re running literally just a raspberry pi or something.
*barring an incredibly small exceptional percentage
I agree with you. My use case doesn't call for perfect uptime. Sounds like yours doesn't either (though you've got a pretty deep pit yourself, if 240V and the generator weren't part of the sump plans and the rack just got to ride along (that's how it worked for me)).
But that doesn't mean it's for us to say that someone else's use case is wrong. Some people self host a Nextcloud instance and offer access to it to friends and family. What if someone else is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.
My point was simply that different people have different use cases and different needs, and it definitely can become a bottomless pit if you let it.
For me, it's IPMI, PiKVM, TinyPilot: any sort of remote management interface that can power a device on/off and that auto powers on when power is available, so you can reasonably always access it. Having THAT on the UPS means you can power down the compute remotely and power it back up remotely. You never have to send someone to reboot your rack while you're out of town, and you don't shred your UPS battery in minutes by having the server auto boot the moment power is available. Eliminates reliance on other people while you're not home :tada:
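Concretely, with IPMI that's a couple of ipmitool invocations from wherever you are (BMC address and credentials made up):

    # ask the OS for a graceful ACPI shutdown, then confirm
    ipmitool -I lanplus -H 10.0.0.20 -U admin -P hunter2 chassis power soft
    ipmitool -I lanplus -H 10.0.0.20 -U admin -P hunter2 chassis power status

    # power back up once mains is stable
    ipmitool -I lanplus -H 10.0.0.20 -U admin -P hunter2 chassis power on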
Again, not quite a bottomless pit, but there are constant layers of complexity if you want to get it right.
> though you've got a pretty deep pit yourself, if 240V and the generator weren't part of the sump plans and the rack just got to ride along (that's how it worked for me)
Generator was a requirement for the sump pump. My house was basically built on a swamp, so an hour in spring without it means water in the basement. Now admittedly, I spent an extra couple hundred bucks to get a 240V generator with higher capacity than strictly necessary, but it was also roughly the minimum amount of money to spend to get one that can run on gasoline or propane, which was a requirement for me. 240V to the rack cost me $45, most of that cost being the breaker (rack is right next to the panel).
> What if someone else is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.
I host roughly a dozen services that have around 25 users at the moment, but I charge $0 for them. I make it very clear: I have a petabyte of storage and oodles of compute, feel free to use your slice, and I’ll do my best to keep everything up and available - for my own sake (and I’ve maintained over 3 nines for 8 years!). But you as a user get no guarantee of uptime or availability, ever, and while I try very hard to backup important data (onsite, offsite split to multiple locations, and AWS S3 glacier), if I lose your data, sucks to suck. So far most people are pretty happy with this arrangement.
I couldn’t possibly fathom worrying about other people’s access to my homelab during a power outage. If I wanted to care, I’d charge for access, and I’d have a standby generator, multiple WANs, a more resilient remote KVM setup, etc. But then I’d be running a business - just a really shitty one that takes tons of my time and makes me little money. And is very illegal (for some of the services I make available, at least), instead of only slightly illegal.
The problem is that final decisions tend to be made in the last 30 seconds of a meeting. If you're a manager with a stake in the outcome, you can't leave the meeting until you've ensured that the outcome works for you. Leaving 5 min early is often simply not an option. While arriving 5 minutes late is. It's not an ego thing -- it's the fact that meeting leaders often let meetings run long.
Exactly my opinion. I'm pretty pragmatic and open minded, though seasoned enough that I don't stay on the bleeding edge. I became a convert in October, and I think the most recent Sonnet/Opus models truly changed the calculus of "viable/usable", so that we have now crossed into the age of AI.
We are going to see the rest of the industry come along kicking and screaming over the next calendar year, and that's when the ball is going to start truly rolling.
Yes, Anthropic's product design is truly bad, as is their product strategy (hey, I know you just subscribed to Claude, but that isn't Claude Code, which you need to separately subscribe to, but you get access to Claude Code if you subscribe to a certain tier of Claude, but not the other way around. Also, you need to subscribe to Claude Code with an API key and not usage-based pricing, or else you can't use Claude Code in certain ways. And I know you have Claude and Claude Code access, but actually you can't use Claude Code in Claude, sorry)
I think that's comparing something different. I've seen the one-day vibe code UI tool things, which are neat, but it feels like people miss the part that if it's that easy now, it's not as valuable as it was in the past.
If you can sell it in the meantime, go for it and good for you, but it doesn't feel like that business model will stay around if anyone can prompt it themselves.
I think the other issue is that the leading toolchain for getting real work done (Claude Code) is also lacking multimodal generation, specifically imagegen. This makes design work more nuanced/technical. And in general, there are a lot of end-product UI/UX issues that generally require the operator to know their way around products. So while we are truly in a boom of really useful personalized software toolchains (and a new TUI product comes out every day), it will take a while for truly polished B2C products to ramp up. I guarantee 2026 sees a surge.
Yes, my biggest current gripe is that Infuse is a much better client than the first-party app. Otherwise, I'm very happy with it, even if it lacks some of Plex's polish.
Yeah, that's exactly why I'm on it. The frontend is fine, maybe a wash compared to Swiftfin last time I tried it out. But for my library, I had frequent issues with codec support on the native client vs zero on Infuse.
Transcoding generally isn't about raw power; it's really just a function of having hardware transcoding support. Mini PCs with recent-ish Intel chips handle it with ease for a couple hundred dollars all in (pre-DRAM price increases, at least).
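If you want to sanity check a box before committing, vainfo lists what the iGPU can decode/encode, and a manual VAAPI transcode with ffmpeg looks something like this (filenames made up, device path is the usual default):

    ffmpeg -vaapi_device /dev/dri/renderD128 \
        -hwaccel vaapi -hwaccel_output_format vaapi \
        -i input.mkv -c:v h264_vaapi -b:v 4M -c:a copy output.mkv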
Yeah, it's on me for reusing an industrial Mini-ITX motherboard I had leftover. The i5-4570 / 16 GB DDR3 is no slouch and is perfectly adequate for everything else this machine needs to do (download torrents, serve media, handle some backups, run a few minecraft servers), but I'm a generation or two too early for the right transcoding support, and I can't even patch over it with the PCIe slot as I'm using that to give this machine dual NVMe drives.
Given the state of RAM pricing, it's probably cheaper at this point to just buy an Apple TV or the like to put on the TV end. Eventually the NAS can go to an AM4 build when I upgrade my workstation to AM5 and the CPU and RAM from that are freed up.