> With multiple git ops pipelines however, they start to get in the way of progress - especially when they need to be joined in series
Definitely, that's why systems like Zuul exist.
They're esoteric and require a lot of engineering discipline and patience - but in my experience most people who reach for GitOps aren't doing it for a sense of "everything as code" (for the auditability and theoretical reproducibility of it); it's because they think it will allow them to go faster, and a tool like Zuul is hard to learn and will intentionally slow you down.
Would you mind elaborating on this, describing the differences and how tools like Zuul introduce degrees of friction that result in smoother operation and pipelines?
I know my phrasing may come off wrong, I apologize for that. But I'm asking genuinely; I've only ever seen Zuul in the wild in the Red Hat and OpenStack ecosystems.
Right, so Zuul is properly interesting if you're dealing with multi-repo setups and want to test changes across them before they merge; that's the key bit that something like GitLab CI doesn't really do.
The main thing with Zuul is speculative execution. Say you've got a queue of patches waiting to merge across different repos. Zuul will optimistically test each patch as if all the patches ahead of it in the queue have already merged.
So if you've got patches A, B, and C queued up, Zuul tests:
* A on its own
* B with A already applied
* C with both A and B applied
If something fails, Zuul rewinds and retests without the failing patch. This means you're not waiting for A to fully merge before you can even start testing B - massive time saver when you've got lots of changes flowing through.
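Just to make the mechanics concrete, here's a toy sketch of that rewind-and-retest logic in shell - the branch names and `./run-tests.sh` harness are made up, and real Zuul evaluates these speculative states in parallel across its workers, which is where the actual speed-up comes from:

```sh
# Toy model of a dependent (gate) queue - not Zuul itself.
# Each queued change is tested on top of the still-passing changes
# ahead of it; a failure evicts that change and later changes are
# simply tested without it.
queue="change-A change-B change-C"   # hypothetical branch names on the remote
accepted=""

for change in $queue; do
    git checkout -q --detach origin/main
    ok=1
    for c in $accepted $change; do
        # Rebuild the speculative state: every still-passing change
        # ahead of this one, then the change itself.
        git merge -q --no-ff "origin/$c" || { git merge --abort; ok=0; break; }
    done
    if [ "$ok" -eq 1 ] && ./run-tests.sh; then
        accepted="$accepted $change"   # stays in everyone else's speculative future
    fi
done

echo "safe to merge, in order:$accepted"
```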
With GitLab CI, you're basically testing each MR in isolation against the current state of the target branch. If you've got interdependent changes across repos, you end up with this annoying pattern:
* Merge change in repo A
* Wait for it to land
* Now test change in repo B that depends on it
* Merge that
* Now test change in repo C...
It's serial and slow, and you find out about problems late. If change C reveals an issue with change A, you've already merged A ages ago.
Zuul also has this concept of cross-repo dependencies built in. You can explicitly say "this patch in repo A depends on that patch in repo B" and Zuul will test them together. GitLab CI can sort of hack this together with trigger pipelines and artifacts, but it's not the same thing... you're still not getting that speculative testing across the dependency tree.
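Concretely, you declare those links with a `Depends-On:` footer in the commit message (or pull/merge request description) pointing at the other change's URL - the review URL below is just a placeholder:

```sh
# Hypothetical change in repo A that needs an unmerged change in repo B.
git commit -m "Switch to the new client API" \
           -m "Depends-On: https://review.example.org/c/repo-b/+/12345"
```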
The trade-off is that Zuul is significantly more complex to set up and run. It's designed for the OpenStack-style workflow where you've got dozens of repos and hundreds of patches in flight. For a single repo or even a handful of loosely-coupled repos, GitLab CI (and its ilk) is probably fine and much simpler. But once you hit that multi-repo, high-velocity scenario, Zuul starts to make proper sense. Yet nobody's using it except hardcore foundational infrastructure providers.
> Right, so Zuul is properly interesting if you're dealing with multi-repo setups and want to test changes across them before they merge; that's the key bit that something like GitLab CI doesn't really do.
I'm not sure about that. Even when we ignore plain old commits pushed by pipeline jobs, GitLab does support multi-project pipelines.
GitLab's multi-project pipelines trigger downstream jobs, but you're still testing each MR against the current merged state of dependencies.
Zuul's whole thing is testing unmerged changes together.
You've got MR A in repo 1, MR B in repo 2 that needs A, and MR C in repo 3 that needs B... all unmerged. Zuul lets you declare these dependencies and tests A+B+C as a unit before anything merges. Plus it speculatively applies queued changes so you're not serialising the whole lot.
GitLab has the mechanism to connect repos, but not the workflow for testing a DAG of unmerged interdependent changes. You'd need to manually coordinate checking out specific MR branches together, which is exactly the faff Zuul sorts out.
> The whole premise of opengitops is heavily reliant on kubernetes.
There's indeed a fair degree of short-sightedness in some GitOps proponents, who conflate their own personal implementation with the one true GitOps.
Back in the real world, the bulk of cloud infrastructure covers resources that go well beyond applying changes to a pre-baked Kubernetes cluster. Any service running on the likes of AWS/Google Cloud/Azure/etc requires configuring plenty of cloud resources with whatever IaC platform they use, and Kubernetes operators neither cover those nor are a reasonable approach to the problem domain.
> I mean Crossplane is a pretty popular k8s operator that does exactly that, create cloud infrastructure from K8s objects.
If your only tool is a hammer then every problem looks like a nail. It's absurd that anyone would think it's a good idea to implement their IaC infrastructure, the one thing you want and need to be bootstrappable, in a way that requires a full-blown K8s cluster already up-and-running with custom operators perfectly configured and working flawlessly. Madness.
I hope someone somewhere has managed to run a K8s cluster on a bunch of EC2 instances that are themselves described as objects in that K8s cluster. Maybe the VPC is also an object in the cluster.
Learn the lingo, the language, the proper way of posturing and the correct way to shirk responsibility and that's what matters in certain orgs.
I sound really bitter, but I'm not; I'm actually quite good at the game and I've proven that. I just don't really like the game because it doesn't translate into being able to take pride in what I've done. It's all about serving egos. Your own and others'.
Every French multinational I've worked for is entirely built on this.
> If the Google culture was at all obsessed about helping users, I wonder why Google UX always sucked so much
Ok, I mean this sincerely.
You must never have used Microsoft tools.
They managed to get their productivity suite into schools 30 years ago to cover UX issues, even now the biggest pain of moving away is the fact that users come out of school trained on it. That also happens to be their best UX.
Azure? Teams? PowerBI? It's a total joke compared to even the most gnarly of google services (or FOSS tools, like Gerrit).
I do agree with you. Teams is a cancer and the Azure UI sucks too. I haven't used many MS products since essentially Win7; I have mainly used Linux as my work environment. But one thing MS used to be good at, at least, was the documentation. If you are that old, you will remember each product came with extensive manuals AND there was actual customer support. With Google it's like... not even that.
With continuous delivery and access to preview and beta features, the documentation is fragmented and scattered, and half of it is technically for the previous version of the product under a different name but still mostly works, because Microsoft can't finish modernizing most software...
And the customer support is not great until you start really paying the big bucks for it.
> If you are that old, you will remember each product came with extensive manuals AND there was actual customer support.
But even then, contemporaries outclassed Microsoft by a lot.
It was culture back then to provide printed user manuals, I still have some from Sun Microsystems because it was the best resource I found to learn how storage appliances should work and the technical trade-offs of them.
Fair enough, everyone delivered software in boxes and with 500 page manuals. I still maintain MS did invest a lot in the quality of their documentation and they cared about developers - otherwise the Petzold series would have never happened (or the MS Press for that matter).
Honestly your entire comment is almost exact polar opposite to how I feel.
GCP makes total sense if you know anything about systems administration. Google Docs is limited for things like custom fonts (IE; not gonna happen) but it's simple at least, and I can give people a link to click and it's gonna look the same for them.
But, honestly, the Teams one is baffling. I can't think of a single thing Meet does worse than Teams.
Yeah that seriously whiplashed me too, I'm genuinely confused. Google Meets has always worked completely fine for me, good performance, works well on mobile, Firefox, etc. Nothing special but it works. Probably my favorite of all the meeting apps.
Teams meanwhile is absolutely my least favorite, takes forever to load, won't work in Firefox, nags me to download the app, confusing UI. I don't think I've ever heard anyone say they like teams.
I've used Meet a few times for video calls and I was amazed at how poorly it worked given the amount of resources Google has at their disposal. I've never had a good video call on Meet. I've had a few Meet calls where over time the resolution and bitrate would be reduced to such a low point I couldn't even see the other person at all (just a large blocky mess). Whereas Teams (for all its flaws) normally has no major issues with the video quality. Teams isn't without its flaws and I do occasionally fall back to Zoom for larger group video calls, but at the end of the day Teams video calling sort of just works fine. Not great but not terrible either. YMMV of course.
I've had the complete opposite experience. Meet has been rock solid for me whilst Teams has been an absolute nightmare.
The thing is though both Meet and Teams use centralised server architectures (SFUs: Selective Forwarding Units for Google, "Transport Routers" for Teams), so your quality issues likely come down to network routing rather than the platforms themselves. The progressive quality degradation you're describing on Meet sounds like adaptive bitrate doing its job when your connection to Google's servers is struggling.
The reason Teams might work better for you is probably just dumb luck with how your ISP routes to Microsoft's network versus Google's. For me in Sweden, it's the opposite ... Teams routes my media through relays in France, which adds enough latency that people constantly interrupt each other accidentally. It's maddening. Meanwhile, Meet's routing has been flawless.
But even if Teams works for your particular network setup, let's not pretend it's a good piece of software. Teams is an absolute resource hog that treats my CPU like a space heater and my RAM like an all-you-can-eat buffet. The interface is cluttered rubbish, it takes ages to start up, and the only reason anyone tolerates it is because Microsoft bundled it with Office 365.
Your mileage definitely varies... sounds like you've got routing that favours Microsoft's infrastructure. Lucky you, I suppose, but that doesn't make Teams any less dogwater for those of us stuck with their poorly-placed European relays.
How large does the canvas need to get before pagination makes sense?
Modern websites are enormous in terms of how much needs to be loaded into memory- sure, not all of it is part of the rendered document, but is there a limit to the canvas size?
I'm thinking you could probably get 100,000+ entries and still be able to use CTRL+F on the site in a responsive way, since even at 100,000+ entries you're still only at about 10% of Facebook's "wall" application page. (Without additional "infinite scroll" entries)
I made the jump to Hugo too (from a managed service: svbtle) a long time ago, but I'll be really honest...
I regret it.
I decided to use an off-the-shelf theme, but it didn't quite meet my needs so I forked it; as it so happens, Hugo breaks userland relatively often, and a complex theme like the one I have requires a lot of maintenance. Like... a lot.
Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
So, advice: commit the binary you used to generate the site to source control. I know git isn't the best at binary files, but I promise you'll thank me at some point.
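Concretely, that's just something like this (with the obvious caveat, raised below, that it pins you to one OS and architecture):

```sh
# Copy whichever hugo binary currently builds the site into the repo
# and commit it alongside the content (assumes hugo is on your PATH).
mkdir -p bin
cp "$(command -v hugo)" bin/hugo
git add -f bin/hugo      # -f in case bin/ is in .gitignore
git commit -m "Pin the exact Hugo binary used to build this site"
```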
I’ve slowly grown to realize there’s some software you just don’t need to update. A static site generator (almost certainly) won’t have security issues as long as you control the input and the output is just a bunch of static files.
Unless the new version of the software includes some feature I need, I can be totally fine just running an old version forever. I could just write down the version of the SSG my site builds with (or commit it to source control) and move on with my life. It’ll work as long as operating systems and CPU architectures/whatever don’t change too much (and in the worst-case scenario, I’m sure the tech exists to emulate whatever conditions it needs to run). Some software is already ‘finished’ and there’s no need to update it, ever.
Is there any static site generator where you specify the version you use, and the launcher will simply run the old binary that you want?
Like how most build systems work - for example, when you pin a toolchain version in `rust-toolchain.toml` and only bump it when you explicitly want to. This way a fresh checkout will still use the older version.
I used Zola for my SSG and can't think of the last breaking change I've hit. I just use the pattern of locked nix devshells for everything by default. The extra tools are used for processing images or cooklang files.
> Is there any static site generator where you specify the version you use, and the launcher will simply run the old binary that you want?
For Hugo, there is Hugo Version Manager (hvm)[0], a project maintained by Hugo contributor Joe Mooring. While the way it works isn't precisely what you described, it may come close enough.
I hate to say it, but even the existence of this tool is a danger sign.
I say this as someone who uses Hugo and is regularly burned (singed) by breaking changes.
Pinning your version is great until you trip across a bug (usually rendering, in my case) and need to upgrade to get rid of it. There goes a few hours. I won’t even mention the horror of needing a test suite to make sure the rendering of your old pages hasn’t changed significantly. (I ended up with large portions of text in a code block, never tracked the root cause down… probably something to do with too much indentation inside a bulleted list. It didn’t render that way several years before, though.)
I guess my very own "niccup" (basically hiccup-in-nix) fits that (https://embedding-shapes.github.io/niccup/), as you'll typically always include the library together with a strictly set version, so even when new versions are available, you'd need to explicitly upgrade if you want it.
> In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
Pretty sure the version of Hugo used to generate a site is included in metadata in the generated output.
If you have a copy of the site from when it last worked, then assuming my above memory is correct you should be able to get the exact version number from that. :)
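If the theme renders Hugo's generator tag (`{{ hugo.Generator }}` in the templates, which many themes include), you can grep it straight out of the old output - `old-site/` below is a placeholder for wherever that last-good build lives:

```sh
# Pull the recorded Hugo version out of the last build that worked.
grep -rhoi '<meta name="generator" content="Hugo [^"]*"' old-site/ | sort -u
```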
Oh definitely. How can you suggest adding a binary to a git repository? It's a bad idea on many levels: it bloats the repository by several orders of magnitude, and it locks you to the chosen architecture and OS. Nope, nope, nope.
Second this. Once I set up GitHub Actions with Hugo (there’s one readily available), I rarely build the blog locally anymore. New article drafts become GH pull requests, and once ready they get merged and published. This also works on mobile well enough.
If you use an off-the-shelf binary for any tool, you can put the binary in `${project}/bin/`, add it to `.gitignore`, document the download URL in `README.md` or an install script, and commit the checksum in a project-wide `SHA256SUMS` file (or `B3SUMS`, etc.). It's like a lo-fi version of Git LFS.
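A rough sketch with Hugo as the pinned tool - the version and platform in the URL are placeholders, adjust to taste:

```sh
mkdir -p bin
# Fetch the pinned release (placeholder version/platform in the URL).
curl -fsSL -o /tmp/hugo.tar.gz \
  "https://github.com/gohugoio/hugo/releases/download/v0.NN.N/hugo_0.NN.N_linux-amd64.tar.gz"
tar -xzf /tmp/hugo.tar.gz -C bin hugo

# Commit the checksum, not the binary itself.
sha256sum bin/hugo >> SHA256SUMS
printf 'bin/\n' >> .gitignore

# On a fresh clone, re-download the same release and verify before building.
sha256sum -c SHA256SUMS
```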
I had the same issue and I'm currently thinking whether it's easier to just Vibe Engineer my own static site generator with the exact features I need vs fighting with the hugo theme system.
My needs for a site are pretty simple, so I might just go with the custom-built one to be honest.
If it breaks, I can just go look in the mirror for the culprit =)
I had a similar issue, but with Jekyll. I had a customized theme and some update along the way broke everything. So, I very much agree with a sibling comment about not needing to update static site generators and it’s not just a Hugo thing. Sadly, my site was also being hosted/generated by GitHub, so I had no real choice in the update matter. (I’m not sure if pinning would have helped.)
> So, advice: commit the binary you used to generate the site to source control. I know git isn't the best at binary files, but I promise you'll thank me at some point.
No need for the entire binary.
Just put `go run github.com/gohugoio/hugo@vX.Y.Z "$@"` into a `hugo.sh` script or similar that's in source control, and then run that script instead of the Hugo binary.
You'll need Go installed, but it's incredibly backwards compatible, so updating to newer Go versions is very unlikely to break running the old Hugo version.
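So the whole script is a couple of lines; the version is a placeholder, and if your theme needs Hugo extended (SCSS/Sass) you'll likely also want CGO and the `extended` build tag:

```sh
#!/bin/sh
# hugo.sh - run a pinned Hugo via the Go toolchain (placeholder version).
exec go run github.com/gohugoio/hugo@v0.NN.N "$@"

# For themes that need Hugo extended, something like this instead:
# exec env CGO_ENABLED=1 go run -tags extended github.com/gohugoio/hugo@v0.NN.N "$@"
```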
That's somewhat untrue. Personal software only moves to your constraints. Shared software moves to others' as well. I use Mediawiki for my site (I would like others to be able to edit it) and version changes introduce changes in more than the sections I care about.
They tend to change, and when I want to do something that the generator does not do, I either need to hack it in (which might break) or I need to fork the generator.
Binary search is a very old trick, going back to 1946 on computers, and probably thousands of years before that, since searching sorted lists goes back to at least ancient Babylon. https://en.wikipedia.org/wiki/Binary_search
I've been burned by this a few times and now I have the Hugo binary in source control. I had to dig through the releases a little bit to find the version that didn't break everything.
> I decided to use an off-the-shelf theme, but it didn't quite meet my needs so I forked it; as it so happens, Hugo breaks userland relatively often, and a complex theme like the one I have requires a lot of maintenance. Like... a lot.
> Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
I've had the same issues as you, and yes, I agree that pinning a version is very important for Hugo.
It's more useful for once-and-done throwaway sites that need some form of structure that a static site generator can provide.
At least it’s practical to identify a specific version to use, and you can be reasonably confident it will work indefinitely. I remember that with the Hyde iteration of my site, somewhere along the way Hyde became impossible to install, and I was stuck with an existing installation, or a lot of effort to put it back together manually. Python packaging has improved a lot since then, so that I doubt that problem would apply on any new project, but it’s still far more plausible than in a language like Go or Rust.
I maintained a personal fork of Zola for my site (and a couple of others), and am content to just identify the Git repository and revision that’s used.
Zola updates broke my site a few times, quite apart from my patches not cleanly rebasing. I kept on the treadmill for a while, initially because of a couple of new features I did want, but then decided it wasn’t necessary. You don’t need to run the latest version; old is fine.
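That pin is also easy to reproduce later; roughly (repository URL and revision are placeholders for whatever you recorded):

```sh
# Rebuild the exact generator the site was last built with.
git clone https://github.com/example/zola-fork.git
git -C zola-fork checkout a1b2c3d       # the recorded revision (placeholder)
cargo install --locked --path zola-fork
```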
—⁂—
One piece of advice I would give for people updating their SSG: build your site with the old and new versions of the SSG, and diff the directories, to avoid regressions.
If there are dynamic values, normalise both builds before diffing: for example, if you have timestamp-based cachebusting, zero all such timestamps with something like `sed -i -E 's/\?t=[0-9]+/?t=0/g' **/*`. Otherwise regressions may be masked.
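In script form, the check looks roughly like this (`zola-old`/`zola-new` stand in for the two pinned binaries; swap in whatever generator you use):

```sh
# Build with the old and new generator versions into separate trees.
zola-old build --output-dir public-old
zola-new build --output-dir public-new

# Normalise dynamic values in both trees before comparing,
# e.g. the timestamp cachebusting mentioned above.
find public-old public-new -type f -name '*.html' \
  -exec sed -i -E 's/\?t=[0-9]+/?t=0/g' {} +

# Whatever remains is a candidate regression.
diff -ru public-old public-new
```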
I caught breakages a couple of times this way. Once was due to Zola changing how shortcodes or Markdown worked, which I otherwise might not have noticed. (Frankly, Markdown is horrible for things like this, and Zola’s shortcodes badly-designed; but really it’s mostly Markdown’s fault.)
A pretty light-grey comment as I came across it. Maybe I’m missing something odious about it? People downvote this, but as a VERY skeptical AI skeptic, it’s exactly the sort of use case that makes sense to me:
A) Low-stakes application with
B) nearly no attack surface that
C) you don’t use consistently enough to keep in your head, but
D) is simple enough for an experienced software developer to do a quick sanity check on and run it to see if it works.
Hell, do it in a sandbox if you feel better about it.
If it was a Django/Node/rails/Laravel/…Phoenix… (sorry, I’ve been out of my 12+ years web dev career a short 4 years and suddenly realized I can only remember like 4 server-side frameworks/environments now) application, something that would run on other people’s devices, or really anything else that produces an executable output, then yeah fuck that vibe coding bullshit. But unless you’ve got that thing spitting out an SPA for you, then I say go for it.
Yeah I feel like Claude Code is basically tailor made for a use-case like this. Where:
* I have forked some public repository that has kept up with upstream (IE; lots of example code to draw from)
* Upstream is publishing documentation on what's changing
* The errors are somewhat google-able
* Can be done in a VM and thrown away
* Limited attack surface anyway.
I think you're downvoted because the comment comes across as glib and handwavy (or not moving the discussion forward.. maybe?), and if it was a year ago I would probably argue against it.. but I think Claude Code can definitely help with this.
It just didn't exist in its current form back in 2023-ish, or whenever it was that I originally started having issues.
---
That said: it shouldn't be necessary. As others in this thread have articulated (well, imo), sometimes software is "done", and Hugo could be "done" software, except it's not; so the onus is on the operator to pin whichever version they consider "done".. which is not what you'd expect.
What kind of issues? I use my own private theme called Brahma which I wrote from scratch. I keep it simple, and it has been that way since 2019. I have barely had any issues.
Granted, mine is not sophisticated at all and is simple by design. But I'm curious what kind of issues pop up.
Nobody can point to a reason why it's a good idea for a site with any interactivity now.
All the supporters here are all the same: "I had to do a whole bunch of mental gymnastics and compromises to get <basic server side site feature> but it's worth it!" But they don't say why it was worth it, beyond "it's easy now <after lots of work costs sunk>".
When you try get to why they did it in the first place, it's universally some variation on "I got fed up with <some large server side package> so took the nuclear SSG route <and then had to eventually rewrite or get someone else's servers involved again>"
Part of this is a me problem: a personal website should be owned by the person, IMO. A lot of people are fine to let other people own parts of their personal websites, and SSGs encourage that. What even is a personal website if it's a theme that looks like someone else's, hosted and owned on someone else's server - why not just use Facebook at that point?!
I was nodding along until your last paragraph - SSGs encourage letting other people own parts of your personal site, really? Sure, people bolt on Disqus or something, but otherwise I am not sure I follow the argument. Isn't part of the appeal of SSGs that all you have is a bunch of html/css/js that you can drop on any server anywhere (even a solar-powered RPi can serve a lot of requests[1])?
> Isn't part of the appeal of SSGs that all you have is a bunch of html/css/js that you can drop on any server anywhere (even a solar-powered RPi can serve a lot of requests[1])?
This is the part I'm struggling with. That's the view I held from 2016 - 2024. Practically though, it's only true if you want a leaflet website with 0 interactivity.
If you want _any_ interactivity at all (like, _any_ written data of any kind, even server or visitor logs) then you need a server or a 3rd party.
This means for 99% of personal websites with an SSG, you need a real server or a 3rd party service.
When SSGs first came around (2010 - 2015) compute was getting expensive, server sides were getting big and complex, bot traffic solutions were lame, and all the big tech companies started offering free static hosting because it was an easy free thing to offer.
Compare this to now, 2026: it's apparently nothing special to handle the Hacker News front page on free or cheap compute. Things like Deno, Bun, even Go and Python make writing quick, small, modern server sides so much quicker, easier and safer. Cloudflare and/or CrowdSec can cover 99% of bot and traffic issues. It's possible to get multiple free multi-GB compute instances now.
I didn't mean to imply there's some sinister plot of people maliciously encouraging people to use SSGs to steal their stuff, but that's the reality that modern personal webdev has sleepwalked into. SSGs were first sold to make things better performing and easier than things were at the time. Pretty much any "server anywhere" you own now will be able to run a handwritten server doing SSR markdown -> HTML now.
So why force yourself to start entertaining ideas like making your visitors download multi-megabyte client-side index files to implement search, or embedded iframes and massive external JS libraries for things like comment sections? Easier-looking SSG patterns like that typically break all the stuff required to keep the web open and equal, like screen readers, low-bandwidth connections and privacy. (Obviously SSR doesn't implicitly solve these, but many of these things were originally conceived with SSR in mind and so are naturally more compatible.)
Ask anyone who's been in and out of web dev for more than 15 years to really critically think about SSGs in depth, and I think they'll conclude they offer a complete solution for maybe 1% of websites, yet they seem to be recommended in 99% of places as the only worthy way to do websites now. But when you pick it apart and try it, you end up in Jeff's position: statically rendered pages (the easy bit) and a TODO with a list of compromising options for basic interactivity. In 5 years' time, he'll have complex SSG pipelines that are running almost 24/7, or a complex mesh of dependencies on external services that are constantly changing or trying to start charging him more to deal with his own creations.
I can try that version, but it's entirely possible (and even likely) that I was already using an old version of Hugo then; whatever was installed by my package manager - assuming I updated my machine somewhat recently.
If I used MacOS then Hugo was probably very old, since I often forget to update brew packages and end up running very old software.
But, that's what I thought to do first also.
In the end, it becomes not worth the hassle, and spending time fixing it means that whatever I was going to write gets pushed out of my head, and it's very difficult to even bother.
It may be worth considering whether you need a native binary (and the ability to run it) for the job at all. A static site generator doesn't need to do anything that browsers from the last 10 years can't do; a static site generator is fundamentally a classic batch processing job that takes a collection of (mostly plain text) files as input, processes it, and then outputs something else (in this case, a collection of post-processed content for the site).
If you encode the transformations that your desired SSG should perform by writing the processing rules as plain text source code that a browser is capable of executing (i.e., an "HTML tool" or something adjacent[1][2]), then you can just publish this "static site generator" itself as yet another page on your static site.
To spell it out: running the static site generator to create a new post* doesn't need to involve anything more than hitting /new.html (or whatever) on the live site, clicking the button for the type=file input on that page, using the browser file picker to open the directory where the source to your static site lives, and then saving the resulting ZIP somewhere so the contents can be copied to whatever host you're using.
I have an auto-whitelist for senders that handle greylisting properly, which means that the first signup email does indeed fail, but the second works.
On rare occasions I get frustrated by this, and I'm forced to log in via SSH and manually permit a greylisted address through - though normally I am not so time-sensitive. My greylisting delay is only 5 minutes.
Yes, it's a bit hidden away now - the UX has worsened over time and it will prompt you to buy Apple Music’s subscription service - but you can still buy songs via iTunes on iPhone.
This is how I get most of my music, then I copy the songs to my NAS to play on Linux.
Less convenient for sure, and you have to take the backups[0] yourself.
I think the western world very much revolves around:
* The internet
* Linux servers
* Automation
I get your point, but it falls on deaf ears to me since most people don’t feel the benefits until some passionate nerd makes something that scratches an itch.
For a practical example: peer-to-peer sharing like Airdrop is much easier to implement in a world with ipv6.
DX12 is less and less the default; most gamedev that I’ve seen is centered around Vulkan now.
DX12 worked decently better than OpenGL before, and all the gamedevs had Windows, and it was required for Xbox… but now those things are less and less true.
The PlayStation was always the “odd man out” when it came to graphics processing, and we used a lot of shims, but then Stadia came along and was proper Linux, so we rewrote a huge amount of our renderer to be better behaved for Vulkan.
All subsequent games on that engine have thus had a Vulkan-friendly renderer by default, one that is implemented more cleanly than the DX12 one and works natively pretty much everywhere. So it's the new default.
> Definitely, that's why systems like Zuul exist.
> They're esoteric and require a lot of engineering discipline and patience [...] a tool like Zuul is hard to learn and will intentionally slow you down.
Because slow is smooth, and smooth is fast.