Most expensive pint I've paid round here was £6, so pubs are about 2x that - about half an hour of adult minimum wage, same as spoons charged 25 years ago.
So how do spoons make a profit?
The main difference that I see is that they buy cheap properties and thus don't have crushing rents.
What this page doesn't show is the increase in rent for these buildings.
One thing I've heard is that they have consistent high throughput so they will buy beer that's closer to expiry and hence cheaper, because they know people will drink it before it goes off.
Dunno how much of an effect that is, it can only account for so much.
Think about the price of a keg of beer - much cheaper/pint than buying beer at a pub or from anywhere else in a smaller size. Very high-volume customers have contracts with distributors that can get them even better deals, sometimes significantly better.
Alcohol is pretty much always sold at a huge markup though - 4-5x is standard in the US. UK regulation might be different, but it's likely that the majority of costs in the pub business are in insurance and licensing rather than alcohol and rent.
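As a rough sketch of how that markup plays out - all the numbers here are hypothetical illustrations, not real wholesale or retail prices:

```python
# Back-of-envelope pub margin sketch. Every figure is an assumption
# for illustration, not an actual price.
keg_cost = 100.0      # assumed wholesale cost of an ~11-gallon keg, GBP
pints_per_keg = 88    # 11 imperial gallons is roughly 88 pints
price_per_pint = 6.0  # assumed retail price, GBP

cost_per_pint = keg_cost / pints_per_keg
markup = price_per_pint / cost_per_pint

print(f"cost/pint: £{cost_per_pint:.2f}, markup: {markup:.1f}x")
```

Even with generous assumptions the per-pint wholesale cost lands near £1, so a 4-5x markup on the alcohol itself is plausible; the question is how much of the retail price the other costs (rent, staff, insurance, licensing) eat.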
To be fair actually £6 a pint does sound more like it, I think I'm getting confused with rounds (so I most often spend £10-£12, but I'm buying two pints)
Due to the history of the internet, anything ".com" should be assumed to be US-specific if not obviously global, just like anything ".co.uk" should be assumed to be UK-specific if not obviously global.
If you use a .com for something that is specific to a country/region that is not the US, the onus is on you to clarify. That's the problem here. If you're not going to make it ".uk", then you should be making that obvious on the homepage.
Due to the history of the internet, anything ".com" should be assumed to be a commercial entity.
If you are from the US, the only nation who doesn't frequently use a national TLD, the onus is on you to judge if a site is commercial, US-specific, global, or something else entirely.
I mean... I don't disagree that there is an onus on any website to make it clear who its audience is. But .com hasn't been exclusively US-centric for literally decades. Even during peak 90s domain name territorialism, .com meant "commercial".
People outside the USA, i.e. the majority of the world, often experience the opposite to what you've described: the tiresome implicit assumption that everything on the internet is US-related by default. It's not.
I don't think that's in any doubt. Even beyond programming, imo especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?
NFTs were cheap enough to produce, and that didn't really scale depending on the "quality" of the NFT. With an LLM, if you want to produce something at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.
This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.
All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers etc. Meanwhile you could already replace management and execs with it and save 50% of the costs, but no one is talking about that.
At this point the "trick" is to scare white collar knowledge workers into submission with low pay and high workload with the assumption that AI can do some of the work.
And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale: improving morale would increase output in a much more holistic way. Scare the workers and you end up with spaghetti as everyone merges their crappy LLM-enhanced code.
"Just replace management and execs with AI" is an elaborate wagie cope. "Management and execs" are quite resistant to today's AI automation - and mostly for technical reasons.
The main reason being: even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks - which are exactly the kind of tasks the management has to handle. See: "AI plays Pokemon", AccountingBench, Vending-Bench and its "real life" test runs, etc.
The performance at long-horizon tasks keeps going up, mind - "you're just training them wrong" is in full force. But that doesn't change that the systems available today aren't there yet. They don't have the executive function to be execs.
> even SOTA AIs of today are subhuman at highly agentic tasks and long-horizon tasks
This sounds like a lot of the work engineers do as well. We're not perfect at it (though execs aren't either), but the work you produce is expected to survive long term; that's why we spend time accounting for edge cases and so on.
Case in point: the popularity of docker/containerization. "It works on my machine" is generally fine in the short term, since you can replicate the conditions of the local machine relatively easily, but doing that again and again becomes a problem, so we prepare for that (a long-horizon task) by using containers.
Some management would be cut when the time comes. Execs, on the other hand, are not there for the work; they're in place due to personal relationships, so they're impossible to fire. If you think someone like, say, Satya Nadella can't be replaced by a bot that takes different input streams and then makes decisions, you're joking. Even his recent end-of-2025 letter was mostly written by AI.
If an AI exec reliably outperformed meatbag execs while demanding less $$$, many boards would consider that an upgrade. Why gamble on getting a rare high performance super-CEO when you can get a reliable "good enough"?
The problem is: we don't have an AI exec that would outperform a meatbag exec on average, let alone reliably. Yet.
Yeah. Obviously. Duh. That's why we keep doing it.
Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing down onto a thing I surely would have missed myself.
If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?
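Taking the comment's own figures ($3 of tokens, 10 hours saved) and an assumed developer rate - the hourly figure below is a hypothetical illustration, not a real salary:

```python
# Comparing the stated ~$3 of API tokens against the developer time saved.
# token_cost and hours_saved come from the comment; dev_rate is an
# assumed fully-loaded hourly rate, purely for illustration.
token_cost = 3.0   # USD, from the comment
hours_saved = 10   # from the comment
dev_rate = 75.0    # assumed hourly rate, USD

human_cost = hours_saved * dev_rate
print(f"human: ${human_cost:.0f} vs tokens: ${token_cost:.0f} "
      f"({human_cost / token_cost:.0f}x difference)")
```

Whatever rate you plug in, the gap is two orders of magnitude - which is the point being made, with the caveat (raised below) that the engineer is usually needed anyway.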
For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a rate of 1:10 or more. And API costs at all major providers represent selling the model with a good profit margin.
So could the company hiring you to do that work fire you and just use Opus instead? If not, then you cannot compare an engineer's salary to what Opus costs, because the engineer is needed anyway.
> And API costs at all major providers represent selling the model with a good profit margin.
Though we don't know for certain, this is likely false. At best, it's looking like break even, but if you look at Anthropic, they cap their API spend at just $5,000 a month, which sounds like a stop loss. If it were making a good profit, they'd have no reason to have a stop loss (and certainly not that low).
> Yeah. Obviously. Duh. That's why we keep doing it.
I don't think so. I think what is promised is what keeps spend on it so high. I'd imagine if all the major AI companies were to come out and say "this is it, we've gone as far as we can", investment would likely dry up
But now instead of spending 10 hours working on that, he can go and work on something else that would otherwise have required another engineer.
It's not going to mean they can employ 0 engineers, but maybe they can employ 4 instead of 5 - and a 20% reduction in workforce across the industry is still a massive change.
That's assuming a near 100% success rate from the agent, meaning it's not something he needs to supervise at all. It also assumes that the agent is able to take on the task completely, meaning he can go do something else which would normally occupy the time of another engineer, rather than simply doing something else within the same task (from the sounds of things, it was helping with debugging, not necessarily actually solving the bug). Finally, and most importantly, the 20% reduction in workforce assumes it can do this consistently well across any task. Saving 10h on one task is very different from saving 10h on every task.
Assuming all the stars align though and all these things come true, a 20% reduction in workforce costs is significant, but again, you have to compare that to the cost of investment, which is reported to be close to a trillion. They'll want to see returns on that investment, and I'm not sure a 20% cut (which, as above, is looking like a best case scenario) in workforce lives up to that.
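To put rough numbers on that comparison - every input below is a made-up illustration, not a real statistic, except the ~$1T investment figure cited above:

```python
# How a 20% workforce saving stacks up against a ~$1T investment.
# All inputs are assumptions for illustration only.
investment = 1e12            # reported AI investment, USD (per the comment)
developers = 4_000_000       # assumed addressable headcount
avg_fully_loaded = 150_000   # assumed annual cost per developer, USD

annual_saving = 0.20 * developers * avg_fully_loaded
payback_years = investment / annual_saving
print(f"annual saving: ${annual_saving / 1e9:.0f}B, "
      f"payback: {payback_years:.1f} years")
```

With these (generous) assumptions the best-case saving is on the order of $100B a year, so recouping a trillion takes the better part of a decade before any return - which is the worry being expressed.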
Funnily enough, in all my years of using git, this thread is the first time I've encountered merge. It sounds easier I suppose, but I don't really have a problem with rebase and will likely just continue as is.
That leads to the obvious question: is the API next on the chopping block? Or would they just increase the API pricing to a point where they are A) making a profit off it and B) nobody would use the API just for a different client?
I'm pretty sure everyone is pricing their APIs to break-even, maybe profit if people use caching properly (like GPT-5 can do if you mark the prompts properly)
A little off topic, but this seems like one of the better places to ask where I'm not gonna get a bunch of zealotry: a question for those of you who like using AI for software development, particularly using Claude Code or OpenCode.
I'll admit I'm a bit of a sceptic of AI but want to give it another shot over the weekend, what do people recommend these days?
I'm happy spending money but obviously don't want to spend a tonne since it's just an experiment for me. I hear a lot of people raving about Opus 4.5, though apparently using that is close to $20 a prompt; Sonnet 4.5 seems a lot cheaper, but then I don't know if I'm giving it (by it I mean AI coding) a fair chance if Opus is that much better. There's also OpenCode Zen, which might be a better option, I don't know.
If you want to try Opus you can get the lowest Claude plan for $20 for the month, which has enough tokens for most hobby projects. I've been using it to vibe code some little utilities for myself and haven't hit the limits yet.
Oh nice, I saw people on reddit say that Opus 4.5 will hit that $20 limit after 1-3 prompts, though maybe that's just on massive codebases. Like you, I'd just want to try it out on some hobby projects.
> I saw people on reddit say that Opus 4.5 will hit that $20 limit after a 1-3 prompts
That's people doing real vibe-coding prompts, like "Build me a music player with...". I'm using the $20 Codex plan, and by getting it to plan first and then execute (in the same way I, an experienced dev, would instruct a junior) I haven't even managed to exhaust my 5-hour window limits, let alone the weekly limit.
Also if you keep an eye on it and kill it if it goes in the wrong direction you save plenty of tokens vs letting it go off on one. I wasted a bunch when Codex took 25 minutes(!) to install one package because something went wrong and instead of stopping and asking it decided to "problem solve" on its own.
Claude Code is the best thing I’ve used, though it’s been a while since I tried other tools. If you want to give it a fair shot, that’s what I’d use. I use the Max plan, I think. I write my initial instructions in markdown with pseudo code for the parts that really matter.
Take some existing code and bundle it into a zip or tar file. Upload it to Gemini and ask it for critique. It's surprisingly insightful and may give you some ideas for improvement. Use one of the Gemini in-depth models like Thinking or Pro; just looking at the thinking process is interesting. Best of all, they're free for limited use.
Wanted to try more of what I guess would be the opposite approach (it writes the code and I critique), partially to give it a fair shake and partially just out of curiosity. Also, I can't lie, I always have a soft spot for a good TUI, which no doubt helps.
Oh nice, so Claude/OpenAI isn't as important as (Claude)Code/Codex/OpenCode these days? How is OpenCode in comparison? The idea of Zen does seem quite nice (a lot of flexibility to experiment with different models), though it does seem like a bit more config and work upfront than CC or Codex.
I'd say OpenCode > Codex > Claude Code in terms of the TUI interface UX. OpenCode feels a lot nicer to use. I haven't noticed a code quality difference, only a difference in the UX
I'm not sure about Zen, but OpenAI seems to be giving me $20 / week worth of tokens within the $20/month
Also for absolutely free models, MiniMax M2.1 has been impressive and useful to me (free through OpenCode). Don't judge the state of the art through the lens of that, though
Bit of an update on Zen: it looks like Anthropic have blocked Claude usage outside of Claude Code, so if I did want to use Opus, it'd have to be through that. They might reverse it or OpenCode might find a way round it, but overall I'd say at this point it's safest to assume, if you're starting fresh with this, that you go with one or the other.
Still not sure which one I'll go with, though I can't say I feel too keen to get into Claude after that
> Also, you have to learn it right now, because otherwise it will be too late and you will be outdated, even though it is improving very fast allegedly.
This in general is a really weird behaviour that I come across a lot, I can't really explain it. For example, I use Python quite a lot and really like it. There are plenty of people who don't like Python, and I might disagree with them, but I'm not gonna push them to use it ("or else..."), because why would I care? Meanwhile, I'm often told I MUST start using AI ("or else..."), manual programming is dead, etc... Often by people who aren't exactly saying it kindly, which kind of throws out the "I'm just saying it out of concern for you" argument.
fear of missing out, and maybe also a bit of religious-esque fervour...
tech is weird, we have so many hype-cycles, big-data, web3, nfts, blockchain (i once had an acquaintance who quit his job to study blockchain cause soon "everything will be built on it"), and now "ai"... all have usefulness there but it gets blown out of proportion imo
Nerd circles are in no way immune to fashion, and often contain a strong orthodoxy (IMO driven by cognitive dissonance caused by the humbling complexity of the world).
Cargo cults, where people reflexively shout slogans and truisms, even when misapplied. Lots of people who’ve heard a pithy framing waiting for any excuse to hammer it into a conversation for self glorification. Not critical humble thinkers, per se.
Hype and trends appeal to young insecure men, it gives them a way to create identity and a sense of belonging. MS and Oracle and the rest are happy to feed into it (cert mills, default examples that assume huge running subscriptions), even as they get eaten up by it on occasion.
There are a lot of comments to the tune of "why does a CSS library need $1m+ (or any money at all) to survive?". I'm no expert on this kind of thing, but Tailwind 0.1.0 was first released in November 2017. Since then, there have been continual improvements up until last month with 4.1.18, totalling 8 years of dev work. A simple CSS library wouldn't have much need to go past 0.1.0, certainly not 1.0.0. Clearly Tailwind did, which would imply there's more than meets the eye.
But you can't have it both ways, it can't be just a simple CSS library that doesn't need that much money, but also expect a decade of work+ on it. After all, this originally stems from the fact that a PR attempting to improve something didn't get merged in; a technically finished project would have the same problem, but that would be the rule rather than the exception.
I'm more of a backend guy but afaik most popular backend frameworks like Django, Rails, Laravel etc have 10+ years of top-level work and run on much smaller annual budgets.
Not saying that it's right, and there's a whole philosophical debate about open source being financially sustainable, but in terms of "You can't expect a decade of work for free" - I think you can and many people do.
> "You can't expect a decade of work for free" - I think you can and many people do.
You can't. People can give a decade of work away for free, and that's a very nice thing to do, but it's not an obligation and never should be. You are right, people are now expecting it, and that's why the push against that expectation is so important.
I had a similar thought. If a project like Vue or Nuxt can stay afloat with consistent development and updates, without suffering financial difficulties, then it's worth asking why Tailwind hasn't been able to do the same. Yes it is a huge project, with incredible support across all browsers, and needs a lot of care. That's for sure. But I think the business decisions taken by the Tailwind team can be put in the spotlight in this case.
I could dig and fill in holes in my backyard for 8 years but that doesn't mean I created value or justified the time spent. The library has been good enough for widespread adoption since like 2020 at the latest - did it really need a team of 9 people working on it the last six years? What is there to show for that?
Sure, but if you stop digging and filling in those holes nobody is gonna care. People clearly do care if Tailwind stops development; that's where this whole thing stemmed from: someone opened a PR and it wasn't getting merged in.
If there is no value in newer Tailwind versions, then why would anybody upgrade past 1.0? Clearly there is value that you don't recognize.
I mean, I'm not a Tailwind user so I don't either. But it's incredibly easy to take open source value for granted. That's why so many maintainers burn out.
> shouldn't we be seeing a ton of 1 person startups?
After months of hearing that people are producing software in months that would normally take years, the best examples of vibe coded software I've seen look like they would normally take months, not years. If you don't care how they're built or how long it took (which a user generally doesn't), much of the remaining shine comes off.
If I'm wrong, I'd love to see it. A genuinely big piece of software produced entirely (or near entirely?) with AI that would've normally taken talented engineers years to build.
Do you have any idea of the man-hours it took to build those large projects you're speaking of? Let's take Linux for example. Suppose for the sake of argument that Claude Code with Opus 4.5 is as smart as an average person (AGI), but with the added benefit that it can work 24/7. Suppose now I have millions of dollars to burn and am running 1000 such instances on max plans, and that I started running these agents the day Claude Opus 4.5 was released, prompting them to create a commercial-grade multi-platform OS of the calibre of Linux.
One estimate puts the Linux kernel at 100 million man-hours of work. Divide by 1000 and each instance still has 100,000 hours to do; even running 24/7, that's over 11 years, so from these calculations we'd expect a functioning OS like Linux sometime around 2037.
How long has Opus 4.5 been out? Two months.
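Worked through as a quick sanity check, using only the figures assumed in the comment above (100M man-hours, 1000 agents, 24/7 operation):

```python
# Back-of-envelope: the comment's own assumptions, worked through.
man_hours = 100_000_000   # estimated man-hours in Linux (per the comment)
agents = 1000             # parallel Claude instances (per the comment)
hours_per_year = 24 * 365 # each agent runs 24/7

years = man_hours / agents / hours_per_year
print(f"~{years:.1f} years of wall-clock time")
```

That's roughly 11 years of wall-clock time under maximally generous assumptions, against two months of actual availability - a gap of nearly two orders of magnitude.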
Linux is valuable, because very difficult bugs got fixed over time, by talented programmers. Bugs which would cause terrible security problems of external attacks, or corrupted databases and many more.
All difficult problems are solved, by solving simple problems first and combining the simple solutions to solve more difficult problems etc etc.
Claude can do that, but you seriously overestimate its capabilities by a factor of a thousand or a million.
Code that works but is buggy is not what Linux is.
Linux is 34 years old; most large software projects are not. Also, you're using a specific version of Claude, and sure, maybe this time is different (unlike every other time I've heard that over the past 5 years). I don't buy it, but let's go along with it. Going off that, we have the equivalent of 2 years of development time according to what's being promised. Have you seen any software projects come out of Claude Opus 4.5 that you'd guess to have been a 2-year project? If so, please do share.
I’m building an ERP system. I’ve already been at it for 3 years (full time, but half the system is already in production with two tenants, so not all of my time is spent on completing the product; this revenue completely sustains the project). AI is now speeding this up tremendously. Maybe 2x velocity, which is a game changer but more realistic than what you hear. The post-AI features are just as good and stable as pre-AI. Why wouldn’t they be? I’m not going to put “slop” into my product; it’s all vetted by me. I do anticipate that when the complexity is built out and there are fewer new features and more maintaining/improving, the productivity gain will be immense.
I'm not discounting your experience, but purely from an experiment-design perspective, you don't have any sort of pre/post-AI control. You've spent 3 years becoming a subject-matter expert who's building software in your domain; I'm not surprised AI in its current form is helpful. A more valuable comparison would be something like: if you kept going without AI, how long would it take someone with similar domain experience, just starting their own solution with AI, to catch up?