Hacker News | aurareturn's comments

I liked Contact and I liked Interstellar.

But you might not need 5 tech writers anymore. Just 1 who controls an LLM.

Perhaps. Could the same be said for engineers?

Yes. That could be said for engineers as well.

If the business can no longer justify 5 engineers, then they might only have 1.

I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.

E.g.:

2022: 100 companies employ 10,000 engineers

2026: 1000 companies employ 10,000 engineers

The net result is the same for employment. But because AI makes development that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.


There's another scenario... 100 companies employ 1000 engineers

The person you're replying to is obviously and explicitly aware that that is another scenario, and the whole point of their comment was to argue against it and explain why they think something else is more likely. Merely restating the thing they were already arguing against adds nothing to the discussion.

Why do you think this outcome is more likely?

Because this is what capital has told us. Capital always wants to reduce the labour cost to $0.

If labor cost is close to $0, even more businesses that weren’t viable before would become viable.

Do you not see the logic?


Demand is the driver, not only the cost.

Not true. Sell a $0.50 coffee next to Starbucks with the same quality and it will drive demand. Lower prices drive demand.

So, will more people start drinking coffee altogether, or will the same number of people just be redistributed between Starbucks and my new fancy espresso bar?

More people will drink coffee if it’s cheaper and some people from Starbucks will order from you now.

It’s not controversial economics that lower prices drive more demand.


AI is bad because it's automated his job. Luddites in tech. A real contradiction

Not really a contradiction, since the entire point of jobs and the economy at all is to serve the specific needs of humanity and not to maximize paper clip production. If we should be learning anything from the modern era it's something that should have always been obvious: the Luddites were not the bad guys. The truth is you've fallen for centuries old propaganda. Hopefully someday you'll evolve into someone who doesn't carry water for paperclip maximizers.

Luddites are a consistent problem regardless of domain. Planck's principle was born in physics.

Zero labor cost should see the number of engineers trend towards infinity. The earlier comment suggested the opposite — that it would fall to just 1000 engineers. That would indicate that the cost of labor has skyrocketed.

That doesn't make sense. Demand isn't entirely dictated by cost. There is only so much productivity the world is equipped to consume.

What difference does that make? If the cost of an engineer is zero, they can work on all kinds of nonsensical things that will never be used/consumed. It doesn't really matter as it doesn't cost anything.

I'm kinda baffled by your suggestion. That's just not how people or organizations run by people operate. Cost is not the only driver to demand.

> That's just not how people or organizations run by people operate.

Au contraire. It's not very often that the cost of labor actually drops to anywhere close to zero, but we have some examples. The elevator operator is a prime example. When it was costly to hire an operator we could only hire a few of them. Nowadays anyone who is willing to operate an elevator just has to show up and they automatically get the job.

If 1,000 engineers are worth having around, why not an infinite number of them, just like those working as elevator operators? Again, there is no cost in this hypothetical scenario.

> Cost is not the only driver to demand.

Technically true, but we're not talking about garbage here. Humans are always valuable to some degree, just not necessarily valuable enough when there is a cost to balance. But, again, we're talking about zero cost. I expect you are getting caught up in thinking about scenarios where labor still has a cost, perhaps confusing zero cost with zero payroll?


Yes and no.

Five engineers could be turned into maybe two, but probably not less.

It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.

If both leave then you're screwed.

If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.


This was true even before LLMs. Development has always scaled very poorly with team size. A team of 20 heads is like at most twice as productive as a team of 5, and a team of 5 is marginally more productive than a team of 3.

Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.

This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10 person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft, with 170,000% the personnel count, visibly flounders. Imagine doing that in hardware, like if you could just grab some buddies and start a business successfully competing with TSMC out of your garage. That's clearly not possible. But in software, it actually is.


That assumes your backlog is finite.

Is the tech writers' backlog also seemingly infinite, like every tech backlog I've ever seen?


The tech writer backlog is probably worse, because writing good documentation requires extensive experience with the software you're writing documentation about and there are four types of documentation you need to produce.

Yes. Yes it is.

Yes. I have been building software and acting as tech lead for close to 30 years.

I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.

Now, most people don't know how to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.

But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.

Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.


"And it's just plain better at writing code than 60% of my graduating class was back in the day".

Only because it has access to a vast amount of sample code to draw on and recombine parts from. Did you ever consider emerging technologies, like new languages or frameworks that may be much better suited for your area but are new, so there is no codebase for the LLM to draw from?

I'm starting to think about the risk of technological stagnation in many areas.


> Did you ever consider emerging technologies, like new languages or frameworks that may be much better suited for your area but are new, so there is no codebase for the LLM to draw from?

Try it. The pattern matching these things do is unlike anything seen before.

I'm writing a compiler for a language I designed, and LLMs have no trouble writing examples and tests. This is a language with syntax and semantics that does not exist in any training set because I made it up. And here it is, a machine is reading and writing code in this language with little difficulty.

Caveat emptor: it is far from perfect. But so are humans, which is where the training set originated.

> I'm starting to think about the risk of technological stagnation in many areas.

That just does not follow for me. We're in an era where advancements in technology continue to be roughly quadratic [1]. The implication you're giving is that advancements are a step function that will soon hit its final step (or already has).

This suggests that you are unfamiliar with, or unappreciative of, how anything progresses, in any domain. Creativity is a function of taking what existed before and making it your own. "Standing on the shoulders of giants", "pulling oneself up by the bootstraps", and all that. None of that is changing just because some parts of it can now be automated.

Stagnation is the very last thing I would bet on. In part because it means a "full reset" and loss of everything, like most apocalyptic story lines. And in part because I choose to remain cautiously optimistic.

[1]: https://ourworldindata.org/technology-long-run


We have been seeing this happen in real time in the past two years, no?

Yes. But they are now called managers.

How do you prevent an agent from simply calling console.log(process.env.SUPER_SECRET) and then looking at the log?

Great question! You might enjoy this writeup, which in one section explores avoiding the use of shell variables that are not exported as a method of mitigating this risk.

https://linus.schreibt.jetzt/posts/shell-secrets.html
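Roughly, the idea (a minimal sketch of my own, not code from the post; the "agent" and "deploy" commands here are hypothetical placeholders) is that the secret never enters the agent's environment, so console.log(process.env.SUPER_SECRET) has nothing to print, and it is injected only into the one process that genuinely needs it:

  // Minimal sketch (my own illustration, not from the linked post). The
  // "agent" and "deploy" commands are hypothetical placeholders.
  import { spawn } from "node:child_process";

  // Environment for the agent: everything we have, minus the secret.
  const { SUPER_SECRET, ...agentEnv } = process.env;

  // The agent can console.log(process.env.SUPER_SECRET) all it wants --
  // the variable simply isn't in its environment.
  spawn("agent", ["--task", "fix-the-bug"], { env: agentEnv, stdio: "inherit" });

  // The step that genuinely needs the secret gets it explicitly.
  spawn("deploy", ["--to", "prod"], {
    env: { ...agentEnv, SUPER_SECRET: SUPER_SECRET ?? "" },
    stdio: "inherit",
  });

The key point is that the secret is removed before the agent process is even spawned, rather than trying to filter what the agent is allowed to log.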


Your app runs in the app context, which is not accessible to the AI.

You don't let your agent look at logs? How can it debug?

Are you using 16bit for inference? How many tokens/second if you use 8bit?

Given that SOTA models now use 4bit inference, can you do an estimation for 4bit + Blackwell?


Hi! This benchmarking was done w/ DeepSeek-V3's published FP8 weights. And Blackwell performance is still being optimized. SGLang hit 14k/s/B200 though, pretty cool writeup here: https://lmsys.org/blog/2025-09-25-gb200-part-2/

I think they're also running this at 16 bit quant. If they lower it to 8bit, they might double their output which might come out to be 11 cents per million tokens.

Now take into account that modern LLMs tend to use 4-bit inference, and that Blackwell is significantly more optimized for 4-bit, and we could see much less than 11 cents. Maybe a 5x speedup using 4-bit on Blackwell vs 8-bit on an H100?

So we're looking at potentially 2.2 cents per million tokens.
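For what it's worth, here's the back-of-envelope arithmetic behind those numbers as a sketch; the baseline and speedup factors are my assumptions, not measurements:

  // Back-of-envelope sketch of the numbers above; the speedup factors are
  // assumptions, not measurements.
  const cost16bitH100 = 0.22;             // $/1M tokens, implied 16-bit H100 baseline
  const cost8bitH100 = cost16bitH100 / 2; // ~2x throughput at 8-bit -> ~$0.11
  const blackwell4bitSpeedup = 5;         // guessed: 4-bit on Blackwell vs 8-bit on H100
  const cost4bitBlackwell = cost8bitH100 / blackwell4bitSpeedup; // ~$0.022 / 1M tokens

  console.log({ cost8bitH100, cost4bitBlackwell }); // { cost8bitH100: 0.11, cost4bitBlackwell: 0.022 }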


  Sky-high credit card interest rates do not reflect supply and demand. Instead, they mostly reflect business practices that victimize consumers. As research reported by the Federal Reserve Bank of New York documents, credit card companies spend vast sums on marketing. Once they have pulled customers in, they then use their market power to charge exorbitant interest rates.
So why don't businesses offer an alternative with far lower interest rates, and why hasn't such an alternative displaced high credit card rates? It seems like a slam-dunk business case if Paul Krugman is right here. No one wants to pay a 20% credit card interest rate if they have a viable alternative that charges 7%.

My sense is that it is driven by supply and demand. There are probably reasons why no other mainstream business can offer lower rates. Credit card interest payers are likely very risky borrowers, so credit card companies need to charge high interest rates.


Eventually, vibe coding will need a rebrand because it won't be "vibing". It will or already is the main way code is written in 2026.

Models are becoming less like commodities. They're differentiating with strengths and weaknesses. When Chinese labs gain more traction, they will stop releasing their models for free. At that point, everyone who wants SOTA models will have to pay.

Having to pay has nothing to do with a good being a commodity. I have to pay for sugar, but there is no big difference between brands that justifies any of them commanding a monopoly rent, so sugar is a commodity. The same is more or less true of LLMs right now, and unless someone comes up with a new paradigm beyond the transformer architecture, there is no reason to believe this commoditization trend is going to be reversed.

Most of the differentiation is happening on the application/agent layer. Like Coworker.

The rest of it, is happening on post-training. Incremental changes.

We are not talking about EUV lithography here. There are no substantial moats built from years of pure and applied research protected by patents.


Normal software has way less moat than SOTA labs.

SOTA AI models can have different architectures, vastly different compute in training, different ways of inferencing, different input data, different RL, and different systems around the model. Not to mention the significant personal user data that OpenAI is collecting.

Saying SOTA AI models are like sugar is insane.


What matters are the results. I'd say that Chinese companies are probably getting a better ROI from their projects using models a few months behind Western models, because any deficiency in the models is offset by better engineering and better market fit of what they are building on top of those agents.

Chinese companies like Alibaba just came out and said they're falling farther behind American companies due to lack of compute.

Compute is a moat.


Strange considering all the compute in the world is built over there in South East Asia.

Not just the rest of the sentence. In my opinion, autocorrect desperately needs to take into account the context of the current screen.

There are many times I want to type the same word that is already on the app screen but it autocorrects me to something completely different.


And they could add predictive text to other languages too; it's not rocket science.

The current system suggests words I have never used, will never use and have never heard before instead of the obvious choice.


Seems like there is a moat after all.

The moat is talent, culture, and compute. Apple doesn't have any of these 3 for SOTA AI.


It is more like Apple has no need to spend billions on training with questionable ROI when it can just rent from one of the commodity foundation model labs.

I don't know why people automatically jump to Apple's defense on this... They absolutely did spend a lot of money and hire people to try this. They 100% do NOT have the open and bottom-up culture needed to pull off large-scale AI and software projects like this.

Source: I worked there


Well, they stopped.

Culture is overrated. Money talks.

They did things far more complicated from an engineering perspective. I am far more impressed by what they accomplished alongside TSMC with Apple Silicon than by what AI labs do.


Is Apple silicon really that impressive compared to LLMs? Take a step back. CPUs have been getting faster and more efficient for decades.

Google invented the transformer architecture, the backbone of modern LLMs.


> Google invented...

"Google" did? Or humans who worked there and one who didn't?

https://www.wired.com/story/eight-google-employees-invented-...

In any case, see the section on Jakob Uszkoreit, for example, or Noam Shazeer. And then…

> In the higher echelons of Google, however, the work was seen as just another interesting AI project. I asked several of the transformers folks whether their bosses ever summoned them for updates on the project. Not so much. But “we understood that this was potentially quite a big deal,” says Uszkoreit.

Worth noting the value of “bosses” who leave people alone to try nutty things in a place where research has patronage. Places like universities, Xerox, or Apple and Google deserve credit for providing the petri dish.


You can understand how transformers work from just reading the Attention is All You Need paper, which is 15 pages of pretty accessible DL. That's not the part that is impressive about LLMs.

It’s such a commodity that there are only 3 SOTA labs left and no one can catch them. I’m sure it’ll be consolidated further in the future and you’re going to be left with a natural monopoly or duopoly.

Apple has no control over the most important change in tech. They've ceded control to Google.


> It’s such a commodity that there are only 3 SOTA labs left and no one can catch them.

No one can outpace them in improving the SOTA, but everyone can catch up to them. Why are open-weight models perpetually 6 months behind the SOTA? Given enough data harvested from SOTA models, you can eventually distill them.

The biggest differentiator when training better models are not some new fancy architectural improvements (even the current SOTA transformer architectures are very similar to e.g. the ancient GPT-2), but high quality training data. And if your shiny new SOTA model is hooked into a publicly available API, guess what - you've just exposed a training data generator for everyone to use. (That's one of the reasons why SOTA labs hide their reasoning chains, even though those are genuinely useful for users - they don't want others to distill their models.)
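To make that concrete, here is a minimal sketch of the "training data generator" point; the endpoint, auth header, and response shape are hypothetical placeholders, not any real lab's API:

  // Hypothetical sketch: harvesting prompt/completion pairs from a public
  // SOTA API to build a distillation dataset. Endpoint and response shape
  // are placeholders, not a real API.
  import { writeFile } from "node:fs/promises";

  const prompts = ["Explain TCP slow start.", "Summarize this diff: ..."];
  const pairs: { prompt: string; completion: string }[] = [];

  for (const prompt of prompts) {
    const res = await fetch("https://api.example-sota-lab.com/v1/chat", {
      method: "POST",
      headers: { Authorization: "Bearer <key>", "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const { completion } = await res.json(); // assumed response shape
    pairs.push({ prompt, completion });
  }

  // One JSON object per line: a common fine-tuning / distillation format.
  await writeFile("distill.jsonl", pairs.map((p) => JSON.stringify(p)).join("\n"));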


Four. You forgot xAI. And that's ignoring the Chinese labs.

Chinese labs aren’t SOTA due to lack of compute.

Yes I forgot xAI. So 4 left. I’m betting that there will be one or two dominant ones in next 10 years. Apple won’t be one of them.


Really, don't take benchmarks as gospel. Chinese models are pretty much competitive with offerings from Anthropic, OpenAI or Google. Meta is currently at a disadvantage, but I believe they will find their mojo and soon be competitive again.

Frankly, a lot of the time I prefer using GLM 4.6 running on Cerebras Inference to dealing with the performance hiccups from Claude. For most practical purposes, I've seen no big penalty in using it compared to Opus 4.5; even the biggest qwen-coder models are pretty much competitive.

Between me and the company I work for, I spend some serious money on AI. I use it extensively in my main job, on two side projects that I have paying customers for, and for graduate school work. I can tell you that there are quite a few more SOTA models around than what the benchmarks tell you.


Is it that surprising? They're a hardware company, after all.
