Humanoids are much cheaper than a car or an EV to manufacture at scale, so the economics of humanoids are potentially very scalable and efficient. Solid-state batteries are remarkably dense too, and battery swapping via docking stations has already been implemented in some models.
It's a hard problem, but deep learning is very scalable and general, and the pressure to solve general robotics is very strong in both China and the US, given the demographic shifts. I think the proliferation of humanoids is a near certainty over the next 8 years; of course it won't be uniform, and licensed labor won't be replaced.
Note that we are only starting to see DL data scaling in robotics (at a much smaller scale than for LLMs); almost all previous research was done with very small robot fleets.
I think scaling data collection to industrial-sized robot fleets will lead to rapid progress on various general robotics capabilities.
OK, but can we get into the nuts and bolts of what we actually want these robots to do?
Because every time I think of something, either an existing industrial setup can or will do it better, or a special-purpose device will beat it.
So general intelligence + general form factor (humanoid) sounds great, if feasible. But what will it do exactly? And then let's do a reality check on said application.
There are high-quality linear or linear-ish attention implementations for context lengths around 100k to 1M tokens. The cost of context can be made linear and moderate, and it can be improved further by implementing prompt caching and passing the savings on to users. GPT-5.2-xhigh is good at this, and in my experience it has markedly higher intelligence and accuracy than Opus 4.5, while enjoying a lower price per token.
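For the curious, here's a minimal NumPy sketch of the kernelized linear-attention idea (in the style of Katharopoulos et al., 2020). The feature map and shapes are illustrative assumptions, not any particular lab's production implementation:

```python
import numpy as np

def elu_plus_one(x):
    # Positive feature map phi(x) = elu(x) + 1, a common choice for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Full-sequence (non-causal) linear attention.

    Softmax attention costs O(n^2 * d); replacing exp(q . k) with
    phi(q) . phi(k) lets us compute the (K^T V) term once in O(n * d^2),
    so cost grows linearly in sequence length n.
    """
    Qf, Kf = elu_plus_one(Q), elu_plus_one(K)  # (n, d) each
    KV = Kf.T @ V                              # (d, d) summary of the whole context
    Z = Qf @ Kf.sum(axis=0)                    # (n,) normalizer, positive since phi > 0
    return (Qf @ KV) / Z[:, None]              # (n, d)

# Toy usage
rng = np.random.default_rng(0)
n, d = 1024, 64
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (1024, 64)
```

The design point: the whole prompt compresses into the fixed-size `KV` state rather than a KV cache that grows with length, which is also why caching and reusing long prompts becomes cheap.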
Monetary policy, the software tax, the post-COVID hiring glut, pervasive mental health issues among HR professionals. For older pros there is also age discrimination. And there is the underestimated factor of hiring by committee, which more and more commonly disguises ethnic nepotism in hiring decisions.
I think that’s a fair list, and it highlights how much of the process sits outside the candidate’s control.
Macro forces, internal incentives, and human bias all stack on top of each other, and the candidate only sees the outcome, not the cause.
What feels particularly hard is that all of these factors collapse into a single signal for the job seeker: a rejection with no explanation.
From your perspective, which of these has the biggest impact in practice, and which ones do you think are most invisible to candidates going through the process?
Really, I found 5.2 to be a rather weak model. It constantly gives me code that doesn't work and gets simple APIs wrong. Maybe it's just weak on the domain I'm working in.
If you are 40 and haven't transitioned from line employee to manager or small shareholder, your trajectory is one of jaded sadness. I write this for those who are still young enough to read and listen.
Almost all of the couple-hundred employees laid off at my company in the past year have been managers.
For me, I paid off all my debts, and I'm reducing my spending to build up a big stockpile to weather a rough period or large salary decrease. TBH I'd rather find other kinds of work than lean into AI tooling. It's so boring & demoralizing.
This happened with all manner of engineering in America. Industry is power-driven, and those who remain pure workers have no protected place to stand. At the same time, massive fortunes were made, and many, many companies died. It's not a static environment.
NLL loss and the large-batch training regime inherently bias the model toward learning a "modal" representation of the world, and RLHF further collapses entropy, especially as it is applied at most leading labs.
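A toy illustration of the entropy-collapse point, under the standard view of RLHF as KL-regularized reward tilting (p_rlhf(y) proportional to p_base(y) * exp(r(y) / beta)); the distribution and rewards below are made-up numbers, not measurements from any real model:

```python
import numpy as np

def entropy(p):
    # Shannon entropy in nats
    return -np.sum(p * np.log(p + 1e-12))

# Toy next-token distribution learned by NLL: it matches data frequencies.
p_base = np.array([0.40, 0.25, 0.15, 0.10, 0.10])

# KL-regularized RLHF tilts mass toward high-reward tokens; a smaller
# beta (weaker KL penalty) means a sharper tilt toward the mode.
r = np.array([1.0, 0.2, 0.1, 0.0, 0.0])  # hypothetical per-token rewards
for beta in (10.0, 1.0, 0.1):
    p = p_base * np.exp(r / beta)
    p /= p.sum()
    print(f"beta={beta:5.1f}  entropy={entropy(p):.3f}  top-token mass={p.max():.2f}")

# Entropy falls as beta shrinks: the policy concentrates on the single
# reward-maximizing (modal) continuation instead of the full distribution.
```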