sinenomine's comments | Hacker News

Humanoids are much cheaper than a car or an EV to manufacture at scale; the economics of humanoids are potentially very scalable and efficient. Solid-state batteries are remarkably dense too, and battery replacement via docking stations has already been implemented in some models.


The hardware is great and can definitely scale. That's why, as a caveat, I think teleoperation is a good general-purpose application cluster for these.

But I really struggle to come up with any other economically viable short-term use cases, even with great hardware...


It's a hard problem, but deep learning is very scalable and general, and the pressure to solve general robotics is very strong in both China and the US, given the demographic shifts. I think the proliferation of humanoids is a near certainty over the next 8 years; of course it won't be uniform, and licensed labor won't be replaced.

Note that we are only starting to see DL data scaling in robotics (much smaller than for LLMs); almost all previous research was done with very small robot fleets.

I think scaling data from industrial-sized robot fleets will quickly lead to solutions for various general robotics capabilities.


Ok but can we get into the nuts and bolts of what we actually want these robots to do?

Because every time I think of something, either an existing industrial setup can or will do it better, or a special-purpose device will beat it.

So general intelligence + general form factor (humanoid) sounds great, if feasible. But what will it do exactly? And then let's do a reality check on said application.


There are high-quality linear or linear-ish attention implementations for scales around 100k... 1M tokens. The price of context can be made linear and moderate, and it can be improved further by implementing prompt caching and passing the savings to users. GPT-5.2-xhigh is good at this and, in my experience, has markedly higher intelligence and accuracy than Opus 4.5, while enjoying a lower price per token.
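For intuition on why linear attention keeps long contexts cheap, here's a minimal numpy sketch of a kernelized variant (in the spirit of Katharopoulos et al.); the feature map and the toy sizes are illustrative assumptions, not any particular lab's implementation:

    import numpy as np

    def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
        # Softmax attention materializes an (n, n) score matrix; here we
        # accumulate the (d, d) summary phi(K)^T V once and reuse it for
        # every query, so compute and memory grow linearly in length n.
        Qf, Kf = phi(Q), phi(K)          # (n, d) feature-mapped queries/keys
        KV = Kf.T @ V                    # (d, d) shared summary
        Z = Qf @ Kf.sum(axis=0)          # (n,) per-query normalizer
        return (Qf @ KV) / Z[:, None]

    n, d = 4096, 64                      # toy sizes; the point is O(n*d) cost
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
    print(linear_attention(Q, K, V).shape)   # (4096, 64)

Prompt caching is complementary: if the summary for an unchanged prefix is reused, you pay per new token rather than per total context.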


Monetary policy, the software tax, the post-COVID hiring glut, pervasive mental health issues among HR professionals. For older pros there is also age discrimination. And there is the underestimated factor of hiring by committee, which more and more commonly disguises ethnic nepotism in hiring decisions.


I think that’s a fair list, and it highlights how much of the process sits outside the candidate’s control.

Macro forces, internal incentives, and human bias all stack on top of each other, and the candidate only sees the outcome, not the cause. What feels particularly hard is that all of these factors collapse into a single signal for the job seeker, a rejection with no explanation.

From your perspective, which of these has the biggest impact in practice, and which ones do you think are most invisible to candidates going through the process?


GPT-5.2 has radically changed my outlook on OpenAI. Head and shoulders above others.

The excellence is there.


I'm also a happy customer.

But one thing has been consistent for the past 3 years: after every release from any of the serious competitors, the hype can go either way.

As far as the hype cycles go, OpenAI is oscillating between "Best model ever" and "What a letdown, it's over" at least twice a year.

The competition is fierce, and a never-ending marathon of all the players getting ahead just a bit. No clear long-term winner.


5.2 is good. But at this point every few months company A trumps company B with a new “SOTA” (for some definition of SOTA).

OpenAI has no real moat. Anthropic is focusing on developers as a clear target, and Gemini has the backing of Google.

I don’t see OpenAI winning the AI race with marginally better models and arguably a nicer UI/UX (ymmv, but I do like the ChatGPT app experience).

That said, my usage decreases month over month.


Really, I found 5.2 to be a rather weak model. It constantly gives me code that doesn't work and gets simple APIs wrong. Maybe it's just weak on the domain I'm working in.


It hasn't done great in the head-to-head comparisons I've seen, and has trailed in several areas on the leaderboards (though not all).

Based on the 'code red' they declared, this model seems to have been rushed a bit.


AGI, Fusion/solar, Robotics

Fixed it for you


If you are 40 and haven't transitioned from a line employee to a manager or a small shareholder, your trajectory is one of jaded sadness. I write this for those who are still young enough to read and listen.


Almost all of the couple-hundred employees laid off at my company in the past year have been managers.

For me, I paid off all my debts, and I'm reducing my spending to build up a big stockpile to weather a rough period or large salary decrease. TBH I'd rather find other kinds of work than lean into AI tooling. It's so boring & demoralizing.


This is the wrong advice in my view. Senior engineers are the ones more empowered by AI than anyone else, provided they update their skills.


What a world we will live in when everyone with skill and experience has moved into management. Then who will be doing all the work?


This happened with all manner of engineering in America. Industry is power-driven, and being only a worker does not secure you a place to stand. At the same time, massive fortunes were made, and many, many companies died. It's not a static environment.


Did they prove the causal relationship, in particular ruling out genetic confounding?


You can find out if you read the whole article.


Have you tried Kimi K2, DeepSeek R1, and Qwen?


NLL loss and the large-batch training regime inherently bias the model to learn a "modal" representation of the world, and RLHF additionally collapses entropy, especially as it is applied at most leading labs.
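To make the entropy-collapse point concrete, here's a toy numpy illustration using the textbook closed-form optimum of the KL-regularized RLHF objective, pi*(y) proportional to p_ref(y) * exp(r(y)/beta); the distributions and rewards are made up:

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    # A multimodal reference model over five candidate answers.
    p_ref = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
    reward = np.array([1.0, 0.9, 0.2, 0.1, 0.0])  # reward model favors the first modes

    # As the KL penalty beta shrinks, mass concentrates on the
    # top-reward mode and the tuned policy's entropy falls.
    for beta in (10.0, 1.0, 0.1):
        logits = np.log(p_ref) + reward / beta
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()
        print(f"beta={beta:>4}: H(pi)={entropy(pi):.3f} vs H(p_ref)={entropy(p_ref):.3f}")

The same mechanism, applied at scale with a small effective beta, is one plausible reading of why RLHF'd models sound so "modal".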

