Hacker News | throw1235435's comments

The question is how rapid the adoption is. The price of failure in the real world is much higher ($$$, environmental, physical risks) vs just "rebuild/regenerate" in the digital realm.

Military adoption is probably a decent proxy indicator - and they are ready to hand the kill switch to autonomous robots

Maybe. There, the cost of failure is again low. It's easier to destroy than to create. Economic disruption to workers will take a bit longer, I think.

Don't get me wrong; I hope that we do see it in physical work as well. There is more value to society there, and it consists of work that is risky and/or hard to do - and usually needed (food, shelter, etc). It also means that the disruption is an "everyone" problem rather than something that just affects those "intellectual" types.


How many pay? And of those, how many are willing to pay enough to at least cover the inference costs (i.e. not loss-leading)?

Outside the verifiable domains I think the impact is more assistance/augmentation than outright disruption (i.e. a novelty, which is still nice). A tiny bit of value sprinkled over a very large user base, with each person deriving little value overall.

Even where they use it as search, it is at best an incremental improvement on what they used to do - not life changing.


I also believe this - I think it will probably just disrupt software engineering and any other digital medium with mass internet publication (i.e. things RLVR can use). For the short-term future it seems to need a lot of data to train on, and no other profession has published the same volume of verifiable material. Open source altruism has disrupted the profession in the end, just not in the way people first predicted. I don't think it will disrupt most knowledge work, for a number of reasons. Most knowledge professions have "credentials" (i.e. gatekeeping), they can see what is happening to SWEs, and they are acting accordingly. I'm hearing it firsthand, at least locally, in fields like law and even accounting. Society will ironically respect these professions more for doing so.

Any data, verifiability, rules of thumb, tests, etc. are being kept secret. You pay for the result, but don't know the means.


I mean, law and accounting usually have a “right” answer that you can verify against. I can see a test data set being built for most professions. I’m sure open source helps with programming data, but I doubt that’s even the majority of their training. If you have a company like Google, you could collect data on decades of software work, in all its dimensions, from your workforce.

It's not about invalidating your conclusion, but I'm not so sure about law having a right answer. At a very basic level - like the hypothetical conduct used in basic legal training materials or MCQs, or criminal/civil code based situations in well-abstracting Roman law-based jurisdictions - definitely. But the actual work, at least for most lawyers, is to build on many layers of such abstractions to support your/your client's viewpoint. And that level is already about persuasion of other people, not about having the "right" legal argument or applying the most correct case found. And this part is not documented well, and approaches change a lot, even if the law remains the same. Think of family law or the law of succession - it does not change much over centuries, but every day, worldwide, millions of people spend huge amounts of money and energy on finding novel ways to turn those same paragraphs to their advantage and put their "loved" ones and relatives in a worse position.

Not really. I used to think it would be more general with the first generation of LLMs, but given all progress since o1 is RL-based, I'm thinking most disruption will happen in open productive domains and not closed domains. Speaking to people in these professions, they don't think SWEs have any self-respect, and so in your example of law:

* Context is debatable/the result isn't always clear: The way to interpret it and argue your case varies (i.e. you are paying for a service, not a product)

* Access to vast training data: It's very unlikely that they will train you and hand over the data behind their practice, especially as they are already in a union-like structure/accreditation. It's like paying for a binary (a non-decompilable one) without the source code - you get the result, rather than the sources and the validation the practitioner used to get there.

* Variability of real world actors: There will be novel interpretations that invalidate the previous one as new context comes along.

* Velocity vs ability to exercise judgement: As a lawyer I'd prefer to be paid more for less velocity, since it means less judgement/less liability/less risk overall for myself and the industry. Why would I change that, even at an individual level? There's less of a tragedy-of-the-commons problem here.

* Tolerance for failure is low: You can't iterate, get feedback and try again until "the tests pass" in a courtroom, unlike with "code in a text file". You need to have the right argument the first time. AI/ML generally only works where the end cost of failure is low (i.e. you can try again and again to iron out error terms/hallucinations). It's also why I'm skeptical AI will do much in the real economy even with robots soon - failure has bigger consequences in the real world ($$$, lives, etc).

* Self-employment: There is no tension between shareholders and employees, as in your Google example - especially for professions where you must trade in your own name. Why would I disrupt myself? What I charge is my profit.

TL;DR: Gatekeeping, changing context, and arms-race behavior between participants/clients. Unfortunately I do think software, art, videos, translation, etc. are unique in that there are numerous examples online and they have the property "if I don't like it, just re-roll" -> to me RLVR isn't that efficient - it needs volumes of data to build its view. Software, sadly for us SWEs, is the perfect domain for this; and we as practitioners made it that way through things like open source, TDD, etc., and by giving it away free on public platforms in vast quantities.
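To make concrete why software is such a natural fit for RLVR, here's a toy sketch (hypothetical names, Python, assuming pytest is installed) of what a "verifiable reward" looks like: the check is literally "do the tests pass", which is cheap, automatic, and published online in huge volumes - there is no equivalent artifact for a legal argument or an audit opinion.

    import os
    import subprocess
    import tempfile

    def verifiable_reward(candidate_code: str, test_code: str) -> float:
        # Reward 1.0 if the model's code passes the given tests, else 0.0.
        # This is the cheap, re-rollable check that open source + TDD made
        # abundant for software, and that most other professions never publish.
        with tempfile.TemporaryDirectory() as d:
            with open(os.path.join(d, "solution.py"), "w") as f:
                f.write(candidate_code)
            with open(os.path.join(d, "test_solution.py"), "w") as f:
                f.write(test_code)
            result = subprocess.run(
                ["python", "-m", "pytest", "-q", "test_solution.py"],
                cwd=d,
                capture_output=True,
                timeout=60,
            )
        return 1.0 if result.returncode == 0 else 0.0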


Tbh this whole AI thing probably has a negative ROI, but it will still pay off in one sense: even if the debt is written off, the AI enhancements that this whole misallocation of capital created are now "sunk" and are here to stay - the assets and techniques have been built.

There's an element of arms race between the players, and the genie is out of the bottle now, so they have to move with it. Game theory is driving this more than economics in the short term.

Marginal gains on top of these investments probably do have a positive ROI now (i.e. for new investments made from this point).


There's also the effect of different models. Until the most recent models, especially for concise algorithms, I felt it was sometimes still easier to do it myself (i.e. a good algorithm can be as concise as, or more concise than, a lossy prompt) and leave the "expansion/repetitive" boilerplate code to the LLM. At least for me the latest models do feel like a "step change", in that the problems can be bigger and/or require less supervision per problem, depending on the tradeoff you want.

Indeed it did; I remember those times. All else being equal, I still think SWE salaries on average would have been higher if we had kept it like that, given basic economics - there would have been a lot fewer people capable of doing it, but the high-ROI automation opportunities would still have been there. The fact that "it sucked" usually creates more scarcity on the supply side, which, all else being equal, means higher wages and, in our capitalist society, status. Other, older professions, per the parent comment, already know this and don't see SWEs as very "street smart" for disrupting themselves. I've seen articles recently along the lines of "at least we aren't in coding" from law, accounting, etc. as an anecdote of this.

With AI, at least locally, I'm seeing the opposite now - less hiring, less wage pressure, and in social circles a lot less status when I mention I'm an SWE (almost sympathy for my lot vs respect only 5 years ago). While I don't care much about the status aspect (though I do care about my ability to earn money), some do.

At least locally, inflation-adjusted SWE wages in my city bought more and were generally higher relative to other professions in the 90s-2000s than onwards (ex big tech). Partly because the difficulty and low-level knowledge required meant only very skilled people could participate.


Monopolizing the work only works if you have the power to stop anyone else from joining the competition, i.e. "certified developers only".

Otherwise people would have realized they can charge 3x as much by being 5x as productive with better tools while you're writing your code in notepad for maximum ROI, and you would have either adjusted or gone out of business.

Increased productivity isn't a choice, it's a result of competition. And that's a good thing overall, even if it sucks for some developers who now have to actually work for the first time in decades. But it's good for society at large, because more things can be done.


Sure - I agree with that, and I agree it's good for society, but as you state it's probably not as good for the SWE who has to work harder for the same pay, which was my point and I think you agree. Other professions have done what you have stated (i.e. certification) and seen higher wages than otherwise, which also proves my point. They see this as the "street smart" thing to do, and generally society respects them for it, putting their profession on a higher pedestal as a result. I find people generally respect those who take care of themselves first, too. Personally I think there should be a balance between the two (i.e. a fair go for all parties; a fair day's work with some job security over a standard career lifetime, but not extortionate).

Also, your notion of "better tools" may not have happened, or may have happened more slowly, without open source, AI, etc., which would most probably have meant higher salaries for longer. That's where I disagree with the parent poster's claim of higher salaries - AI seems to be a great recent example of "better tools" disrupting the premium SWEs enjoy rather than improving their salaries. Whether that's fair or not is a different debate.

I was just doubting the parent comment's notion that "open source software" and "automated testing" create higher salaries. Economically, efficiency usually (with some exceptional cases) creates lower salaries for the people who are made more efficient, all else being equal - and the value shifts from them to either consumers or employers.


> Other professions have done what you have stated (i.e. certification) and seen higher wages than otherwise which also proves my point.

I'd generally agree with that where it concerns safety (e.g. industrial control systems), but we manage that by certifying the manufacturer, not the individual developer. But otherwise I think it's harmful to society, even if beneficial to the individuals - though there are a lot of things that fall into that bucket, and they're usually not the things we strive for at a societal level.

In my experience, getting better and faster has always translated into being paid more. I don't know that there's a direct relationship to specific tools, but I'm pretty sure that the mainstreaming of software development has caused the huge inflation of total comp that you see in many companies. If it were slow and only a handful of people could do it, but they weren't really adding a huge amount of value, you wouldn't be seeing that kind of multiplier vs the average job.


> But otherwise I think it's harmful to society, even if beneficial to the individuals

I disagree a little, in that stability/predictability for people also adds some benefit to society - constant disruption/change for the sake of efficiency, at extreme levels, would I believe be bad for mental health at the very least, and would probably cause some level of outrage and dysfunction. Tbh, as an SWE I know I'm feeling a bit of it - I can't imagine if it were everyone.

I personally think there is a tradeoff; people on average have limits to their adaptability within a lifetime, so it needs to be worth it for people to invest in and enter a given profession (some level of economic profit that makes their limited time worth spending in it). It shouldn't be excessive though - it should be such that both client and producer get fair/equal value for the time/effort they each need to put in.


> ex big tech

I mean, this seems like a pretty big thing to leave out, no? That's where all the crazy high salaries were!

Also, there are still legacy places that more or less build software like it's 1999. I get the impression that embedded, automotive, and such still rely a lot on proprietary tools, finicky manual processes, low level languages (obviously), etc. But those are notorious for being annoying and not very well paid.


I'm talking about what I perceive to be the median salary/conditions, with big tech being only a part of that. My point is more that, back in that period, good salaries could be had outside big tech too, even in the boring standard companies you mention. I remember banks, insurers, etc. paying very well for an SWE/tech worker compared to today - the good opportunities seemed more distributed. For example, contract rates for some of the developers we hire haven't really changed in 10 years. Now, at best, they are on par with other professional white-collar workers, and the competition seems fiercer (e.g. 5 interviews for a similar salary, with leetcode games rather than experience-based interviews).

Making software easier and more abstract has let less technical people into the profession, made outsourcing easier, meant more competition/interview prep to filter people out (even if those skills are not used on the job at all), produced more material for AI to train on, etc. To the parent comment's point, I don't think it has boosted salaries and/or conditions on average for the SWE - in the long run (10+ years) it could be argued that economically the opposite has occurred.


> Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.

My meaning could be in higher purposes; however, I still need a job to enable/pursue those things. If AI takes the meaning out of your craft, it also takes away the ability to use that craft to pursue higher-order principles, for most people - especially if you aren't in the US/big tech scene with significant equity to "make hay while the sun is still shining".


I doubt Jevons paradox applies to software, sadly for SWEs; or at least not in the way they hope it does. The paradox would require software projects sitting on the shelf with a decent return on investment (ROI) that aren't taken up because of a lack of resources (money, space, production capacity or otherwise). Unlike with physical goods, the only resources usually lacking are money and people, which means the only way more software gets built is for lower-value projects to start stacking up.

AI may make low-ROI projects more viable now (e.g. internal tooling in a company, or a business website), but in general the high-ROI projects - the ones that could justify high salaries - would have been done anyway.
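To put toy numbers on that (purely illustrative, made-up figures in a small Python sketch):

    # Hypothetical annual values of candidate projects vs. the cost to build them.
    project_values = [1_000_000, 400_000, 150_000, 60_000, 25_000]
    cost_before_ai = 200_000  # made-up cost of a dev team without AI
    cost_after_ai = 80_000    # made-up cost with AI assistance

    viable_before = [v for v in project_values if v > cost_before_ai]
    viable_after = [v for v in project_values if v > cost_after_ai]

    print(viable_before)  # [1000000, 400000] -> these got built pre-AI anyway
    print(viable_after)   # [1000000, 400000, 150000] -> the "extra" demand is
                          # the lower-value tail, which can't justify the old
                          # salary premium

So cheaper production does unlock more projects, just not the kind that were ever going to fund big salaries.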


If we had done what you say (distributed wealth more evenly between people/corporations), then more to the point I don't know if AI would have progressed as it has - companies would have been more selective with their investment money, and previously AI was seen as, at best, a long-shot bet. Most companies in the "real economy" can't afford to make many of these kinds of bets.

The main reason for the transformer architecture, and really for many other AI advancements, was that "big tech" has lots of cash it doesn't know what to do with. The US system also seems to punish dividends tax-wise, so companies are incentivized to act like VCs -> buy lots of opportunities hoping one makes it big, even if many end up losing.


Transformers grew out of the value-add side (auto-translation), though, not really the ad business side, iirc. Value-add work still gets done in high-progressive-tax societies if it's valuable to a large fraction of people. Research into luxury goods is slowed by progressive tax rates, but the border between consumer and luxury goods actually rises a bit with redistributed wealth; more people can afford smartphones earlier and almost no one buys superyachts, so reinvestment into general technology research may actually be higher.

And I'm sure none of it was based on any public research from public universities, or private universities that got public grants.

Sure. I just know that in most companies (having seen the numbers on projects in a number of them, across industries), funding projects that give people time to think, ponder, and publish white papers on new techniques is rare and hard to justify economically against other investments.

Put it this way - having a project where people have the luxury to scratch their heads for a while and bet on something that may not actually be possible yet is something most companies can't justify financing. Listening to the story of the transformer's invention, it sounds like one of those projects to me.

They may stand on the shoulders of giants, that is true (at the very least the researchers were trained in those institutions), but putting it together as it was done - that happened in a commercial setting with shareholder funds.

In addition, given the disruption LLMs have caused Google in general, I would say that, despite Gemini, it may have been better cost/benefit-wise for Google NOT to invent the transformer architecture at all/yet, or at least not to publish a white paper for the world to see. As a use of shareholders' funds, the activity above probably wasn't a wise one.


As I mentioned in another comment, they smell blood in our profession, and as entities dependent on investor/VC/seed funding rounds, they want it. There's a reason every new model that comes out has a blog post claiming "best at coding", often as the main headline - it's also a target that people outside of tech don't really care much about, IMO, unlike, say, art and writing.

Tbh if it wasn't for coding disruption I don't think the AI boom would of really been that hyped up.


> don't think the AI boom would of really been that hyped up.

For one thing, LLMs aren't terrible at grammar.

