Will companies be willing to pay more to send junk mail if it is no longer largely subsidized? In that respect it could be a good thing, assuming there isn't already a regulation against junk mail there.
The article explained it fairly well. Reiterating the credit card churning example: people will spend a lot of time optimizing their credit card spend, only to end up with maybe a few hundred dollars in savings per year. Working 10 hours of overtime a year nets more and takes less time and mental capacity, for example. But it is fine to do this anyway if you let go of the "I'm saving money" schtick and just embrace that you like maximizing points on spend.
I think that if LLMs, or our use of them, improved to the point where we became full-time design/code reviewers, many of us would leave to do something less boring. So in some ways there is a negative incentive to investigate different AI-driven workflows.
Because, just as a carpenter doesn't always make the same table but can still be tired of always making tables, I don't always write the exact same CRUD endpoints, but I am tired of always writing CRUD endpoints.
I think your analogy shows why LLMs are useful, despite being kinda bad. We need some programming tool to which we can say, "like this CRUD endpoint, but different in this and that". Our other metaprogramming tools cannot do that, but LLMs kinda can.
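To make that concrete, here's a rough sketch of the kind of near-duplicate endpoint I mean (hypothetical Flask app with made-up "users" and "projects" resources): the second handler is "like this CRUD endpoint, but different in this and that", which is exactly the variation an LLM can be asked to produce.

    # Minimal sketch: two near-identical CRUD endpoints that differ only in
    # resource name, one extra field, and one validation rule.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    db = {"users": {}, "projects": {}}  # stand-in for a real datastore

    @app.post("/users")
    def create_user():
        body = request.get_json()
        db["users"][body["id"]] = {"name": body["name"], "email": body["email"]}
        return jsonify(db["users"][body["id"]]), 201

    # "Like the endpoint above, but different": same shape, different resource,
    # an owner_id field, and a required-field check.
    @app.post("/projects")
    def create_project():
        body = request.get_json()
        if not body.get("owner_id"):
            return jsonify({"error": "owner_id is required"}), 400
        db["projects"][body["id"]] = {"name": body["name"], "owner_id": body["owner_id"]}
        return jsonify(db["projects"][body["id"]]), 201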
I think now that we have identified this problem (programmers need more abstract metaprogramming tools) and a sort of practical engineering solution (train LLMs on code), it's time for researchers (in the nascent field of metaprogramming, aka applied logic) to recognize this and create some useful theories that will help guide it.
In my opinion, it should lead to the adoption of richer (more modal and more fuzzy) logics in metaprogramming, beyond just the typed lambda calculus on which our current programming languages are based. That way, we will be able to express and handle uncertainty (e.g. have a model of what constitutes a CRUD endpoint in an application) in a controlled and consistent way.
This is similar to how programming is evolving from imperative with crude types into something more declarative with richer types. (Roughly, the types are the specification and the code is the solution.) With a good set of fuzzy type primitives, it would be possible to define a type of "CRUD endpoint" and then answer the question of whether a given program has that type.
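As a very crude, non-fuzzy approximation of that idea, you can already write a structural "CRUD handler" type today and ask whether a given object satisfies it. The sketch below uses Python's typing.Protocol with made-up names; where it answers yes/no, the fuzzy version described above would answer with a degree of fit.

    # Crude structural "CRUD handler" type: isinstance() only checks that the
    # methods exist, where a fuzzy type would return a degree of membership.
    from typing import Any, Protocol, runtime_checkable

    @runtime_checkable
    class CrudHandler(Protocol):
        def create(self, data: dict) -> Any: ...
        def read(self, key: str) -> Any: ...
        def update(self, key: str, data: dict) -> Any: ...
        def delete(self, key: str) -> None: ...

    class UserStore:
        # Hypothetical handler; it satisfies the shape without inheriting from it.
        def __init__(self):
            self._rows: dict[str, dict] = {}
        def create(self, data: dict) -> dict:
            self._rows[data["id"]] = data
            return data
        def read(self, key: str) -> dict:
            return self._rows[key]
        def update(self, key: str, data: dict) -> dict:
            self._rows[key].update(data)
            return self._rows[key]
        def delete(self, key: str) -> None:
            del self._rows[key]

    print(isinstance(UserStore(), CrudHandler))  # True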
Because in practice the API endpoint isn't what takes up the time or LOC, but what's underneath. In fact, there are plenty of solutions for, e.g., exposing your database / data storage through an API directly. But that's rarely what you really want.
Leaky abstractions. Lots of metaprogramming frameworks have tried to do this over the years (take out as much crud as possible), but it always ends up that there is some edge case your unique program needs that isn't handled, and then it is a mess to try to hack the metaprogramming layer to add what you need. Think of all the hundreds of frameworks that try to add an automatic REST API to a database table, but then you need permissions, domain-specific logic, special views, etc., and it ends up just being easier to write it yourself.
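Django REST Framework is a good example of where that lands in practice: a ModelViewSet gives you the list/create/retrieve/update/delete machinery for free, and the permissions and domain-specific logic still get written by hand as overrides. Rough sketch, with a hypothetical Invoice model, serializer, and ownership rule:

    # The framework generates the CRUD; the parts it can't guess are hand-written.
    from rest_framework import permissions, viewsets

    from myapp.models import Invoice                  # hypothetical model
    from myapp.serializers import InvoiceSerializer   # hypothetical serializer

    class InvoiceViewSet(viewsets.ModelViewSet):
        serializer_class = InvoiceSerializer
        permission_classes = [permissions.IsAuthenticated]  # the permissions you bolt on

        def get_queryset(self):
            # Domain-specific rule the generic CRUD machinery can't know about.
            return Invoice.objects.filter(owner=self.request.user)

        def perform_create(self, serializer):
            # Another custom bit: stamp the owner instead of trusting the client.
            serializer.save(owner=self.request.user)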
If you can imagine an evolutionary function oscillating over time between no abstraction and total abstraction, the current batch of frameworks like Django and others are roughly the local maximum that was settled on. Enough to do what you need, but it doesn't do too much, so it's easy to customize to your use case.
Yeah, I don't understand this comparison. I've programmed professionally for years in higher-level languages, never learned assembly, and never got stuck because the higher-level language was limited or doing something wrong.
Whenever I use an LLM I always need to review its output, because usually there is something not quite right. For context, I'm using VS Copilot, mostly in ask and agent mode, in a large brownfield project.