
Like it, a lot. I think the future of software is going to be unimaginably dynamic. Maybe apps will not have statically defined feature sets; they will adjust themselves around what the user wants and the data they have access to. I’m not entirely sure what that looks like yet, but things like this are a step in that direction.

> I think the future of software is going to be unimaginably dynamic.

>...I’m not entirely sure what that looks like yet, but things like this are a step in that direction.

This made me stop and think for a moment about what this would look like as well. I'm having trouble finding it, but I think there was a post by Joe Armstrong (of Erlang) that talked about globally (as in across system boundaries, not global as in a global variable) addressable functions?


Not sure if I've read such an article, but it would be a reasonable next step from the globally addressable processes of the BEAM VM.

As I understand it, Unison tries to do something like that, but I might be wrong.

https://www.unison-lang.org/


I would say it varies from 0x to a modest 2x. It can help you write good code quickly, but I only spent about 20-30% of my time writing code anyway before AI. It definitely makes debugging and research tasks much easier as well. I would confidently say my job as a senior dev has gotten a lot easier and less stressful as a result of these tools.

One other thing I have seen, however, is the 0x case, where you have given too much control to the LLM, it codes both you and itself into Pan’s Labyrinth, and you end up having to take a weed whacker to the whole project or start from scratch.


Ok, if you're a senior dev, have you 'caught' it yet?

Ask it a question about something you know well, and it'll give you garbage code that it's obviously copied from an answer on SO from 10 years ago.

When you ask it for research, it's still giving you garbage out of date information it copied from SO 10 years ago, you just don't know it's garbage.


That's why you don't use LLMs as a knowledge source without giving them tools.

"Agents use tools in a loop to achieve a goal."

If you don't give them any tools, you get hallucinations and half-truths.

But give one a tool to do, say, web searches and it's going to be a lot smarter. That's where 90% of the innovation with "AI" today is coming from. The raw models aren't getting that much smarter anymore, but the scaffolding and frameworks around them are.

Tools are the main reason Claude Code is as good as it is compared to the competition.
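
For what it's worth, "tools in a loop" is simple enough to sketch. Here's a minimal, hypothetical version in Python; the llm and web_search callables are placeholders for whatever model and search backend you plug in, not any particular vendor's API:

    from typing import Callable

    def run_agent(goal: str,
                  llm: Callable[[str], str],
                  web_search: Callable[[str], str],
                  max_steps: int = 10) -> str:
        # Hypothetical agent loop: the model either asks for a search,
        # gives a final answer, or keeps "thinking" until the step budget runs out.
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            action = llm("\n".join(history))  # model decides the next step
            if action.startswith("SEARCH:"):
                query = action.removeprefix("SEARCH:").strip()
                history.append(f"Observation: {web_search(query)}")  # feed results back in
            elif action.startswith("ANSWER:"):
                return action.removeprefix("ANSWER:").strip()
            else:
                history.append(f"Thought: {action}")
        return "No answer within the step budget."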


> The raw models aren't getting that much smarter anymore, but the scaffolding and frameworks around them are.

Yes, that is my understanding as well, though it gets me thinking: if that is true, then what real value does the LLM on the server have compared to doing that locally + tools?


You still can't beat an acre of specialized compute with any kind of home hardware. That's pretty much the power of cloud LLMs.

For a tool-use loop, local models are getting to "OK" levels; when they get to "pretty good", most of my own stuff can run locally, since it's basically just coordinating tool calls.


Of course, step one is always to think critically and evaluate for bad information. For research, I mainly use it for things that are testable/verifiable; for example, I used it for a tricky proxy chain setup. I did try to use it to learn a language a few months ago, which I think was counterproductive for the reasons you mentioned.


How can you critically assess something in a field you're not already an expert on?

That Python you just got might look good, but it could be rewritten from 50 lines to 5: it's written in 2010 style, it's not using modern libraries, it's not using modern syntax.

And it is 50 to 5. That is the scale we're talking about in a good 75% of AI-produced code unless you challenge it constantly. Not using modern syntax to reduce boilerplate, over-guarding against impossible state, ridiculous amounts of error handling. It is basically a junior dev on steroids.

Most of the time you have no idea that most of that code is totally unnecessary unless you're already an expert in that language AND the libraries it's using. And you're rarely an expert in both, or you wouldn't even be asking, since it would have been quicker to write the code than to even write the prompt for the AI.


I use web search (DDG) and I don’t think I’ve ever tried more than one query in the vast majority of cases. Why? Because I know where the answer is; I’m using the search engine as an index to where I can find it. Like “csv python” to find that page in the docs.


I’ve been using the following pattern since GPT-3; the only thing I have changed since is adding another parameter for a schema for structured output. People really love to overcomplicate things.

    from typing import Protocol

    class AI(Protocol):
        def call_llm(self, prompt: str) -> str: ...
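
For illustration only, a backend satisfying that protocol might look something like the sketch below (assuming the current OpenAI Python SDK; any object that maps a prompt string to a completion string fits, since Protocol uses structural typing):

    from openai import OpenAI

    class OpenAIBackend:
        """One possible (hypothetical) implementation of the AI protocol."""

        def __init__(self) -> None:
            self.client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def call_llm(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content or ""

    ai: AI = OpenAIBackend()  # satisfies the Protocol structurally, no inheritance needed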


100%. Gemini + Pydantic; you don’t need a wrapper library in 2025.
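
For context, the pattern being referred to is roughly the sketch below, using the google-genai SDK with a Pydantic model as the response schema; the model name and exact config field names are assumptions and may differ across SDK versions:

    from pydantic import BaseModel
    from google import genai

    class Answer(BaseModel):
        summary: str
        confidence: float

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    resp = client.models.generate_content(
        model="gemini-2.0-flash",            # assumed model name
        contents="Summarize why the sky is blue.",
        config={
            "response_mime_type": "application/json",
            "response_schema": Answer,       # Pydantic model used as the schema
        },
    )
    answer = Answer.model_validate_json(resp.text)  # validated, typed output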


On one hand, I would like to say this could happen to anyone. On the other hand, what the F?? Why are people passing around a dataset that contains child sexual abuse material?? And on another hand, I think this whole thing just reeks of techy bravado, and I don’t exactly blame him. If one of the inputs to your product (OpenAI, Google, Microsoft, Meta, X) is a dataset that you can’t even say for sure does not contain child pornography, that’s pretty alarming.


Eh I’ll take my 78, someone else can have the rest.


Lol! This is probably my sneaky number one productivity benefit from LLMs; I would never want to go back to writing shell scripts pre-LLM. So many hours wasted debugging and deciphering Stack Overflow over the years, dropping &, $, [[]], “”, |, <> in different places hoping to get my shitty scripts working. Like, conceptually I understand shell scripting very well, but nobody can argue that bash isn’t footgun central.


what


Some real honest and actionable advice here. I think the natural course for intelligent people who enjoy crafting things is very much in conflict with the real world. We care about the things we are building because we see them as an extension of ourselves. Anxiously perfecting our creations in a safe place, obsessing over ever smaller details of finished portions; working on detailing while ignoring the missing half of the ship. It’s an ego thing. We see these things as pieces of ourselves, we’re afraid that the world won’t accept them, and by extension us. It’s not real though; nothing and nobody is perfect, and it’s okay.

I have a deep feeling that I can “do it myself”, yet I work for companies because deep down I like the anonymity and the safety of it; at a big company we get to be part of something established, and we don’t have to show our own faces to the world.


It's a thing I had to get over, because in the end, people were quite content with what I released and no one minded if I shipped an extra 25kb to the browser or did not have consistently rounded corners.

I learned that lesson again with art. You have to frequently zoom out and see if the entire picture works, or otherwise you make highly detailed turds.


I’m also wondering how he is able to see calls to AI providers directly in the browser. Client-side API calls? That’s strange to me. Also, how is he able to peer into the RAG architectures? I don’t get that; maybe GPT-4.1 allows unauthenticated requests? Is there an OAuth setup that allows client-side requests to OpenAI?


Yea I just posted a similar comment. I'm sure some websites just skin OpenAI/Claude etc, but ALL of them? It makes no sense.


Python and Rust are probably the front runners


I think what the grandparent meant was one human language (e.g. English) and one machine language (e.g. Python).

