
I’m reading your comment more sardonically about the state of US manufacturers, but globally I don’t think this holds up. Some of the most expensive vehicles—trucks and SUVs—have the worst mileage. Often the cheaper a new car is, the better gas mileage it gets.

This dynamic might not hold as consistently with used cars but it’s not entirely eliminated, either.


What does this have to do with mileage? It's about being able to afford the damn car itself.

Compare the same category of car in gasoline and EV versions. See how the EV adds 10k to the price.

Not much by Silicon Valley latte standards, of course. A lot by "I can barely afford a barebones Renault" standards, though.


    Compare the same category of car in gasoline and EV versions.
Have you looked at the bulk of cars people are buying new? They're $60K+ trucks and SUVs. Those same people could be buying EVs today.

I did say something about SV lattes, didn't I? Not everyone is over there.

Plus, how much is the EV version of a $60k truck? $75k? More?


Go is an extremely cynical language in this regard.

I’m looking forward to when people realize that the agents stay more focused when feedback is provided more frequently, with adjustments to the spec made after every round of feedback, i.e., agile.

Doesn’t seem like slow-walking so much as a mad-rush. I saw a Kalshi ad on tv last night.

Exactamundo.

I dunno, moneyed interests leveraging the newfound scale of propaganda through internet monopolies might also bear some of the blame.

“It’s the memory, stupid!” So wrote Richard Sites, lead designer of the famous DEC Alpha chip, in 1996 (http://cva.stanford.edu/classes/cs99s/papers/architects_look...). It’s rung true for 30 years.

Where C application code often suffers, though by no means always, is in its use of memory for data structures. A nice big chunk of static memory will make a function fast, but I've seen many C routines malloc memory, do a strcpy, compute a bit, and free it at the end, over and over, because there's no convenient place to retain the state. There are no vectors, no hash maps, no crates.io and cargo to add a well-optimized data structure library.
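
The pattern looks roughly like this contrived sketch (a hypothetical function, written in the C subset of C++, not taken from any real codebase):

    /* Each call allocates, copies, computes, and frees, because there is
       no convenient place to keep state between calls. */
    #include <stdlib.h>
    #include <string.h>

    size_t count_words(const char *input) {
        char *copy = (char *)malloc(strlen(input) + 1);  /* allocate on every call */
        if (!copy) return 0;
        strcpy(copy, input);                             /* copy on every call */
        size_t words = 0;
        for (char *tok = strtok(copy, " "); tok; tok = strtok(NULL, " ")) {
            ++words;
        }
        free(copy);                                      /* free on every call */
        return words;
    }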

It is for this reason that I believe Rust and C++ have an advantage over C when it comes to writing fast code: it's much easier to drop in a good data structure. To a certain extent I think C++ has an advantage over Rust, due to easier and better control over layout.
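
For comparison, a rough sketch of what "dropping in a good data structure" looks like in C++ (a made-up example, just to show the container owning its memory and keeping state across calls):

    #include <cstddef>
    #include <string>
    #include <string_view>
    #include <unordered_map>

    class WordCounter {
    public:
        // The map handles allocation and growth; no per-call malloc/free.
        void add(std::string_view word) { ++counts_[std::string(word)]; }
        std::size_t unique_words() const { return counts_.size(); }

    private:
        std::unordered_map<std::string, std::size_t> counts_;  // state lives here, reused across calls
    };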


I'd certainly agree that malloc is the Achilles heel of any real-world C. Overall, though, C++ was not a particularly good solution to memory efficiency, since having OO available made the situation look like a fast sprint to the cake shop.

Heavy Smalltalk-style OOP in C++ has kind of died out, especially with data structures. So with any templated data structure you're reducing indirection from vtables, and you have the opportunity to allocate however you want, often in contiguous slabs to ease memory transfer and caching.
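
A tiny illustration of the layout difference (hypothetical types, only to show the point):

    #include <memory>
    #include <vector>

    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };

    struct Circle : Shape {
        double r = 1.0;
        double area() const override { return 3.14159 * r * r; }
    };

    // OOP style: one heap allocation per element, vtable lookup per call.
    std::vector<std::unique_ptr<Shape>> scattered;

    // Templated style: plain values packed in one contiguous slab, no indirection.
    std::vector<Circle> contiguous;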

There should be luminaries to whom this does not apply, and tridge is certainly among that small pantheon.


It can do this at the level of a function, and that's -useful-, but like the parent reply to the top-level comment, and despite investing the time, using skills & subagents, etc., I haven't gotten it to do well with C++ or Rust projects of sufficient complexity. I'm not going to say they won't some day, but it's not today.


Anecdotally, we use Opus 4.5 constantly on Zed's code base, which is almost a million lines of Rust code and has over 150K active users, and we use it for basically every task you can think of - new features, bug fixes, refactors, prototypes, you name it. The code base is a complex native GUI with no Web tech anywhere in it.

I'm not talking about "write this function" but rather like implementing the whole feature by writing only English to the agent, over the course of numerous back-and-forth interactions and exhausting multiple 200K-token context windows.

For me personally, definitely at least 99% of the Rust code I've committed at work since Opus 4.5 came out has been from an agent running that model. I'm reading lots of Rust code (that Opus generated) but I'm essentially no longer writing any of it. If dot-autocomplete (and LLM autocomplete) disappeared from IDEs entirely, I would not notice.


Whoa, that's a very interesting claim. I was shying away from writing Rust since I'm not a Rust developer, but from your experience it sounds like Claude has gotten very good at writing Rust.


Honestly, I think the more you can give Claude a type system and effective tests, the more effective it can be. Rust is quite high up on the test strictness front (though I think more could be done...), so it's a great candidate. I also like its performance on Haskell and Go; both get you pretty great code out of the box.


Have you ever worried that by programming in this way, you are methodically giving Anthropic all the information it needs to copy your product? If there is any real value in what you are doing, what is to stop Anthropic or OpenAI or whomever from essentially one-shotting Zed? What happens when the model providers 10x their costs and also use the information you've so enthusiastically given them to clone your product and use the money that you paid them to squash you?


Zed's entire code base is already open source, so Anthropic has a much more straightforward way to see our code:

https://github.com/zed-industries/zed


That's what things like AWS Bedrock are for.

Are you worried about Microsoft stealing your codebase from GitHub?


Isn’t it widely assumed Microsoft used private repos for LLM training?

And even with a narrower definition of stealing, Microsoft’s ability to share your code with US government agencies is a common and very legitimate worry in plenty of threat model scenarios.


I just uninstalled Zed today when I realized the reason I couldn't delete a file on Windows was that it was open in Zed. So I wouldn't speak too highly of the LLM's ability to write code. I have never seen another editor on Windows make the mistake of opening files without enabling all three share modes.


Just based on timing, I am almost 100% sure whatever code is responsible was handwritten before anyone working on Windows was using LLMs...but anyway, thank you for the bug report - I'll pass it along!


The article is arguing that it will basically replace devs. Do you think it can replace you basically one-shotting features/bugs in Zed?

And also - doesn’t that make Zed (and other editors) pointless?


Trying to one-shot large codebases is an exercise in futility. You need to let Claude figure out and document the architecture first, then set up agents for each major part of the project. Doing this keeps the context clean for the main agent, since it doesn't have to go read the code each time. One agent can fill its entire context understanding part of the code, and then the main agent asks it how to do something and gets a shorter response.

It takes more work than one-shot, but not a lot, and it pays dividends.
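
For example, a subagent definition might look roughly like this (a sketch from memory with a hypothetical name and contents; check the current Claude Code docs, since the exact file location and fields may differ):

    # .claude/agents/gui-layer.md
    ---
    name: gui-layer
    description: Answers questions about the GUI code. Keeps that part of
      the codebase in its own context so the main agent doesn't re-read it.
    ---
    You own the gui/ directory. When the main agent asks how to do
    something, read the relevant modules and reply with a short summary
    plus the exact files and functions involved.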


Is there a guide for doing that successfully somewhere? I would love to play with this on a large codebase. I would also love to not reinvent the wheel on getting Claude working effectively on a large code base. I don’t even know where to start with, e.g., setting up agents for each part.


> Do you think it can replace you basically one-shotting features/bugs in Zed?

Nobody is one-shotting anything nontrivial in Zed's code base, with Opus 4.5 or any other model.

What about a future model? Literally nobody knows. Forecasts about AI capabilities have had horrendously low accuracy in both directions - e.g. most people underestimated what LLMs would be capable of today, and almost everyone who thought AI would at least be where it is today...instead overestimated and predicted we'd have AGI or even superintelligence by now. I see zero signs of that forecasting accuracy improving. In aggregate, we are atrocious at it.

The only safe bet is that hardware will be faster and cheaper (because the most reliable trend in the history of computing has been that hardware gets faster and cheaper), which will naturally affect the software running on it.

> And also - doesn’t that make Zed (and other editors) pointless?

It means there's now demand for supporting use cases that didn't exist until recently, which comes with the territory of building a product for technologists! :)


Thanx. More of a "faster keyboard" so far then?

And yeah - if I had a crystal ball, I would be on my private island instead of hanging on HN :)


Definitely more than a faster keyboard (e.g. I also ask the model to track down the source of a bug, or questions about the state of the code base after others have changed it, bounce architectural ideas off the model, research, etc.) but also definitely not a replacement for thinking or programming expertise.


I don't know if you've tried ChatGPT-5.2, but I find Codex much better for Rust, mostly due to the underlying model. You have to do planning and provide context, but 80%+ of the time it's a one-shot for small-to-medium-size features in an existing codebase that's fairly complex. I honestly have to say that it's a better programmer than I am; it's just not anywhere near as good a software developer for all of the higher- and lower-level concerns that are the other 50% of the job.

If you have any opensource examples of your codebase, prompt, and/or output, I would happily learn from it / give advice. I think we're all still figuring it out.

Also this SIMD translation wasn't just a single function - it was multiple functions across a whole region of the codebase dealing with video and frame capture, so pretty substantial.


"I honestly have to say that it's a better programmer than I am, it's just not anywhere near as good a software developer for all of the higher and lower level concerns that are the other 50% of the job."

That's a good way to say it, I totally identify.


Is that a context issue? I wonder if LSP would help there. Though Claude Code should grep the codebase for all the necessary context, and LSP should in theory only save time, I think there would be a real improvement to outcomes as well.

The bigger a project gets the more context you generally need to understand any particular part. And by default Claude Code doesn't inject context, you need to use 3rd party integrations for that.


This is a terrible practice. If you have a septic system, you’re screwing yourself. If you’re municipal, you’re screwing your city. Compost your compost, don’t dump it down the drain.


Curious where you draw the line. What is even the point of having a garbage disposer if you're not supposed to use it? I'm not disagreeing with your overall point. I'm genuinely curious, since I think every apartment I've ever lived in in the USA had a disposer, so I'd expect them to be used. And to a degree, if the disposer can chop up the stuff small enough to be safe, why not put things in it?

Note that disposers are not common in some countries, so I've lived half my adult life without one. Typically those countries have a basket in the center of the sink to catch stuff, and then you empty that basket into a bag. People also find various sink attachments to hold a larger bag for bigger waste while they cook.


The older I've gotten, the more I've learned to scrape my plate clean into the trash. Then I rinse it, then I put it into the dishwasher. I'll run the garbage disposal as I rinse plates and pots and pans. This way I don't have to clean carrot peels and disgustingness out of a clogged drain catch, and don't need to clean my dishwasher filter frequently.

When you have to do your own maintenance, your habits change. The disposal isn't for making things go away magically, it's to help keep your drain from clogging.


I do my own maintenance, throw everything in the disposal, and have never had an issue. You just need a better disposal.


How is using a disposal screwing the city? Why isn’t there something in my water bill telling me not to? They’ve got lots of little inserts telling me not to do other stuff.

I don’t see how it’s that different than flushing the toilet.

