Hacker News | leonidasv's comments

I use Claude Code daily to work on a large Python codebase and I have yet to see it hallucinate a variable or method (I always ask it to write and run unit tests, so that may be helping). Anyway, I don't think that's a problem at all; most problems I face with AI-generated code are not solved by a borrow checker or a compiler: bad architecture, lack of forward thinking, hallucinations in the contract of external API calls, etc.

There's also https://shademap.app/ for that, also useful (with 3D buildings!). Used it before buying my condo and it was spot-on.

Any way to run that on a Mac (besides running it in the browser)?

  $ ./snake.com
  ./snake.com: line 20: /tmp/a: cannot execute binary file


Unless we figure out how to make 1-billion-plus-token multimodal context windows (in a commercially viable way) and connect them to Google Docs/Slack/Notion/Zoom meetings/etc, I don't think it will simplify that much. Most of the work is adjusting your mental model to the fact that the agent is a stateless machine that starts from scratch every single time and has little-to-no knowledge beyond what's in the code, so you have to be very specific about the context of the task.

It's different from assigning a task to a co-worker who already knows the business rules and the cross-implications of the code in the real world. The agent can't see the broader picture of the stuff it's making: it can go from ignoring edge cases that are obvious (to a human who was present in the last planning meeting) to coding defensively against hundreds of edge cases that will never occur, if you don't add that to your prompt/context material.


What profit? All labs are taking massive losses and there's no clear path to profit for most of them yet.


The wealthiest people in tech aren't spending tens of billions on this without the expectation of future profits. There's risk, but they absolutely expect the bets to be +EV overall.


Expected profit.


You don't need an LLM for that; a simple Markov chain can do it with a much smaller footprint.
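For a concrete sense of how little machinery that takes, here's a minimal word-level Markov chain sketch in Python (the corpus and function names are made up for illustration):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed after it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n=10, seed=0):
    """Walk the chain from `start`, picking a random successor each step."""
    random.seed(seed)
    out = [start]
    for _ in range(n - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The whole "model" is one dictionary of observed bigrams; no GPU, no weights, kilobytes of memory.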


> While some are still discussing why computers will never be able to pass the Turing test

Are people still debating that? I thought it was settled by the time GPT-4 came out.


There is no *the* Turing test. I think ELIZA was the first program to pass a Turing test, around 60 years ago.


I have an idea for a reverse Turing test where humans have to convince an LLM that they are an LLM. I suspect that most people would fail, proving that humans lack intelligence.


You're absolutely right.


You are right to push back on that.


There's a trick, shared here a few days ago, to add a kind of shebang to Go that may interest you: https://lorentz.app/blog-item.html?id=go-shebang

Discussion: https://news.ycombinator.com/item?id=46431028


I used to watch archived episodes of Computer Chronicles on YouTube almost every night before going to bed back in 2016~2018. It was my bedtime entertainment, watching those recordings from another era of computing and observing the hosts' enthusiasm for things we take for granted today. As a late millennial, it helped me experience a bit of what the 80s and 90s were like in computing.

RIP Stewart.


As an early millennial, it helped me remember a lot of my childhood in computing and robotics.


It costs up to ~$9m a year to process 63 billion transactions per year: https://oglobo.globo.com/economia/noticia/2024/10/03/bc-gast...

Banks pay a small fee (less than a cent) per batch of transactions.
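A quick back-of-the-envelope check, taking the ~$9m/year and 63 billion transactions figures above at face value:

```python
# Implied infrastructure cost per transaction, from the cited figures.
annual_cost_usd = 9_000_000            # ~$9m/year (upper bound cited)
transactions_per_year = 63_000_000_000  # 63 billion
usd_per_tx = annual_cost_usd / transactions_per_year
print(f"~{usd_per_tx * 100:.4f} cents per transaction")  # ~0.0143 cents
```

So the per-transaction cost is on the order of a hundredth of a cent, consistent with the fee banks pay being far below one cent.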

