gghffguhvc's comments | Hacker News

Wild idea: this could be a symbolic dead man's switch.

There were reports of the FBI going hard after archive.today around the time the HN account was set up, and then they post an archive.today competitor. Pings on the investigative article, then a post to HN saying “3 days ago”, which could indicate when the FBI succeeded.

The only comment by the poster on this article is a sharp clarification of what doxxing is and isn’t.

Perhaps this is just an unusual way of slowly stepping out from behind the curtain on your own quirky terms after a fantastically long tenure.


My company takes the time between Christmas and New Year's off. I took the week before that off too. I have not used AI in that time. The slower pace of life is amazing. But when I get back to coding it will be back to running at 180%. It's the new norm. However, I've decided to take longer “no computer” breaks in my day. I have to adapt, but I need to defend my “take it slow” times and find some analogue hobbies. The shift is real and you can't wind it back.


I’ve been taking my son for stroller walks more often over Christmas. I bring a headset for listening to music, podcasts, audiobooks, tech talks. “Be effective.” But I end up just walking and thinking, realising this is “free time”.

It sounds ridiculous and easy to say, but spending time walking and thinking will improve your decisions and priorities in ways that no productivity hack will.

I only actually slowed down for a while because I had to, for the well-being of my family. It sure feels important not to always be on top of everyone else's business.


I have healthcare apps. The review process for me consists of some reviewer deciding what set of healthcare features I should have picked from their list and rejecting on that basis. But subsequent reviewers have different opinions. In one app version release I got rejected 5 times for picking the “wrong” set of healthcare features, because either the reviewer changed their mind or I got a different reviewer. The app has been on Google Play for 13 years.


The pilots might have reassessed after Pakistan seemed to have shot three of them down from over 200 km away. An intel failure was blamed, but there were likely many factors, some of which may be attributable to the planes.


Pakistan has never downed an F-35.


They were talking about the Rafales. But I think the comment is irrelevant anyway, as the scandal happened before that, iirc.


I worded it poorly. I meant the Rafales allegedly shot down. After that happened, the pilots who wanted them over F-35s might have a different opinion. F-35s might be harder to get a lock on at that distance and might have better situational-awareness capabilities.


For the same quality and quantity of output, if the cost of using LLMs plus the cost of careful oversight is less than the cost of not using them, then the rational choice is to use them.

Naturally this doesn’t factor in things like human obsolescence, motivation and self-worth.
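As a toy illustration of that break-even rule (every number here is invented purely for the example):

```python
# Hypothetical monthly costs for producing the same output with either
# workflow; all figures are made up to illustrate the comparison.
llm_cost = 200          # tooling/subscription cost
oversight_cost = 800    # extra careful-review effort
unassisted_cost = 1500  # cost of the all-human workflow

# Rational choice per the rule: use LLMs iff the assisted total is cheaper.
use_llms = (llm_cost + oversight_cost) < unassisted_cost
print(use_llms)  # True for these made-up numbers
```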


It seems like this would be a really interesting field to research. Does AI assisted coding result in fewer bugs, or more bugs, vs an unassisted human?

I've been thinking about this as I do AoC with Copilot enabled. It's been nice for those "hmm, how do I do that in $LANGUAGE again?" moments, but it has also written some nice-looking snippets that don't do quite what I want. And there have been many cases of "hmmm... that would work, but it reads the entire file twice for no reason".
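A minimal sketch of that "reads the file twice" smell; the function names and file contents are invented, not from any actual Copilot output:

```python
# Code smell: two full passes over the same file.
def count_lines_and_chars_twice(path):
    with open(path) as f:
        lines = len(f.readlines())  # first full read
    with open(path) as f:
        chars = len(f.read())       # second full read, for no reason
    return lines, chars

# Equivalent single-pass version.
def count_lines_and_chars_once(path):
    lines = chars = 0
    with open(path) as f:
        for line in f:
            lines += 1
            chars += len(line)
    return lines, chars
```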

My guess, however, is that it's a net gain for quality and productivity. Humans write bugs too, and there need to be processes in place to discover and remediate them regardless.


I'm not sure about research, but I've used LLMs for a few things here at Oxide with (what I hope is) appropriate judgment.

I'm currently trying out using Opus 4.5 to take care of a gnarly code reorganization that would take a human most of a week to do -- I spent a day writing a spec (by hand, with some editing advice from Claude Code), having it reviewed as a document for humans by humans, and feeding it into Opus 4.5 on some test cases. It seems to work well. The spec is, of course, in the form of an RFD, which I hope to make public soon.

I like to think of the spec as basically an extremely advanced sed script described in ~1000 English words.


Maybe it's not as necessary with a codebase as well-organized as Oxide's, but I recently found Gemini 3 useful for a refactor of some completely test-free ML research code. I got it to generate a test case that would exercise all the code subject to refactoring, had it do the refactoring and verify that it produced exactly the same state, then finally had it randomize the test inputs and keep repeating the comparison.
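A rough sketch of that lock-in-behavior-then-refactor loop, using only the standard library; `normalize_v1` and `normalize_v2` are hypothetical stand-ins for the pre- and post-refactor code:

```python
import random

def normalize_v1(xs):
    # Stand-in for the original, test-free implementation.
    total = sum(xs)
    return [x / total for x in xs]

def normalize_v2(xs):
    # Stand-in for the refactored version, which must match v1 exactly.
    total = 0.0
    for x in xs:
        total += x
    return [x / total for x in xs]

# Lock in behavior: compare both versions on many randomized inputs.
random.seed(0)
for _ in range(100):
    xs = [random.uniform(0.1, 10.0) for _ in range(random.randint(1, 20))]
    assert normalize_v1(xs) == normalize_v2(xs)
print("refactor preserved behavior on 100 randomized inputs")
```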


These companies have trillions and they are not doing that research. Why?


I don't know. I guess the flip side applies too? Lots of people arguing either side, when it feels like it shouldn't be that difficult to provide some objective data.


And it doesn't factor in seniority/experience. What's good for a senior developer is not necessarily the same for a beginner.


I just whack-a-mole these things in AGENTS.md for a while until it codes more like me.


Coding LLMs were almost useless for me until my AGENTS.md crossed some threshold of completeness; now they are mostly useful. I now curate multiple different markdown files in a /docs folder that I add to the context as needed. Any time the LLM trips on something and we figure it out, I ask it to document its learnings in a markdown doc, and voilà, it can do it correctly from then on.
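The entries that accumulate this way tend to look something like the following hypothetical AGENTS.md fragment (every project detail here is invented for illustration):

```markdown
<!-- Hypothetical AGENTS.md fragment; project details are invented. -->
## Learnings (appended whenever the agent trips on something)

- Use `pnpm`, not `npm`; the lockfile is in pnpm format.
- All API timestamps are UTC ISO 8601; never format with local time.
- Run `make lint` before proposing a diff; CI rejects unformatted code.
- Migrations live in `db/migrations/`; never edit an applied migration.
```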


When I lived in SF I walked past this street art a couple of times a week and got a smile.

https://www.sfstairways.com/stairways/eugenia-avenue-prospec...


Ha, that’s great!


As a co-founder and dev at a bootstrapped company I’d say AI has and will slow developer hiring rate. We’re just more productive and on top of things more.

We’ve also reduced the hours we work per week. We care about getting things done not time behind a screen.


Sure, AI can build cute POCs. Will it build scaled solutions? Not this year. The amount of ignorance in this post is precisely why the industry is so rattled. Gen AI tools are great, but they are not making people orders of magnitude more productive.


> Will it build scaled solutions? Not this year.

That is not true IMHO.

If one is expecting Lovable to create a production app from just a few prompts, that obviously is not going to happen, not now and most probably not for a long time.

However, if you use Claude Code or one of the proper IDEs, you can definitely guide it step by step and build production-quality code; in fact, code that may even be better than what most software engineers out there write.

Moreover, these tools allow you to take your proficiency in software dev and specific languages/frameworks to other languages/frameworks without being an expert in them, and that I think is a huge win in itself.


I work in big tech as a senior engineer. I’m aware of what’s out there and none of it is solving problems in a way that’s replacing swathes of engineers anytime soon.

It may be an excuse for layoffs, but it's not ramping up velocity in the ways that PR makes it seem to the non-tech-literate.


I never said anything about swathes of engineers, merely that it is possible to build production quality stuff.

From my experience, these are better suited at the moment for small teams and new projects. It’s unclear to me how they’ll work in large team/massive legacy code situations. Teams will have to experiment and come up with processes that work. IMHO anyway.


We've been in business 15 years. These aren't POCs. Even at, say, a 20% productivity boost, I feel way ahead giving devs 9-day fortnights and, soon hopefully, 4-day weeks.


How is it both a bootstrapped company slow to hire devs (due to AI) and also a company that's been in business for 15 years? If you were going to hire devs to scale out, you would've done it 10 years ago.


It slowed the rate of hiring devs.

Normally, as we add enterprise customers, we have to dedicate more dev resources to keeping them happy. But since Claude Code, and now Codex, we have not felt that sense of not being on top of the work, and thus no need to hire more devs.


The number of people we are talking about, as a percentage of CS-like graduates, is tiny. They aren't kids either. It seems like a low-risk experiment on both sides.


Paying tax isn't a big-company vs small-company thing. A single company not paying tax isn't that bad directly, but it can be infectious: a “they don't pay, so neither should I” attitude is a problem.


I know, but then you look at effective tax paid by big companies and it makes you wonder.

