
> They have food and housing, but their life is devoid of meaning.

I find it difficult to relate to such worlds. I make up all kinds of explanations, like, "well, it must be because while they have food and housing, they don't have any funds to entertain themselves". Or, "well, it must be because they simply haven't had enough education to reach an activation level where the higher tiers of Maslow's hierarchy come into their line of sight".

And then I read about plenty of counterexamples: wealthy offspring living the textbook aimless/dissolute/pick-your-adjective life, or able-bodied welfare recipients in generous Scandinavian welfare regimes, with quite reasonable spending cash, suffering ennui despite the mind-boggling amount of free media, free libraries, free parks, and free entertainment in the developed world. Perhaps this malaise is simply part of the human condition for the people afflicted by it.

And here I sit, drowning in ideas I would be interested to pursue to know our beautiful universe, if only I had the time. So much so that I write them down into a file just to quiet the cacophony in my head, like a dog seeing squirrels everywhere he looks, just so I can get real work done on a timely basis, haha.

When once asked whether I'd ever get bored with eternal youth and boundless resources, I immediately replied that an eternity is still too little time to satisfy my curiosity.


They lack curiosity. It can be nurtured, or starved.

I wonder whether we're trending towards a high-sensor variation of "A Young Lady's Illustrated Primer" / Vannevar Bush's Memex: a system that ingests the details of a user's daily life (smart glasses being primitive first examples of such products) and identifies the salient information in it, so we can mass-customize instructions into direct prescriptives, with backing evidentiary data for SMEs. Instead of "if X, Y, and Z then do A; if only X, do B", the interaction becomes "do this, anticipate that outcome" for the user, and if an SME (a doctor, in your example) asks about it, the system recalls and presents all the factors that went into deciding upon the specific prescriptive.
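
Concretely, I imagine the stored artifact behind each "do this" looking something like this sketch (TypeScript, every name hypothetical):

    // Hypothetical shape of one prescriptive plus its audit trail, so an
    // SME can later ask "why?" and get the evidence back out.
    interface EvidenceItem {
      source: string;     // sensor stream, note, prior reading, etc.
      observedAt: Date;
      weight: number;     // assumed contribution to the decision, 0..1
      summary: string;
    }

    interface Prescriptive {
      instruction: string;        // "do this"
      expectedOutcome: string;    // "anticipate that outcome"
      issuedAt: Date;
      evidence: EvidenceItem[];   // what the recommendation rested on
    }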

Reminds me of a post I read a few days ago of someone crowing about an LLM writing an email format validator for them. They did not have the LLM code up an accompanying send-a-validation-email loop, and the LLM blithely kept them uninformed of the scar tissue the industry has built up, through experience, around what a curiously deep rabbit hole email validation becomes.
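
A minimal sketch of the trap, in TypeScript; the regex, token store, and mailer are illustrative stand-ins, not anyone's recommended implementation:

    import { randomUUID } from "node:crypto";

    // Naive approach: a regex "validator". The RFC 5321/5322 address
    // grammar (quoted local parts, comments, internationalized domains)
    // is far richer than any simple pattern, so this rejects some valid
    // addresses and happily accepts undeliverable ones.
    const looksLikeEmail = (s: string): boolean =>
      /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s);

    // Hypothetical stand-ins for a token store and a mailer.
    declare function saveToken(addr: string, token: string): Promise<void>;
    declare function sendMail(addr: string, body: string): Promise<void>;

    // What industry scar tissue converges on: a light syntactic sanity
    // check, then prove deliverability by mailing a single-use token and
    // waiting for the click-back.
    async function requestEmailVerification(address: string): Promise<void> {
      if (!looksLikeEmail(address)) throw new Error("implausible address");
      const token = randomUUID();
      await saveToken(address, token);
      await sendMail(address, `Confirm: https://example.com/verify?t=${token}`);
    }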

If you've been around the block and are judicious about how you use them, LLMs are a really amazing productivity boost. For those without that judgement and taste, I'm seeing footguns proliferate, and the LLMs are not warning anyone when they step on the pressure plate that's about to blow off their foot. I'm hopeful that this year we will create better context-window-based or recursive guardrails for the coding agents to solve this.


Yeah, I love working with Claude Code, and I agree that the new models are amazing, but I spend a decent amount of time saying "wait, why are we writing that from scratch? Haven't we written a library for that, or don't we have examples of using a third-party library for it?".

There is probably some effective way to put this direction into the claude.md, but so far it still seems to do unnecessary reimplementation quite a lot.
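
For what it's worth, something along these lines in claude.md sometimes helps, though results vary and the wording below is just an illustration, not a known-good incantation:

    ## Reuse before reimplementing
    - Before writing a new utility, search the repo and our internal
      packages for an existing implementation.
    - Prefer the third-party libraries already in the manifest; do not
      add a dependency or hand-roll a replacement without asking.
    - If new code still seems necessary, state what you searched for
      and why the existing options don't fit.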


This is a typical problem you see in autodidacts. They will recreate solutions to solved problems, trip over issues that could have been avoided, and generally do all of the things you would expect of someone working with skill but no experience.

LLMs accelerate this and make it more visible, but they are not the cause. It is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.


I am hopeful autodidacts will leverage an LLM world the way they leveraged the Internet-search world, which in turn superseded the library world and the printed-word world. Each stage in that progression compressed the time it took them to encompass a new body of understanding before applying it to practice, expanded how much they could apply the new understanding to, and deepened their adoption of best practices instead of reinventing the wheel.

In this regard, I see LLMs as a way for us to far more efficiently encode, compress, convey, and enable operational practice of our combined learned experiences. What will be really exciting is watching what happens as LLMs simultaneously draw from and contribute to those learned experiences as we do; we don't need full AGI to realize massive benefits from just rapidly, recursively enabling a new, highly dynamic form of our knowledge sphere that drastically shortens the distance from knowledge to deeply nuanced praxis.


> [The cause] is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.

Isn't that what "using an LLM" is supposed to solve in the first place?


With the right prompt, the LLM will solve it in the first place. But this is an issue of not knowing what you don't know, which makes it difficult to write the right prompt. One way around this is to spawn more agents with specific tasks, or to have an agent that is ONLY focused on finding patterns/code where you're reinventing the wheel.

I often have one agent/prompt where I build things, but then I have another agent/prompt whose only job is to find code smells, bad patterns, and outdated libraries, and to file issues for or fix these problems.
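
As a rough illustration, the reviewer's standing prompt can be as blunt as this (wording entirely illustrative):

    You are a review-only agent; do not build features. Scan recent
    changes and the surrounding modules for reinvented wheels (code
    duplicating an existing internal helper or an already-vendored
    library), code smells, bad patterns, and outdated dependencies.
    For each finding, file an issue citing the offending location and
    the existing alternative. Fix only what is mechanical and safe.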


1. LLMs can't watch over someone and warn them when they are about to make a mistake

2. LLMs are obsequious

3. Even if LLMs have access to a lot of knowledge they are very bad at contextualizing it and applying it practically

I'm sure you can think of many other reasons as well.

People who are driven to learn new things and to do things are going to use whatever is available to them in order to do it. They are going to get into trouble doing that more often than not, but they aren't going to stop. No one is helping the situation by sneering at them -- they are used to it, anyway.


My impression is that LLM users are the kind of people that HATED that their questions on StackOverflow got closed because it was duplicated.

> My impression is that LLM users are the kind of people that HATED that their questions on StackOverflow got closed because it was duplicated.

Lol, who doesn't hate that?


I don't know; in 40 years of coding I never had to ask a question there.

So literally everyone in the world? Yeah, seems right!

I would love to see your closed SO questions.

But don't worry, those days are over; the LLM is never going to push back on your ideas.


lol, I probably don't have any, actually. If I recall, I would just write comments when my question differed slightly from one already there.

But it's definitely the case that being able to go back and forth quickly with an LLM digging into my exact context, rather than dealing with the judgy, humorless attitude that was dominant on SO, is hugely refreshing and way more productive!


> ...multiple engineers argued about the "right" way to build something. I remember thinking that they had biases based on past experiences and assumptions about what mattered.

I usually resolve this by putting on the table the consequences I'm concerned about, their impacts upon my team, and my proposed mitigation for those impacts. The mitigation always involves the other proposer's team picking up the impact remediation. In writing. In the SOPs. Calling out the design decision by date, to jog memories, with the names of those present who wanted the design as the SMEs. Registered with the operations center. With automated monitoring and notification code we're happy to offer.

Once people are asked to put accountable skin in the sustaining-operations game, we find out real fast who is taking into consideration the full spectrum of end-to-end consequences of their decisions. And we find out the real tradeoffs people are making, and the externalities they're hoping to unload or maybe don't even perceive.


That's awesome, but I feel like half the time most people aren't in a position to add requirements, so a lot of shenanigans still happen, especially in big corps.

When someone tells us we cannot change requirements, I am satisfied to get their acknowledgement that what we brought up does extract a specific trade-off, along with their reason for accepting it, and then to record both in the design and operational documentation. The moment people recognize that the trade-off will be explicitly documented, with their team's accountability spelled out in detail, is when you can tell apart the genuine trade-offs, made with the future debt in mind (and, in the meantime, a rationale to grant a ton of leeway to the team burdened with the externality), from the trade-offs made without understanding their externalities upon other teams, which happens a tremendous amount in large organizations.

Most of the time, people are just very reasonably and understandably focused tightly on their lane, and honestly had no idea of the externalities of their conclusions and decisions. In all those cases I'm happy to have experienced a rebalancing of the trade-offs that everyone can accept, and everyone is grateful to have it documented to justify spending the story points on cleaning up later instead of working on new features while the externality debt's unwanted impact keeps piling up.

On fewer than a handful of occasions, I have run into people deliberately, consciously, with malice aforethought and full knowledge of the externalities, making trade-offs for the sake of expediently shifting burdens off themselves, without first consulting the partner teams they want to shift the burdens onto, simply so they can fatten their promo packet sooner at the expense of making other teams look worse. Getting these trade-offs documented makes them back down to something more reasonable about half the time; the other half, they don't back down, but your team is now protected by explicit documentation of, and caveats upon, the externality it now has to carry. And 100% of the time, my team and I put a ring fence around all future interactions with that personality for at least the remaining duration of my gig.


We could have LLMs capable of doing all that for your pastor right now, and it would still take time before these systems can effectively reason through troubleshooting this bespoke software. Right now, the effectiveness of LLM-powered troubleshooting of software platforms relies upon the gravity induced by millions of programmers sharing experiences on more or less the same platforms: gigabytes to terabytes of text training data on all sorts of things that go bonkers on each platform.

We are now undergoing a Cambrian explosion of bespoke software vibe-coded by a non-technical audience, and each program brings with it new sets of failure modes found only in its operational phase. And, compared to the current state, effectively zero training data to guide the troubleshooting response.

Non-linearly increasing the surface area of software to debug, while decreasing the training data applicable to that debugging activity, will hopefully apply creative pressure upon AI research to come up with more powerful ways to debug all this code. As it stands now, I sure hope someone deep into AI research and praxis sees this and follows up with a comment here prescribing the AI-assisted troubleshooting approach I'm missing, the one that goes beyond "a more efficient Google and StackOverflow search".

Also, the current approach is awesome for coming up to speed on new applications of coding and new platforms I'm not familiar with. But for the areas I'm already fluent in, the very areas where my stakeholders especially want to see LLM-based amplification, either I'm doing something wrong or we're just not yet good at troubleshooting legacy code with LLMs. There is some uncanny valley of reasoning I'm unable to bridge so far with the stuff I'm already familiar with.


GitHub back in September already published their roadmap of mitigations to NPM supply chain attacks:

https://github.blog/security/supply-chain-security/our-plan-...

I'm guessing no one yet wants to spend the money it takes for centralized, trusted testing, where the test harnesses employ sandboxing, default-deny installs, Deterministic Simulation Testing (DST), or other techniques. And the sheer scale of NPM package modifications per week makes human-in-the-loop defense daunting, to the point that a small "gold standard" subset of packages with a more reasonable volume of changes might be the only palatable alternative.
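
Individual consumers don't have to wait on the registry for everything, though. For instance, npm already ships a default-deny knob for install scripts, which have been the payload delivery path in several recent attacks. A minimal project .npmrc along these lines (both are real npm config options; the trade-offs noted are my own caveats):

    # Refuse to run packages' install/postinstall lifecycle scripts, the
    # common payload delivery path in recent npm supply chain attacks.
    # Caveat: legitimate native builds then need an explicit manual step.
    ignore-scripts=true

    # Record exact versions instead of semver ranges, so a routine update
    # can't silently pull in a freshly-compromised release.
    save-exact=true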

What are the thoughts of those deep inside the intersection of NPM and cybersecurity?


You would need to hear the thoughts of those deep inside the intersection of money and money.


This feels to me more like the kind of Augmented Reality (AR) that will make it to mass-market adoption than what the market has offered to date. Granted, it's audio-only, but that's where all our wearable tech seems to start (likely because of the energy physics involved in how our tech currently generates artificial perceptual signals).


I'm guessing they aren't widespread yet in doctors' offices due to cost. As near as I can tell, the results are acceptable for clinical settings?


Ever since Framework laptops arrived on the scene, I've been waiting for someone to create thicker bezels that piggyback onto the webcam's USB 2.0 interface via a USB 2.0 hub and integrate an outward-facing color eInk display. Just for the infinite stickers.

Bonus points for integrating a dedicated outward-facing webcam feeding a continuous background facial recognition daemon, to change the stickers on the fly depending upon who is approaching while the laptop is running.


Some organizations' leadership takes one look at the cost of redundancy and backs away. Paying for redundant resources, most organizations can stomach. The network traffic charges are what push many over the edge into "do not buy".

The cost of re-designing and re-implementing applications to synchronize data shipping to remote regions, and to spin up remote-region resources only as needed, is even larger for these organizations.

And this is how we end up with these massive cloud footprints not much different from running fleets of VMs: just about the most expensive way to use the cloud hyperscalers.

Most non-tech-industry organizations cannot face the brutal reality that properly, really leveraging hyperscalers involves a period often counted in decades for Fortune-scale footprints, during which they spend 3-5 times more on selected areas than peers doing those areas the old ways, in order to migrate to mostly spot-instance-resident, scale-to-zero elastic, containerized services with excellent developer and operational troubleshooting ergonomics.

