They’re predatory scavengers that wouldn’t hesitate to eat you if they had the opportunity. I would much rather conduct biomedical research on sharks than mice or rats.
Ah yes potentially getting us one step closer to immortality, hardly worth killing an animal!
I mostly eat vegan because I do have a strong dislike of factory farming and the way animals are treated there. But killing animals is a fact of life and I think scientific progress is a very valid reason to do so.
To put it in perspective, many shark young will kill each other in the womb such that only the strongest is birthed. These animals eat other animals alive, and so on. My point being it is not like the choice is between a rosy utopia and human-inflicted suffering.
I'm not against scientific research per se or living a bit more but... is immortality (or living for, say, 200 years or more) really something we should strive for?
Many aspects of human society assume, one way or another, that our life expectancy is fairly limited. From politics (even absolute monarchs or dictators eventually die), to economics (think about retirement, for example), demographics (if everyone is immortal and everyone keeps having children, what happens?), even psychology ("everything passes").
Are we willing to throw these implications away? What would be the purpose?
> Many aspects of human society assume, one way or another, that our life expectancy is fairly limited
Assumptions can change. Each of our technological shifts was more upending than longer healthspans would be—most of the West is already a gerontocracy.
That’s throwing the baby out with the bathwater; there are hundreds of ways to die that aren’t horrible. And for an "immortal" (as in "not-aging"), there are still ways to die horribly.
Life is more beautiful when you live it for its experiences, not for the fear of losing it.
> throwing the baby out with the bathwater; there are hundreds of ways to die that aren’t horrible
The baby in your analogy being aging?
> there’s still ways to die horribly
Sure. The purpose would be to remove a common cause of dying horribly.
(And in no world with longevity treatments would it be mandatory. People and populations who like aging and Alzheimer’s can keep partying like it’s 2025.)
> Assumptions can change. Each of our technological shifts was more upending than longer healthspans would be—most of the West is already a gerontocracy.
Sure but is gerontocracy a good thing, then? I’m not against older people, but shifting the whole demographic towards them is not looking good for retirement, social constructs, and more. Immortality would bring this even further, especially when meant literally.
> > What would be the purpose?
To not die horribly.
Well ok, but even if you can’t die horribly (ignoring murders, etc.) you can still suffer horribly, physically or otherwise, for a variety of reasons. Starvation, rape, physical and psychological abuse, painful but non-lethal diseases... all still exist regardless of immortality. It’s not like immortal people are necessarily happy or good.
> shifting the whole demographic towards them is not looking good for retirement, social constructs, and more
I'm genuinely not seeing the problem. Longer lives means more productive lives. (A massive fraction of healthcare costs are related to obesity and aging. A minority of medicine is in trauma.)
> Immortality would bring this even further, especially when meant literally
We don't have a path to entropy-defying immortality. Not aging doesn't mean literal immortality.
> you can still suffer horribly, physically or otherwise, for a variety of reasons
The fact that you're levying this argument should seal the case. It's an argument that can be made against anything good.
Yes, of course it can be made against anything good, but what I mean is: is death truly the worst thing? Isn’t it better to focus on other ways to reduce suffering? Unexpected death is of course tragic, but everything eventually stops. I understand looking into ways to treat diseases, reduce other unpleasant events, and possibly reduce pain (physical or otherwise), but immortality to me looks like something you (a generic you) pursue just for the sake of it. Also because, when you think about it, you only die once, but you experience suffering in a variety of ways. In addition, death is a way to “enforce” change. Sometimes it’s bad, other times it’s good.
> Longer lives means more productive lives.
When you work until you’re, say, 80, what happens? You have less time to enjoy some rest, you still do your work (which means, if everything else stays equal, that there is less room for people taking your job and gaining experience because you are as productive as always).
> Isn’t it better to focus on other ways to reduce suffering?
Why?
> ways to treat diseases
Aging underlies tons of diseases. (It’s similar to obesity in that way.)
> death is a way to “enforce” change. Sometimes it’s bad, other times it’s good
This is true of everything bad. You could use this logic for ceasing research into curing cancer, trauma medicine or seatbelts and traffic lights.
> of course it can be made against anything good
Which makes it a pointless argument. (And implicit concession that you’re arguing against something good.)
> When you work until you’re, say, 80, what happens? You have less time to enjoy some rest
…why? You have more time.
In a world without aging, retirement at 80 would be an objectively better deal than retiring at 60 today. You’d be retiring with a body that hasn’t started failing. And you’d have more years, on average, ahead of you.
> there is less room for people taking your job and gaining experience because you are as productive as always
Lump of labour fallacy. (Average adult lifespans have gone up over the last two centuries. That has accompanied more, not less, labour-market dynamism.)
Must be nice to have unquota’ed tokens to use with frontier AI (is this the case for Anthropic employees?). One thing I think is fascinating as we enter the Intellicene is the disproportionate access to AI. The ability to petition them to do what you want is currently based on monthly subscriptions, but will it change in the future? Who knows?
It would be funny if the company paying software engineers $500K or more along with gold-plated stock options was limiting how much they could use the software their company was developing.
Why is that funny? What company gives you unlimited resources? That doesn’t scale. Google employees can’t just demand a $10,000 workstation. It’s reasonable to assume they have some guardrails, for both financial and stability reasons. Who knows… if it’s unlimited now, will it stay that way forever? Probably unlimited in the same sense as unlimited PTO.
> Why is that funny? What company gives you unlimited resources?
Anthropic has raised tens of billions of dollars of funding.
Their number of employees is in the thousands. This isn't like Google.
Claude Code is what they're developing. The company is obviously going to encourage them to use it as much as possible.
Limiting how much the Claude Code lead can use Claude Code would be funny because their lead dev would have to stop mid-day and wait for his token quota window to reset before he can continue developing their flagship coding product. Not going to happen.
I'm strangely fascinated by the reaction in the comments, though. A lot of people here must have worked in oddly restrictive corporate environments to even think that a company like this would limit how much their own employees can use the company's own product to develop their own product.
I can't get a $10k workstation but if I used $10k/month on cloud compute it'd take a few months for anyone to talk to me about it and as long as I was actually using it for work purposes I wouldn't run into any consequences more severe than being told to knock it off if I couldn't convince people it was worth the cost.
If an employee has a business need for a $10k workstation, I'm fairly certain they'll get a $10k workstation.
Yes, accounting still happens. Guardrails exist. But quibbling over 2% of a SWE's salary when it's clear that the productivity increase will be significantly more than 2% would be... not a wise use of anybody's time.
Google gives most of their engineers access to machines that would cost that much. If you’re working on specific projects (e.g. Chrome) you can request even more expensive machines.
If it takes a lot of back and forth between lots of people, it is more like a $12,000 workstation or more after the labor of requesting and approving.
When you work for the company supplying those tokens and you're working on the product that sells those tokens at scale, the company will let you use as many tokens as you want.
Pretty sure I have seen them imply in one of the panel discussions on their YouTube channel (can't remember which) that they get unlimited use of the best models. I remember them talking about records for the most spent in a day or something.
Pretty sure that was scientists competing for 6 month training runs of new 100B+ parameter models, not coders burning through a couple of million tokens.
It is the case that Anthropic employees have no usage limits.
Some people do experiments where they spawn up hundreds of Claude instances just to see if any of them succeed.
I’m not sure this is correct. The DOM class HTMLDetailsElement has the `open` property, which you can use to read/write the details element’s state. If you’re using setAttribute/getAttribute just switch to the property.
Having to use the property on the element instance, rather than the actual HTML attribute, is exactly the kind of wrapper code I want to avoid if I'm using a built-in.
You need some JS to change an attribute as much as you need JS to change a property. What am I missing?
I hope the command attribute (https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...) will eventually support this out of the box. In the meantime you can write a single custom --toggle command which is generic and works with any toggleable element.
`open` works just like checked for a checkbox input. You can set the initial state in HTML and CSS can respond to changes in the attribute if a user clicks it. Markup won't have programmatic control over the attribute by design, that's always done in JS by modifying the element instance.
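A minimal sketch of the property-based toggle, for browser JS (the element id here is hypothetical):

```javascript
// <details id="faq"> ... </details>
const details = document.querySelector("#faq");

// Flip the live state via the property; the browser reflects it back
// to the `open` attribute, so CSS like details[open] still matches.
details.open = !details.open;

// The `toggle` event fires for both script- and user-driven changes:
details.addEventListener("toggle", () => {
  console.log(details.open ? "opened" : "closed");
});
```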
I have a similar workflow except I haven’t put time into the tooling - Claude is adept at tmux and it can almost even prompt and respond to ChatGPT, except it always forgets to press Enter when it sends keys. Have your agents been able to communicate with each other with tmux send-keys?
I had the same issue. Subagents are nice but the LLM calling them can’t have a back and forth conversation. I tried tmux-cli and even other options like AgentAPI[0] but the same issue persists, the agent can’t have a back and forth with the tmux pane.
To people asking why would you want Claude to call Codex or Gemini, it’s because of orchestration. We have an architect skill we feed the first agent. That agent can call subagents or even use tmux and feed in the builder skill. The architect is harnessed to a CRUD application just keeping track of what features were built already so the builder is focused on building only.
I find that asking Claude to develop and Codex to review the uncommitted changes will typically result in high-value code, and eliminate all of Claude’s propensity to perpetually lie and cheat. Sometimes I also ideate with Claude and then ask Claude to get ChatGPT’s opinion on the matter. I started by copy-pasting responses but I found tmux to be a nice way to get rid of the middleman.
What does tmux add here? Or how does it allow you to do that? I’m sorry I’m just missing it I’m sure. I don’t use tmux a lot so I don’t know all its potential.
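For what it's worth, the plumbing is just a couple of commands; here's a sketch (the pane name `codex` is hypothetical). The failure mode mentioned upthread is sending Enter as literal text instead of as a key name:

```shell
# Type a prompt into the pane running the other agent. "Enter" must be
# a separate argument so tmux presses the key rather than typing the word.
tmux send-keys -t codex "Review the uncommitted diff in src/" Enter

# Later, scrape the last 200 lines of that pane's scrollback:
tmux capture-pane -t codex -p -S -200
```

tmux essentially gives each agent a scriptable terminal: one agent can type into another's session and read its output back, which is what removes the copy-paste middleman.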
I’ve been heads-down on publishing a JavaScript full-stack metaframework before the end of the year. However, in the past two weeks I’ve been goaded by Claude Code to extract and publish a separate database client because my vision includes Django-style admin/forms. The idea is to use Zod to define tables, and then use raw SQL fragments with JavaScript template tags. The library adds a tiny bit of magic for the annoying parts of SQL like normalizing join objects, taking care of SELECT clauses and validating writes.
I’m only using it internally right now, but I think this approach is promising. Zod is fantastic for this use-case, and I’m sad I’ve only just discovered it.
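Not the author's actual library, but a minimal sketch of the tagged-template idea: the `sql` tag turns interpolated values into bind parameters instead of splicing them into the query string.

```javascript
// A toy `sql` tagged template: each interpolation becomes a numbered
// placeholder, and the raw values are collected for the driver to bind.
function sql(strings, ...values) {
  const text = strings.reduce(
    (acc, part, i) => acc + (i ? `$${i}` : "") + part, "");
  return { text, values };
}

const userId = 42;
const q = sql`SELECT id, name FROM users WHERE id = ${userId}`;
// q.text:   "SELECT id, name FROM users WHERE id = $1"
// q.values: [42]
```

A Zod table schema would then sit on top of this, validating `values` before a write ever reaches the database.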
My guess is this is a subset of the halting problem (does this program accept data with non-halting decompression), and is therefore beautifully undecidable. You are free to leave zip/tgz/whatever fork bombs as little mines for live-off-the-land advanced persistent threats in your filesystems.
Almost no scifi has predicted world changing "qualitative" changes.
As an example, portable phones have been predicted. Portable smartphones that are more like chat and payment terminals with a voice function no one uses any more ... not so much.
The Machine Stops (https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...), a 1909 short story, predicted Zoom fatigue, notification fatigue, the isolating effect of widespread digital communication, atrophying of real-world skills as people become dependent on technology, blind acceptance of whatever the computer says, online lectures and remote learning, useless automated customer support systems, and overconsumption of digital media in place of more difficult but more fulfilling real life experiences.
It's the most prescient thing I've ever read, and it's pretty short and a genuinely good story, I recommend everyone read it.
Edit: Just skimmed it again and realized there's an LLM-like prediction as well. Access to the Earth's surface is banned and some people complain, until "even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject."
There is even more to it than that. Also remember this is 1909. I think this classifies as a deeply mysterious story. It's almost inconceivable for that time period.
- People are depicted as grey aliens (no teeth, large eyes, no hair). The lesson: the Greys are a future version of us.
- The air is poisoned and the cities are in ruins; people live in underground bunkers. In 1909, nuclear war was unimaginable. This was still the age of steamships and coal-powered trains. Even respirators would have been low in the public imagination.
- The airships with metal blinds sound more like UFOs than blimps.
- The white worms.
- People are the blood cells of the Machine, which runs on their thoughts: essentially the social-media data harvesting that feeds AI.
- China invaded Australia. The story came out eight years or so after the Boxer Rebellion, so in the context of its time that would have sounded like, say, Iraq invading the USA.
- The story suggests this is a cyclical process of a bifurcated human race.
- The blimp crashing into the steel evokes 9/11, 91+1 years later...
Zamyatin’s We was prescient politically, socially and technologically - but didn’t fall into the trap of everyone being machine men with antennae.
It’s interesting - Forster wrote like the Huxley of his day, Zamyatin like the Orwell - but both felt they were carrying Wells’ baton - and they were, just from differing perspectives.
In other words, sometimes, things happen in reality that, if you were to read it in a fictional story or see in a movie, you would think they were major plot holes.
Kindles are just books and books are already mostly fairly compact and inexpensive long-form entertainment and information.
They're convenient but if they went away tomorrow, my life wouldn't really change in any material way. That's not really the case with smartphones much less the internet more broadly.
Funny, I had "The collected stories of Frank Herbert" as my next read on my tablet. Here's a juicy quote from like the third screen of the first story:
"The bedside newstape offered a long selection of stories [...]. He punched code letters for eight items, flipped the machine to audio and listened to the news while dressing."
Anything qualitative there? Or all of it quantitative?
Story is "Operation Syndrome", first published in 1954.
Hah, can't resist posting even if this story is old and dead by now.
Went further in Herbert's shorts volume and I just ran into a scene where people are preparing to leave Earth on a colony ship to seed some distant world...
... and they still have human operator assisted phone calls.
To the proud contrarian, "the empire did nothing wrong". Maybe sci-fi has actually played a role in the "mimetic desire" of some of the titans of tech who are trying to bring about these worlds more-or-less intentionally. I guess it's not as much of a dystopia if you're on top, and it's not evil if you think of it as inevitable anyway.
I don't know. Walking on everybody's faces to climb a human pyramid, one doesn't make many sincere friends. And one is certainly, rightfully, going down a spiral of paranoia. There are so many people already on the fast track to hating anyone else; if there's social consensus that someone is a freaking bastard who only deserves to die, that's a lot of stress to cope with.
The future is inevitable, but only those ignorant of our self-predictive ability think that what's going to populate that future is inevitable.
Still can't believe people buy their stock, given that they are the closest thing to a James Bond villain, just because it goes up.
I've been tempted to. "Everything will be terrible if these guys succeed, but at least I'll be rich. If they fail I'll lose money, but since that's the outcome I prefer anyway, the loss won't bother me."
Trouble is, that ship has arguably already sailed. No matter how rapidly things go to hell, it will take many years before PLTR is profitable enough to justify its half-trillion dollar market cap.
It goes a bit deeper than that since they got funding in the wake of 9/11 and the requests for intelligence and investigative branches of government to do better and coalescing their information to prevent attacks.
So "panopticon that if it had been used properly, would have prevented the destruction of two towers" while ignoring the obvious "are we the baddies?"
To be honest, while I'd heard of it over a decade ago and I've read LOTR and I've been paying attention to privacy longer than most, I didn't ever really look into what it did until I started hearing more about it in the past year or two.
But yeah lots of people don't really buy into the idea of their small contribution to a large problem being a problem.
>But yeah lots of people don't really buy into the idea of their small contribution to a large problem being a problem.
As an abstract idea I think there is a reasonable argument to be made that the size of any contribution to a problem should be measured as a relative proportion of total influence.
The carbon footprint is a good example, if each individual focuses on reducing their small individual contribution then they could neglect systemic changes that would reduce everyone's contribution to a greater extent.
Any scientist working on a method to remove a problem shouldn't abstain from contributing to the problem while they work.
Or to put it as a catchy phrase. Someone working on a cleaner light source shouldn't have to work in the dark.
>As an abstract idea I think there is a reasonable argument to be made that the size of any contribution to a problem should be measured as a relative proportion of total influence.
Right, I think you have responsibility for your 1/<global population>th (arguably considerably more though, for first-worlders) of the problem. What I see is something like refusal to consider swapping out a two-stroke-engine-powered tungsten lightbulb with an LED of equivalent brightness, CRI, and color temperature, because it won't unilaterally solve the problem.
Stock buying as a political or ethical statement is not much of a thing. For one, the stocks will still be bought by people with less strong opinions, and secondly it does not lend itself well to virtue signaling.
Well, two things lead to unsophisticated risk-taking, right... economic malaise, and unlimited surplus. Both conditions are easy to spot in today's world.
Saw a joke about Grok being a stand-in for Elon's children and had the realization he's the kind of father who would lobotomize and brainwipe his progeny for back-talk. Good thing he can only do that to their virtual stand-in and not some biological clones!
This is a strange comment. It doesn't even count as unfalsifiable, just unsupported.
Elon Musk has actual children (lots, in fact). If we want to know what he "would" do, we can just look. We don't have to use our imaginations (or entertain the fanciful claims of prognosticators and soothsayers).
Zero percent chance this is anything other than laughably bad. The fact that they're trotting it out in front of the press like a double spaced book report only reinforces this theory. It's a transparent attempt by someone at the CIA to be able to say they're using AI in a meeting with their bosses.
Unless the world leaders they're simulating are laughably bad and tend to repeat themselves and hallucinate, like Trump. Who knows, maybe a chatbot trained on all the classified documents he stole and all his Twitter and Truth Social posts wrote his tweet about Rob Reiner, and he's actually sleeping at 3:00 AM instead of sitting on the toilet tweeting in upper case.
Let me take the opposing position about a program to wire LLMs into their already-advanced sensory database.
I assume the CIA is lying about simulating world leaders. These are narcissistic personalities and it’s jarring to hear that they can be replaced, either by a body double or an indistinguishable chatbot. Also, it’s still cheaper to have humans do this.
More likely, the CIA is modeling its own experts. Not as useful a press release and not as impressive to the fractious executive branch. But consider having downtime as a CIA expert on submarine cables. You might be predicting what kind of available data is capable of predicting the cause and/or effect of cuts. Ten years ago, an ensemble of such models was state of the art, but its sensory libraries were based on maybe traceroute and marine shipping. With an LLM, you can generate a whole lot of training data that an expert can refine during his/her downtime. Maybe there’s a potent new data source that an expensive operation could unlock. That ensemble of ML models from ten years ago can still be refined.
And then there’s modeling things that don’t exist. Maybe it’s important to optimize a statement for its disinfo potency. Try it harmlessly on LLMs fed event data. What happens if some oligarch retires unexpectedly? Who rises? That kind of stuff.
To your last point, with this executive branch, I expect their very first question to CIA wasn’t about aliens or which nations have a copy of a particular tape of Trump, but can you make us money. So the approaches above all have some way of producing business intelligence. Whereas a Kim Jong Un bobblehead does not.
As an ego thing, obviously, but if we think about it a bit more, it makes sense for busy people. If you're the point person for a project, and it's a large project, people don't read documentation. The number of "quick questions" you get will soon overwhelm a person to the point that they simply have to start ignoring people. If a bot version of you could answer all those questions (without hallucinating), that person would get back a ton of time to, y'know, run the project.
You know when Claude Code for Terminal starts scroll-looping and doom-scrolling through the entire conversation in an uninterruptible fashion? Just try reading as much as of it as you can. It strengthens your ability to read code in an instant and keeps you alert. And if people watch you pretend to understand your screen, it makes you look like a mentat.
I wouldn’t recommend that (after having tried it for a while). It is very easy to get the impression that the changes are sound, because on their own they look good. But often when looking at the final result in a git diff tool, the holes and workarounds become way more noticeable.
The claude-code rendering pipeline often goes awry and starts scrolling back and forward very fast, causing insane flickering. They have some explanation in https://github.com/anthropics/claude-code/issues/769#issueco... describing that they've made that tradeoff in the name of a native terminal experience (instead of learning per-program shortcuts/etc.).
Funnily enough, before reading their reasoning, I thought that the whole rendering pipeline was badly vibe-coded.
The logger library which Claude created is actually pretty simple, highly approachable code, with utilities for logging the timings of async code and the ability to emit automatic performance warnings.
I have been using LogTape (https://logtape.org) for JavaScript logging, and the inherited, category-focused logging with different sinks has been pretty great.
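From memory of LogTape's docs (verify against logtape.org; the category name here is made up), the category/sink inheritance looks roughly like this:

```javascript
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";

// Sinks are wired up per category; child categories inherit them.
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [{ category: "my-app", sinks: ["console"], lowestLevel: "debug" }],
});

// ["my-app", "db"] is a child of "my-app", so it inherits the console sink.
const logger = getLogger(["my-app", "db"]);
logger.debug("query took {ms}ms", { ms: 42 });
```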
This is interesting. The argument which I’m gleaning from the essay is that the old proposed API of having an intermediary new Sanitizer() class with a sanitize(input) method which returns a string is actually insecure because of mutated XSS (MXSS) bugs.
The theory is that the parse->serialize->parse round-trip is not idempotent and that sanitization is element context-dependent, so having a pure string->string function opens a new class of vulnerabilities. Having a stateful setHTML() function defined on elements means the HTML context-specific rules for tables, SVG, MathML etc. are baked in, and eliminates double-parsing errors.
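Roughly, the difference looks like this in browser JS (the element id is hypothetical, and `untrustedInput` stands for attacker-controlled HTML; the string-returning Sanitizer is the withdrawn design the essay describes):

```javascript
const el = document.getElementById("comment");

// Shipped design: parsing and sanitization happen in el's own context,
// in one step, so there is no second parse to mutate the markup.
el.setHTML(untrustedInput);

// Withdrawn design: parse -> serialize to string -> parse again on
// assignment; the second parse is where mXSS payloads come alive.
// el.innerHTML = new Sanitizer().sanitize(untrustedInput);
```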