Maxious's comments | Hacker News

There are screenshots here; they're visually separated from the actual response: https://x.com/connorado/status/2009707660988559827

An ad-based model, although it sucks, still feels like a more decent income model than companies providing inference at a loss. Interesting.

I hate ad models, but I'm pretty sure most code gets trained into AI anyway, and the code we generate would probably not be a valuable signal (usually) to the ad company.

Interesting, what are your thoughts on it? Thanks for sharing this. Is the project profitable? I assume not, though I'm not sure how much advertising revenue there would be.


It only took using Claude Code or other emoji-heavy apps to reproduce it, and the memory grows linearly over time: https://github.com/ghostty-org/ghostty/discussions/9786

"only"... I don't think that means what you think it means in this context.

And the same diagnosis as in the blog post was reported by a user in Discussions a month ago, but was ignored: https://github.com/ghostty-org/ghostty/discussions/9786#disc...

That doesn't sound like the actual issue, or am I not understanding it correctly?

I think you’re correct. The reproduction isn’t very precise and the solution doesn’t seem right (I’m not seeing anything about the non-standard pages not being freed). I’d guess this was ignored because it was wrong…

infrastructure costs are already covered

> Vercel sponsors all of our hosting for all of our sites (which is expensive with our traffic!) for free and has for years

https://x.com/adamwathan/status/2009298745398018468


For example, memory leak investigation is currently spread across discussions, x/twitter and discord https://x.com/mitchellh/status/2004938171038277708 https://x.com/alxfazio/status/2004841392645050601 https://github.com/ghostty-org/ghostty/discussions/10114 https://github.com/ghostty-org/ghostty/discussions/9962

but it has not graduated to issue-worthy status


That's a shame to hear. I had to give up on Ghostty because of its memory leak issue. Granted, it was on an 8GB system, but that should be enough to run a terminal without memory exhaustion a few times a week. Foot has been rock solid, even though it lacks some of Ghostty's niceties.

Note that this is an active discussion where we're trying to get to a point of clarity where we can promote to an issue (when it is actionable). The discussion is open and this is the system working as intended!

I want to clarify, though, that there isn't a known widespread "memory leak issue." You didn't say "widespread", but I mention it just in case anyone else reads it that way. :) There are a few challenges here:

1. The report at hand seems to affect a very limited number of users (given the lack of reports and information about them). There are lots of X meme posts about Ghostty in the macOS "Force Close" window using a massive amount of RAM but that isn't directly useful because that window also reports all the RAM _child processes_ are using (e.g. if you run a command in your shell that consumes 100 GB of RAM, macOS reports it as Ghostty using 100 GB of RAM). And the window by itself also doesn't tell us what you were doing in Ghostty. It farms good engagement, though.

2. We've run Ghostty on Linux under Valgrind in a variety of configurations (the full GUI), we run all of Ghostty's unit tests under Valgrind in CI for every commit, and we've run Ghostty on macOS with the Xcode Instruments leak checker in a variety of configurations and we haven't yet been able to find any leaks. Both of these run fully clean. So, the "easy" tools can't find it.

3. Following points 1 and 2, no maintainer familiar with the codebase has ever seen leaky behavior. Some of us run a build of Ghostty, working full time in a terminal, for weeks, and memory is stable.

4. Our Discord has ~30K users, and within it, we only have one active user who periodically gets a large memory issue. They haven't been able to narrow this down to any specific reproduction and they aren't familiar enough with the codebase to debug it themselves, unfortunately. They're trying!

To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users. That's why the discussion is open and we're soliciting input. I even spent about an hour today on the latest feedback (posted earlier today) trying to use that information to narrow it down. No dice, yet.

If anyone has more info, we'd love to find this. :)


This illustrates the difficulty of maintaining a separation between bugs and discussions:

> To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users

In this case it seems you believe a bug exists, but it isn't sufficiently well-understood and actionable to graduate to the bug tracker.

But the threshold of well-understood and actionable is fuzzy and subjective. Most bugs, in my experience, start with some amount of investigative work, and are actionable in the sense that some concrete steps would further the investigation, but full understanding is not achieved until very late in the game, around the time I am prototyping a fix.

Similarly the line between bug and feature request is often unclear. If the product breaks in specific configuration X, is it a bug, or a request to add support for configuration X?

I find it easier to have a single place for issue discussion at all stages of understanding or actionability, so that we don't have to worry about distinctions like this that feel a bit arbitrary.


Is the distinction arbitrary? It sounded like issues are used for clear, completable jobs for the maintainers. A mysterious bug is not that. The other work you describe is clearly happening, so I'm not seeing a problem with this approach other than its novelty for users. But to me it looks both clearer than the usual "issue soup" on a popular open source project and more effective at using maintainer time, so next time I open-source something I'd be inclined to try it.

Some people see "bug tracker" and think "a vetted report of a problem that needs fixing", others see "bug tracker" and think "a task/todo list of stuff ready for an engineer to work on"

Both are valid, and it makes sense to be clear about what the team's view is.


Agreed. Honestly, I think of those as two very different needs that should have very different systems. To me a bug tracker is about collecting user reports of problems and finding commonalities. But most work should be driven by other information.

I think the confusion of bug tracking with work tracking comes out of the bad old days where we didn't write tests and we shipped large globs of changes all at once. In that world, people spent months putting bugs in, so it makes sense they'd need a database to track them all after the release. Bugs were the majority of the work.

But I think a team with good practices that ships early and often can spend a lot more time on adding value. In which case, jamming everything into a jumped-up bug tracker is the wrong approach.


I think these are valid concerns for a project maintainer to think through when managing a chosen solution, but I don't think there is a single correct solution. The "correct", or likely least bad, solution depends on the specific project and the tools available.

For bug reports, always using issues for everything also requires you to decide how long an issue should stay open before it is closed out when it can't be reproduced (if you're trying to keep a clean issue list). That can lead to discussion fragmentation: new reports start coming in that need to be filed, but not just anyone can manage issue states, so a new issue gets created.

From a practical standpoint, they have 40 pages of open discussion in the project and 6 pages of open issues, so I get where they're coming from. The GH issue tracker is less than stellar.


I have one bit of knowledge that might be useful, which I learned from debugging/optimizing Emacs.

macOS's Instruments tool only checks for leaks when it can track allocations, and its tracking is limited to ~256 frames of stack depth. For recursive calls or very deep stacks (Emacs), some allocations aren't tracked, and only after setting the malloc history flags [0] did I start seeing some results (and leaks).
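
For anyone else chasing something similar, the incantation is roughly this (a sketch, assuming the standard MallocStackLogging environment variables, which is what the linked doc [0] covers):

    # enable full allocation history before launching the app
    MallocStackLogging=1 MallocStackLoggingNoCompact=1 \
        /Applications/Emacs.app/Contents/MacOS/Emacs
    # then, while it's running, ask who allocated the suspicious blocks
    malloc_history <pid> -allBySize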

Another place I'm investigating (for Emacs) is that the AppKit lifecycle doesn't actually align with the Emacs lifecycle, so leaks happen on the AppKit side that have ZERO to do with the application itself. That problem seems to manifest mostly on high-end specs (multiple HiDPI displays with high variable refresh rates, a powerful chip, etc.).

Probably nothing you haven't investigated yet, but it is similar to the ghost (pun intended) I've been looking for.

[0]: https://developer.apple.com/library/archive/documentation/Pe...


I’ve been a very happy user throughout 2025, apart from some edge cases around the terminal not working on remote shells. I haven’t seen any memory leaks, but I wanted to say I appreciate this detailed response.

In my experience, the remote shell weirdness is usually because the remote host doesn’t recognise Ghostty’s TERM=xterm-ghostty value. Fixed by either copying over a terminfo entry for it, or setting TERM=xterm-256color before ssh’ing: https://ghostty.org/docs/help/terminfo
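
Concretely, both workarounds look something like this (the infocmp/tic one-liner is, I believe, what the linked docs suggest):

    # copy Ghostty's terminfo entry to the remote host once
    infocmp -x xterm-ghostty | ssh remote-host -- tic -x -
    # or fall back to a widely known TERM for this session only
    TERM=xterm-256color ssh remote-host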

The terminfo database is one of those thankless xkcd dependencies. In this case, it's been thanklessly maintained since forever by Thomas Dickey.

https://xkcd.com/2347/


I spotted Ghostty using 20GB+ of memory a few days ago on macOS (according to Activity Monitor). I went through all my tmux sessions and killed everything; it was still 20GB+, so I restarted Ghostty. If I see it happen again, I'll take some notes.

Complete speculation, but does tmux use the xterm alternative screen buffer? I can see a small bug in that causing huge memory leaks, but not showing up in testing.

On some level, that's impressive. Any idea of how long Ghostty was alive? Maybe this a new feature where Ghostty stores LLM model parameters in the terminal scrollback history? /s

Not sure which parts of this are sarcastic, but it was probably running for a few weeks; high variance on that estimate, though. I was running 5+ Claude Code instances and a similar number of vim instances.

Is it possible for Ghostty to figure out how much memory its child processes (or tabs) are using? If so maybe it would help to surface this number on or near the tab itself, similar to how Chrome started doing this if you hover over a tab. It seems like many of these stem from people misinterpreting the memory number in Activity Monitor, and maybe having memory numbers on the tabs would help avoid that.
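
As a rough sketch of the accounting involved (the $TAB_PGID variable is hypothetical; this assumes each tab's shell runs in its own process group):

    # sum the resident set size of every process in the tab's process group
    ps -axo pgid=,rss= | awk -v pg="$TAB_PGID" '$1 == pg {sum += $2} END {print sum " KiB"}'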

Regarding point 4: why should the user have to be familiar with the codebase to investigate it? Shouldn't they just create a memory dump and send it to the dev team?

They don't have to be, but without a reproduction for maintainers, it's up to the end users to provide enough information for us to track it down, and this user hasn't been able to yet.

The point is to reduce reported issues from non-maintainers to as close to 0 as possible. This does that.

I also see Ghostty consume a massive amount of memory and periodically need to restart it.

You might want to ask your user who can reproduce it to try heaptrack. It tracks allocations, whether they leak or not. If that doesn't find anything, check the few other ways that a program can acquire memory, such as mmap() calls and whatever else the platform documentation tells you.

Memory usage is not really difficult to debug usually, tbh.
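
For the record, usage is about as simple as it gets (the output filename below is illustrative; heaptrack names the file after the binary and pid):

    # record every allocation while reproducing the issue
    heaptrack ghostty
    # afterwards, inspect peak contributions and leaked allocations
    heaptrack_gui heaptrack.ghostty.12345.zst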


Valgrind won’t show you leaks where you (or a GC) still holds a reference. This could mean you’re holding on to large chunks of memory that are still referenced in a closure or something. I don’t know what language or anything about your project, but if you’re using a GC language, make sure you disable GC when running with valgrind (a common mistake). You’ll see a ton of false positives that the GC would normally clean up for you, but some of those won’t be false positives.

Ghostty is written in Zig.

It will, but they will be abbreviated (only the total amount shown, not the individual stack traces) unless you ask it to show them in full.
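
i.e. something like:

    # ask valgrind for full records, including still-reachable blocks
    valgrind --leak-check=full --show-leak-kinds=all ./ghostty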

I’m sure they would appreciate a report, as it doesn’t seem that it can be reproduced yet.

btw, is it just me, or is there any justification for anyone, including a developer, to need more than 8GB of RAM in a laptop? I don't see functionality as having changed in the last 15 years.

For me, only Rust compilation necessitates more RAM. But, I assume devs just do RAM heavy dev work on a server over ssh.


There are all the usual "$APPLICATION is a memory hog" complaints, for one.

In the SWE world, dev servers are a luxury that you don't get in most companies, and most people use their laptops as workstations. Depending on your workflow, you might well have a bunch of VMs/containers running.

Even outside of SWE world, people have plenty of use for more than 8GiB of RAM. Large Photoshop documents with loads of layers, a DAW with a bazillion plugins and samples, anything involving 4k video are all workloads that would struggle running on such a small RAM allowance.


This depends on industry. Around here, working locally on laptop is a luxury, and most devs are required to treat their laptop like a thin client.

Of course, being developer laptops, they all come with 16 gigs of RAM. In contrast, the remote VMs where we do all of the actual work are limited to 4GiB unless we get manager and IT approval for more.


Interesting. I required all my devs to use local VMs for development. We've saved a fair bit on cloud costs.

> We've saved a fair bit on cloud costs

our company just went with the "server in the basement" approach, with every employee having a user account (no VM or Docker separation, just normal file permissions). Sure, it sounds like the 80s, but it works really well. Remote access is over WireGuard, uptime is similar to or better than cloud, and sharing the same beefy CPUs works well and gives good utilization. Running jobs that need hundreds of GB of RAM isn't an issue as long as you respect others' needs and don't hog the RAM all day. And in amortized cost per employee it's dirt cheap. I only wish we had more GPUs.


> Interesting. I required all my devs to use local VMs for development.

It doesn’t work when you’re developing on a large database, since it won’t fit. Database (and data warehouse) development has been held back from modern practices just for this reason.


Current job used to let us run containers locally, but they decided to wrap first Docker, and then Podman, in "helper" scripts. These broke regularly and became too much overhead to maintain, so we are mandated to do local dev but use a dev k8s cluster for any level of testing that is more than unit tests and requires a DB.

A real shame, as running local Docker/Podman for Postgres was fine when you just ran the commands.


I find this quite surprising! What benefit does your org accrue by mandating that the db instance used for testing is centralised? Where I am, the tests simply assume that there’s a database available on a certain port. docker-compose.yml makes it easy to spin this up for those so inclined. At that stage it’s immaterial whether it’s running natively, or in docker, or forwarded from somewhere else. Our tests stump up all the data they need and tear down the db afterwards. In contrast, I imagine that a dev k8s cluster requires some management and would be a single point of failure.
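
For the curious, the whole thing is about this much YAML (the image and credentials here are just placeholders):

    # docker-compose.yml -- throwaway database for the test suite
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: test
        ports:
          - "5432:5432"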

I really don't understand why they do what they do.

Large corp gotta large corp?

My guess is that providing the ability to pull containers means you can run code that they haven't explicitly given permission for, and the laptop scanning tools can't hijack them?


For many companies, IP isn’t allowed to leave environments controlled by the company, which employee laptops are not.

Yes, zero latency typing in your local IDE on a laptop sounds like the dream.

In enterprise, we get shared servers with constant connection issues, performance problems, and full disks.

Alternatively we can use Windows VMs in Azure, with network attached storage where "git log" can take a full minute. And that's apparently the strategic solution.

Not to mention that in Azure 8 CPUs gets you four physical cores of a previous gen server CPU. To anyone working with 4 CPUs or 2 physical cores: good luck.


Browser + 2 VS Code windows + 4 Docker containers + MS Teams + Postman + MongoDB Compass

Sure it is bloated, but it is the stack we have for local development


> But, I assume devs just do RAM heavy dev work on a server over ssh.

This assumption is wrong. I compile stuff directly on my laptop, and so do a lot of other people.

Also, even if nobody ran compilers locally, there is still stuff like rustc, clangd, etc. which take lots of RAM.


Chrome on my work laptop sits around 20-30GB all day every day.

I wonder if having less RAM would compel you to read, commit to long term memory, and then close those 80 tabs you have open.

The issue for me is that bookmarks suck. They don't store the state (where I was reading) and they reload the webpage so I might get something else entirely when I come back. They also kinda just disappear from sight.

If instead bookmarks worked like tab saving does, I would be happy to get rid of a few hundred tabs. Have them save the page and state like the tab saving mechanism does. Have some way to remind me of them after a week or month or so.

Combine that with a search function that can search contents as well as titles, and I'm changing habits ASAP.


Regarding wanting to preserve the current version of a page: I use Karakeep to archive those pages. I am sure there are other similar solutions such as downloading an offline version, but this works well for me.

I do this mostly for blog posts etc I might not get around to reading for weeks or months from now, and don't want them to disappear in the meantime.

Everything else is either a pinned tab (<5) or a bookmark (themselves shared when necessary on e.g a Slack canvas so the whole team has easy access, not just me).

While browsing, the rest of my tabs are transient and don't really grow. I even mostly use private browsing for research, and only bookmark (or otherwise save) pages I deem to be of high quality. I might have a private window with multiple tabs for a given task, but it is quickly reduced to the minimum necessary pages, and the whole private window is thrown away once the initial source-material gathering is done. This lets me turn off address-bar search engines and instead search only saved history and bookmarks.

I often see colleagues with the same many browser windows of many tabs each open struggling to find what they need, and ponder their methods.


I've started using Karakeep as well; however, I don't find its built-in viewer as seamless as a plain browser page. It also runs afoul of pages that combat bots, due to its headless Chrome.

Anyway, just strikes me as odd that the browsers have the functionality right there, it's just not used to its full potential.


Websites that are walled off behind obscure captcha don't do well in Karakeep for sure, but so far for me those are usually e-commerce sites or sites I don't return to anyway.

If I'm doing work that involves three different libraries, I'm not reading and committing to memory the whole documentation for each of those libraries. I might well have a few tabs with some of those libraries' source files too. I can easily end up with tens of tabs open as a form of breadcrumb trail for an issue I'm tracking down.

Then there's all the basic stuff — email and calendar are tabs in my browser, not standalone applications. Ditto the ticket I'm working on.

I think the real issue is that browsers need some lightweight "sleep" mechanism that sits somewhere between a live tab and just keeping the source in cache.


I wonder if a good public flogging would compel Chrome and web devs to have 80 tabs take up far less than a gigabyte of memory, like they should in a world where optimization wasn't wholesale abandoned under the assumption that hardware improvements would compensate for their laziness and incompetence.

The high memory usage is due to optimization: responsiveness, robustness and performance were improved by making each tab an independent process. And that's good. Nobody needs 80 tabs; that's what bookmarks are for.

"that's what bookmarks are for"

And if you are lucky, the content will still be there the next time.


Is there a straightforward way to have one process per tab in browsers without using significant amounts (O(n_tabs)) of memory?

There is no justification for that IMHO. The program text only needs to be in memory once. However, each process probably has its own instance of the JS engine, together with the website's heap data and the JIT-compiled code objects. That adds up.

I'd very much like a crash in one tab not to kill other tabs. And having per-tab sandboxing would be more secure, no?

What do you mean? All these features are provided by process per tab.

That's a weird assumption to make.

~10 projects in Cursor is 25GB on its own.

How much would it take up if there were less RAM available? A web browser with a bunch of tabs open but not active seems like the type of system that can increase RAM usage by caching, and decrease it by swapping (either logically at the application level, or by letting the OS actually swap).

The computer has 18GB of total RAM so I would hope that it’s already trying to conserve memory.

It’s kind of humorous that everyone interpreted the comment as complaining about Chrome. For all I know, it’s justified in using that much memory, or it’s the crappy websites I’m required to use for work with absurdly large heaps.

I really just meant that at least for work I need more than 8GB of RAM.


I do work off of a Chromebook with 8GB of RAM total, but I do keep an eye on how many tabs I have open.

You asked if there is a justification and then in the same post justified why you need it.

My post was about laptop RAM. I counted server-side RAM as a separate thing.

>But, I assume devs just do RAM heavy dev work on a server over ssh.

Why do you assume that? It's nice to do things locally sometimes, maybe even while having a browser open. It doesn't take much to go over 8GB.


With 32 GB I can run two whole Electron applications! Discord and Slack!

It's a life of luxury, I tell you.


Browsers can get quite bloated, especially if one is not in the habit of closing tabs or restarting it from time to time. IDEs, other development tools, and most Electron abominations are also not shy about guzzling memory.

The author says in the first link he only heard it reported twice, which I'm guessing is the latter two links (the two discussions)

Your second link looks like an X user trying to start a flamewar; the rest of the replies are hidden to me.


For those who have the issue.

I reported the issue in Discussions some time ago, but got no reaction/response.

I was able to reproduce the leak consistently. Finally, I took all the reports I'd made, the Ghostty sources, and Claude Code, and tried to fix it.

For the first couple of weeks there were no leaks at all; now it has started again, but at only 1/10 of the rate it was before.

https://github.com/ghostty-org/ghostty/discussions/9786 There are some logs and a Claude Code review md file that might be useful.

Hope it will help someone investigate further.


It can also be remarkably picky about GPUs, even otherwise well-supported integrated GPUs, but any discussion of this is declared a GTK problem (or an NVIDIA problem, even on Intel).

It's not clear to me why I'd want to use Ghostty over WezTerm or Kitty, TBH. Ghostty certainly trends on social media, but that's a negative signal for me.

Seems like the contributors don't feel like it's clear enough yet to make an actionable issue and needs more discussion. Are you a contributor?

Details are still emerging; an update in the last hour was that at least 5 different hacking groups were in Ubisoft's systems, and yeah, some might have got in via bribes rather than MongoDB https://x.com/vxunderground/status/2005483271065387461


I’ll give you $1000 to run Mongo.


Both Claude Pro and Google Antigravity free tier have Opus 4.5


If you want to add custom LSPs, they need to be wrapped in a Claude Code plugin, which is where the little bit of actual documentation can be found: https://code.claude.com/docs/en/plugins-reference


There are two other sites for the time.nist.gov service, so it'll be okay.

Probably more interesting is how you get a tier 0 site back in sync: NIST rents out these cyberpunk-looking units you can use to get your local frequency standards up to scratch for ~$700/month https://www.nist.gov/programs-projects/frequency-measurement...


What happens in the event all the sites for time.nist.gov go down? Is it included in the spec?

Also thank you for that link, this is exactly the kind of esoteric knowledge that I enjoy learning about


Most high-availability networks use pool.ntp.org or vendor-specific pools (e.g., time.cloudflare.com, time.google.com, time.windows.com). These systems would automatically switch to a surviving peer in the pool.

Many data centers and telecom hubs use local GPS/GNSS-disciplined oscillators or atomic clocks and wouldn’t be affected.

Most laptops, smartphones, tablets, etc. would stay accurate enough for days before drift affected things, for the most part.

Kerberos typically requires clocks to be within 5 minutes to prevent replay attacks, so they’d probably be OK; see the sketch below.
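
That five-minute tolerance is just a stock default; in MIT krb5, for example, it's one line of config:

    # /etc/krb5.conf -- 300 seconds is the default skew allowance
    [libdefaults]
        clockskew = 300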

Sysadmins would need to update hardcoded NTP configurations to point to secondary servers.
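
For a sense of scale, that fallback is a couple of lines in chrony syntax (a sketch; pick pools appropriate to your environment, and don't mix leap-smeared sources like time.google.com with non-smeared ones):

    # /etc/chrony/chrony.conf -- public pools as a fallback for NIST
    pool pool.ntp.org iburst
    server time.cloudflare.com iburst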

If timestamps were REALLY off, TLS certificates might fail, but that’s highly unlikely.

Databases could be corrupted due to failure of transaction ordering.

Financial exchanges are often legally required to use time traceable to a national standard like UTC(NIST). A total failure of the NIST distribution layer could potentially trigger a suspension of electronic trading to maintain audit trail integrity.

Modern power grids use Synchrophasors that require microsecond-level precision for frequency monitoring. Losing the NIST reference would degrade the grid's ability to respond to load fluctuations, increasing the risk of cascading outages.


Great list! I just double-checked the CAT timekeeping requirements [1], and the requirement is sync to NIST specifically. So, a subset of all UTC sources.

You don’t need to actually sync to NIST. I think most people PTP/PPS to a GPS-connected Grandmaster with high quality crystals.

But one must report deviations from NIST time, so CAT Reporters must track it.

I think you are right — if there is no NIST time signal then there is no properly auditable trading, and thus no trading. MiFID has similar stuff, but I am unfamiliar with it.

One of my favorite nerd possessions is my hand-signed letter from Judah Levine with my NIST Authenticated NTP key.

[1] https://www.finra.org/rules-guidance/rulebooks/finra-rules/6...


Considering how many servers are in existence, probably the exact same procedure for starting a brand new one?


I must have one of those units oh my god


Someone needs to sell replicas (forgive the pun) of these.


It's like a toaster oven, but it toasts time.


To get production-level performance, you do need RDMA-capable network hardware.

However, vLLM supports multi-node clusters over normal Ethernet too: https://docs.vllm.ai/en/stable/serving/parallelism_scaling/#...
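
Roughly like this (the model name and parallelism sizes are just for illustration; the nodes first join a Ray cluster, as the docs describe):

    # on the head node:   ray start --head
    # on each worker:     ray start --address=<head-ip>:6379
    # then, from the head node:
    vllm serve meta-llama/Llama-3.1-70B-Instruct \
        --tensor-parallel-size 8 \
        --pipeline-parallel-size 2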

