Hacker News | surajrmal's comments

Google sponsors the Python Software Foundation, per this page: https://www.python.org/psf/sponsors/

Kind of crazy that the top-level "Visionary Sponsor" tier is a donation level of only $160k, and there are zero sponsors at the $100k level. I was also surprised to see Netflix at $5k and Jane Street at $17k. Maybe they should give more, but there are a lot of names absent entirely, and that says more.

While certain teams within Google use Rust by default, I'm not sure Rust is anywhere close to C++ in terms of new lines of code committed per week.

For Android specifically, by Q3 of last year more new lines of Rust were being added per week than new lines of C++: https://security.googleblog.com/2025/11/rust-in-android-move...

Sharing a session is independent of the terminal emulator itself; use tmux for that. There are a handful of good terminal emulators: WezTerm, Alacritty, and Kitty are popular. I use a tiling window manager, so I prefer to avoid tabs and use Alacritty for that reason.
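For the session-sharing case, a minimal tmux sketch (the session name `pair` is arbitrary):

```shell
# On the shared machine, start a detached, named session:
tmux new-session -d -s pair

# Anyone logged in as the same user can attach to it;
# all attached clients see and control the same terminals:
tmux attach-session -t pair

# Attach read-only if a viewer shouldn't be able to type:
tmux attach-session -t pair -r
```

This works the same under any terminal emulator, which is the point: the multiplexing lives in tmux, not the emulator.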

Why do you think tripling the memory usage of a program is an acceptable tradeoff? It's not just GC pauses that are problematic with GC languages; some software needs to run on systems with less than 4 GiB of RAM.

So anything that uses a less popular language is considered over-engineering? Distros already support lots of different languages, and there are likely other packages built with Zig already.

Google runs everything on their TPUs, which are substantially less costly to make and use less energy to run. While I'm sure OpenAI and others are bleeding money by subsidizing things, I'm not entirely sure that's true for Google (despite it actually being easier for them if they wanted to).

I don't quite understand how that is your takeaway. Could you clarify?

It's all about where the stable ABI exists. You can do anything in practice, but if you stray off the happy path it will result in pain. On PC OSes, everything used the C ABI (or, on Linux, the syscall ABI). On Android the ABI is Java-based, and on iOS it's ObjC/Swift-based. These are deliberate choices, and while they make some use cases more difficult, they are optimized for the use cases the companies care about. I'm personally partial to a language-agnostic IPC boundary as the ABI, but that has its own cons as well.

You’re conflating ABI with the primary language for frontend development.

Android, iOS and “PC” all expose the C ABI at the lower levels of their stacks. They just have different languages available for their primary SDKs.

Windows doesn’t primarily use a C API, for example, so your PC example is wrong. Mac shares the same frameworks as iOS, so it’s no more Swift/ObjC-centric than iOS is. It’s just that you can’t really ship Electron (JIT) or easily use Qt (licensing) on iOS. But you can just as happily develop entire apps in the same C as you could on a “PC”. Case in point: Blender builds for iOS.

Android is definitely the most out-there of the platforms, because the jump from JNI to the Java SDK is quite large, but that is completely orthogonal to what you’re incorrectly claiming. Your comment conflates completely opposite ends of the stack; if we go by your definition, Android is Linux just as much as desktop Linux distros are.


Just because you can use it from C doesn't mean it's a C ABI. You can do almost anything from C, but the semantics of the APIs require additional work to use correctly. Just because Go can interface with C doesn't mean C APIs have a Go ABI, right?

Which of the platforms (other than Android) require you to do extra work to talk to the underlying system API with a C ABI?

iOS certainly doesn’t require any extra work, and even exposes several C APIs for frameworks as well (not even counting ObjC).

And even in the case of Android, the system provides JNI facilities so it’s not foreign to the system.


The OS ABI follows the language used to write the OS, so OP is kind of right.

While Windows has moved away from pure C, and nowadays has ABIs across C, C++, .NET, COM, WinRT interfaces, you can still program Windows applications in straight C.

The caveat is to only use APIs up to Windows XP, and Petzold's book to follow along.


They’re describing a higher-level API that may have a separate ABI from the lower-level system.

But like I said, they’re conflating the lower level ABI with the higher level API/ABI.

All the systems they mentioned have an equal C ABI available for talking to the core system.


No they don't: you cannot use C on Android outside the NDK, and even with the NDK you need to go through JNI for 80% of the OS APIs.

This is the only set of APIs exposed via a C API to Android applications:

https://developer.android.com/ndk/guides/stable_apis

You can argue that JNI technically is exposed via C, yeah if you ignore the JVM/ART semantics that go along with it.

Likewise on Windows, technically you can use bare-bones structs with function pointers to deal with COM, use CLR COM APIs to call .NET via reflection, and a similar story with WinRT, but it is not going to be fun, and then there are the type libraries that have to be manually created.


I think we’re talking past each other, and you’re largely repeating what I already covered.

My original response delineated between levels of the stack, and also already called out that Android requires you to use the NDK/JNI to use the C ABI.

I also specifically called out Windows as well.

My point is that the original person's distinction of what supports a C ABI conflates different levels of the stack. It’s not a useful distinction when describing the platforms, and the Windows case is why I put “PC” in quotes, since desktop semantics vary quite a bit as well.

A more useful delineation of why it's harder on mobile to just do an asm hello world is that mobile dev doesn’t really have a concept of CLIs without jumping through some hoops first. So you have to pipe such a thing through some kind of UI framework as well.


If userspace needs to use the NDK/JNI ABI to call into the Linux C ABI, then naturally the OS ABI isn't the C ABI, by definition.

Why not? The NDK/JNI calls are still in user space themselves. So what delineation are you trying to make here?

How userspace applications talk to the kernel subsystems in a legal way, without hacking around the operating system architecture.

Small correction.

On PC, MS-DOS did not use C but rather interrupts, and there was no common C ABI.

On OS/2, a mix of C ABI and SOM, with C, C++ and Smalltalk as main languages.

Windows started only with the C ABI, nowadays it is a mix of C, C++, .NET, COM, WinRT, depending on the subsystem.


The best of both worlds is hosting the binary independently of git in some cloud storage, with a script that fetches it (and an entry for it in .gitignore). Git itself doesn't handle binaries very well, and updating the binary will bloat your clone speed/size, since git effectively stores all versions.

The specific use case here is someone storing the binary because they're _avoiding_ updates.

Or just use Git LFS.
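For reference, the Git LFS route looks roughly like this (assuming `git-lfs` is installed; `*.bin` is just an example pattern):

```shell
git lfs install            # set up the LFS hooks for this repo
git lfs track "*.bin"      # store matching files as LFS pointers
git add .gitattributes     # the tracking rule itself is versioned
git add tool.bin           # committed as a small pointer, not the blob
```

The repo history then only carries small pointer files, while the actual blobs live on the LFS server.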

I think this is happening by raising the floor for job roles that are largely boilerplate work. If you are on the more skilled side or work in more original/niche areas, AI doesn't really help too much. I've only been able to use AI effectively for scaling refactors, not much in feature development; it often just slows me down when I try to use it. I don't see this changing any time soon.

