Hacker News | ori_b's comments

On only one laptop?

> On only one laptop?

That's what a good benchmark looks like. From ancient wisdom (the Linux Benchmarking HOWTO, section 5.3, "Proprietary hardware/software"):

"A well-known processor manufacturer once published results of benchmarks produced by a special, customized version of gcc. Ethical considerations apart, those results were meaningless, since 100% of the Linux community would go on using the standard version of gcc. The same goes for proprietary hardware. Benchmarking is much more useful when it deals with off-the-shelf hardware and free (in the GNU/GPL sense) software."


The oddity is that Windows is slower everywhere but on this one specific kind of laptop, as far as I understand. If it's not a quirk of the laptop, Windows would be better everywhere.

My prediction: If we can successfully get rid of most software engineers, we can get rid of most knowledge work. Given the state of robotics, manual labor is likely to outlive intellectual labor.

I would have agreed with this a few months ago, but something I've learned is that the ability to verify an LLM's output is paramount to its value. In software, you can review its output and add tests, on top of other adversarial techniques, to verify the output immediately after generation.

With most other knowledge work, I don't think that is the case. Maybe actuarial or accounting work, but most knowledge work exists at a cross section of function and taste, and the latter isn't an automatically verifiable output.


I also believe this - I think it will probably just disrupt software engineering and any other digital medium with mass internet publication (i.e. things RLVR can use). For the short-term future it seems to need a lot of data to train on, and no other profession has posted the same amount of verifiable material. Open source altruism has disrupted the profession in the end; just not in the way people first predicted. I don't think it will disrupt most knowledge work, for a number of reasons. Most knowledge professions have "credentials" (i.e. gatekeeping), and they can see what is happening to SWEs and are acting accordingly. I'm hearing it firsthand, at least locally, in things like law, even accounting, etc. Society will ironically respect these professions more for doing so.

Any data, verifiability, rules of thumb, tests, etc are being kept secret. You pay for the result, but don't know the means.


I mean, law and accounting usually have a "right" answer that you can verify against. I can see a test data set being built for most professions. I'm sure open source helps with programming data, but I doubt that's even the majority of their training. If you have a company like Google, you could collect data on decades of software work in all its dimensions from your workforce.

It's not about invalidating your conclusion, but I'm not so sure about law having a right answer. At a very basic level, like hypothetical conduct used in basic legal training materials or MCQs, or in criminal/civil code based situations in well-abstracting Roman law-based jurisdictions, definitely. But the actual work, at least for most lawyers, is to build on many layers of such abstractions to support your/your client's viewpoint. And that level is already about persuasion of other people, not having the "right" legal argument or applying the most correct case found. And this part is not documented well, and approaches change a lot, even if the law remains the same. Think of family law or the law of succession - they do not change much over centuries, but every day, worldwide, millions of people spend huge amounts of money and energy on finding novel ways to turn those same paragraphs to their advantage and put their "loved" ones and relatives in a worse position.

Not really. I used to think more generally with the first generation of LLMs, but given all progress since o1 is RL based, I'm thinking most disruption will happen in open productive domains and not closed domains. Speaking to people in these professions, they don't think SWEs have any self-respect, and so in your example of law:

* Context is debatable/result isn't always clear: The way to interpret that/argue your case is different (i.e. you are paying for a service, not a product)

* Access to vast training data: It's very unlikely that they will train you and give you data about their practice, especially as they are already in a union-like structure/accreditation. It's like paying for a binary (a non-decompilable one) without source code (the result), rather than the source and the validation the practitioner used to get there.

* Variability of real world actors: There will be novel interpretations that invalidate the previous one as new context comes along.

* Velocity vs ability to make judgement: As a lawyer, I prefer to be paid more for less velocity, since it means less judgement/less liability/less risk overall for myself and the industry. Why would I change that, even at an individual level? Less of a tragedy of the commons here.

* Tolerance to failure is low: You can't iterate, get feedback and try again until "the tests pass" in a court room, unlike "code in a text file". You need to have the right argument the first time. AI/ML generally only works where the end cost of failure is low (i.e. it can try again and again to iron out error terms/hallucinations). It's also why I'm skeptical AI will do much in the real economy even with robots soon - failure has bigger consequences in the real world ($$$, lives, etc).

* Self employment: There is no tension between say Google shareholders and its employees as per your example - especially for professions where you must trade in your own name. Why would I disrupt myself? The cost I charge is my profit.

TL;DR: Gatekeeping, changing context, and arms-race behavior between participants/clients. Unfortunately I do think software, art, videos, translation, etc. are unique in that there are numerous examples online and the work has the property "if I don't like it, just re-roll" -> to me RLVR isn't that efficient - it needs volumes of data to build its view. Software, sadly for us SWEs, is the perfect domain for this; and we as practitioners made it that way through things like open source, TDD, etc., and giving it away free on public platforms in huge quantities.


"Given the state of robotics" reminds me a lot of what was said about llms and image/video models over the past 3 years. Considering how much llms improved, how long can robotics be in this state?

I have to think 3 years from now we will be having the same conversation about robots doing real physical labor.

"This is the worst they will ever be" feels more apt.


But robotics has had the means to do the majority of physical labour already - it's just not worth the money to replace humans, as human labour is cheap (and more flexible than robots).

With knowledge work being less high-paying, the physical labour supply should increase as well, which drops its price. This means it's actually less likely that the advent of LLMs will make physical labour more automated.


Robotics is coming FAST. Faster than LLM progress in my opinion.

Curious if you have any links about the rapid progression of robotics (as someone who is not educated on the topic).

My feeling with robotics was that the more challenging aspect would be making them economically viable, rather than simply the challenge of the task itself.


I mentioned the military in my reply to the sibling comment - that is the most ready example. What Anduril and others are doing today may be sloppy, but it's moving very quickly.

The question is how rapid the adoption is. The price of failure in the real world is much higher ($$$, environmental, physical risks) vs just "rebuild/regenerate" in the digital realm.

Military adoption is probably a decent proxy indicator - and they are ready to hand the kill switch to autonomous robots

Maybe. There, the cost of failure again is low. It's easier to destroy than to create. Economic disruption to workers will take a bit longer, I think.

Don't get me wrong; I hope that we do see it in physical work as well. There is more value to society there, and it consists of work that is risky and/or hard to do - and is usually needed (food, shelter, etc.). It also means that the disruption is an "everyone" problem rather than something that just affects those "intellectual" types.


That’s the deep irony of technology IMHO, that innovation follows Conway's law on a meta layer: White collar workers inevitably shaped high technology after themselves, and instead of finally ridding humanity of hard physical labour—as was the promise of the Industrial Revolution—we imitate artists, scientists, and knowledge workers.

We can now use natural language to instruct computers to generate stock photos and illustrations that would have required a professional artist a few years ago, discover new molecule shapes, beat the best Go players, build the code for entire applications, or write documents of various shapes and lengths—but painting a wall? An insurmountable task that requires a human to execute reliably, not even talking about the economics.


> If we can successfully get rid of most software engineers, we can get rid of most knowledge work

Software, by its nature, is practically comprehensively digitized, both in its code history and in its requirements.


Not cheap, unless that one specific model is going to be used across tens of millions of devices, with no updates, for the physical lifetime of the device.

Ah, that makes sense. Economies of scale. Thanks.

To quote a friend: "Glibc is a waste of a perfectly good stable kernel ABI."

Kind of funny to realize that the NT kernel ABI isn't even all that stable itself; it is just wrapped in a set of very stable userland exposures (Win32, UWP, etc.), and it's those exposures that Windows executables are relying on. A theoretical Windows PE binary that was 100% statically linked (and so directly contained NT syscalls) wouldn't be at all portable between different Windows versions.

Linux with glibc is the complete opposite; there really does exist old Linux software that static-links in everything down to libc, just interacting with the kernel through syscalls—and it does (almost always) still work to run such software on a modern Linux, even when the software is 10-20 years old.
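To make that concrete, here's a minimal sketch (mine, not the parent's) of what such a libc-free program boils down to. It assumes x86-64 and a GCC/Clang build along the lines of `gcc -nostdlib -static`; the only interface it touches is the kernel's syscall ABI, which Linux keeps stable:

    /* hello.c - hypothetical example: talk to the Linux kernel directly
     * via the x86-64 syscall instruction, with no libc at all. */
    static long raw_syscall3(long nr, long a1, long a2, long a3) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void) {
        static const char msg[] = "hello from a raw syscall\n";
        raw_syscall3(1 /* __NR_write */, 1 /* stdout */, (long)msg, sizeof msg - 1);
        raw_syscall3(60 /* __NR_exit */, 0, 0, 0);
    }

The syscall numbers and semantics are the contract here, and they don't break - which is exactly the stability the parent is describing.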

I guess this is why Linux containers are such a thing: you’re taking a dynamically-linked Linux binary and pinning it to a particular entire userland, such that when you run the old software, it calls into the old glibc. Containers work, because they ultimately ground out in the same set of stable kernel ABI calls.

(Which, now that I think of it, makes me wonder how exactly Windows containers work. I'm guessing each one brings its own NTOSKRNL that gets spun up under Hyper-V if the host kernel ABI doesn't match the guest?)


IIRC, Windows containers require that the container be built with a base image that matches the host for it to work at all (like, the exact build of Windows has to match). Guessing that’s how they get a ‘stable ABI’.

…actually, looks like it’s a bit looser these days. Version matrix incoming: https://learn.microsoft.com/en-us/virtualization/windowscont...


The ABI has been stabilised for backwards compatibility since Windows Server 2022, but is not stable for earlier releases.

Apparently there are three kinds of Windows containers: one using Hyper-V, and the others sharing the kernel (like Linux containers).

https://thomasvanlaere.com/posts/2021/06/exploring-windows-c...


> Kind of funny to realize, the NT kernel ABI isn’t even all that stable itself

This is not a big problem if it's hard/unlikely enough to write code that accidentally relies on raw syscalls. At least MS's dev tooling doesn't provide an easy way to bypass the standard DLLs.

> makes me wonder how exactly Windows containers work

I guess containers do the syscalls through the standard Windows DLLs like any regular userspace application. If it's a Linux container on Windows, probably through the WSL syscalls, which I guess are stable.


> NT kernel ABI isn’t even all that stable itself

Can you give an example where a breaking change was introduced in NT kernel ABI?


https://j00ru.vexillium.org/syscalls/nt/64/

(One example: hit "Show" on the table header for Win11, then use the form at the top of the page to highlight syscall 8c)


Changes in syscall numbers aren't necessarily breaking changes, as you're supposed to use ntdll.dll to call the kernel, not direct syscalls.

That was his point exactly.


The syscall numbers change with every release: https://j00ru.vexillium.org/syscalls/nt/64/

Syscall numbers shouldn't be a problem if you link against ntdll.dll.

So now you're talking about the ntdll.dll ABI instead of the kernel ABI. ntdll.dll is not the kernel.

NTDLL is NT’s kernel ABI, not syscalls. Nothing on Windows uses syscalls to call the kernel.

NTDLL isn’t some higher level library. It’s just a series of entry points into NT kernel.


Yes, the fact that functions in NTDLL issue a syscall instruction is a platform-specific implementation detail.

...isn't that the point of this entire subthread? The kernel itself doesn't provide the stable ABI, userland code that the binary links to does.

No. On NT, the kernel ABI isn't defined by the syscalls but by NTDLL. Win32 and all other APIs are wrappers on top of NTDLL, not syscalls. Syscalls are how NTDLL implements kernel calls behind the scenes; it's an implementation detail. The original point of the thread was about Win32, UWP and other APIs that build a new layer on top of NTDLL.

I argue that NT doesn't break its kernel ABI.
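For illustration, here's roughly what "calling the kernel" looks like from user code on Windows (a hedged sketch of mine, not from the thread; it assumes a Win32 C toolchain and uses NtQuerySystemInformation, one of the documented ntdll exports, as the example):

    /* Windows programs reach the kernel through ntdll.dll's exported entry
     * points; the syscall numbers behind those exports can change with
     * every build, but the exports themselves stay compatible. */
    #include <windows.h>
    #include <winternl.h>

    typedef NTSTATUS (NTAPI *NtQuerySystemInformation_t)(
        SYSTEM_INFORMATION_CLASS, PVOID, ULONG, PULONG);

    int main(void) {
        HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
        NtQuerySystemInformation_t NtQSI = (NtQuerySystemInformation_t)
            GetProcAddress(ntdll, "NtQuerySystemInformation");

        SYSTEM_BASIC_INFORMATION sbi;
        ULONG len = 0;
        /* The stable contract is this export's name and behaviour, not
         * whatever syscall number it issues underneath. */
        if (NtQSI && NtQSI(SystemBasicInformation, &sbi, sizeof sbi, &len) == 0)
            return 0;
        return 1;
    }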


NTDLL APIs are very stable[0] and you can even compile and run x86 programs targeting NT 3.1 Build 340[1], which will still work on Win11.

[0] as long as you don't use APIs they decided to add and remove in a very short period (longer read: https://virtuallyfun.com/2009/09/28/microsoft-fortran-powers...)

[1] https://github.com/roytam1/ntldd/releases/tag/v250831


macOS and iOS too — syscalls aren’t stable at all, you’re expected to link through shared library interfaces.

> No

...and you go on to not disagree with me at all? Why comment then?


Docker on Windows isn't simply a glorified virtual machine running Linux (aka the Linux subsystem v2).

At least glibc uses versioned symbols. Hundreds of other widely-used open source libraries don't.

Versioned glibc symbols are part of the reason that binaries aren't portable across Linux distributions and time.

Only because people aren't putting in the effort to build their binaries properly. You need to link against the oldest glibc version that has all the symbols you need, and then your binary will actually work everywhere(*).

* Except for non-glibc distributions of course.


> Only because people aren't putting in the effort to build their binaries properly.

Because Linux userland is an unmitigated clusterfuck of bad design that makes this really really really hard.

GCC/Clang and glibc make it almost impossible to do this on their own. The only way you can actually do this is:

1. Create a userland container from the past, or
2. Use Zig, which moved oceans and mountains to make it somewhat tractable.

It's awful.


We are using Nix to do this. It’s only a few lines of code. We build a gcc 14 stdenv that uses an old glibc.

But I agree that this should just be a simple target SDK flag.

I think the issue is that the Linux community is generally hostile towards proprietary software, and it's less of an issue for FLOSS because it can always be compiled against the latest.


But to link against an old glibc version, you need to compile on an old distro, on a VM. And you'll have a rough time if some part of the build depends on a tool too new for your VM. It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.

Check out `zig cc`. It lets you target specific glibc versions. It's a pretty amazing C toolchain.

https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...


It's actually doable without an old glibc, as demonstrated by the Autopackage project: https://github.com/DeaDBeeF-Player/apbuild

That never took off though; containers are easier. With distrobox and other tools this is quite easy, too.


> It would be infinitely simpler if one could simply 'cross-compile' down to older symbol versions, but the tooling does not make this easy at all.

It's definitely not easy, but it's possible: using the `.symver` assembly (pseudo-)directive you can specify the version of the symbol you want to link against.
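For anyone curious, a small sketch of that trick (my example, not the parent's; GLIBC_2.2.5 is the x86-64 baseline version and will differ on other architectures):

    /* Bind the reference to memcpy against the old GLIBC_2.2.5 version
     * instead of the current default (memcpy@@GLIBC_2.14 on x86-64), so
     * the resulting binary also loads on older distributions. Note the
     * compiler may inline small memcpys, in which case no symbol
     * reference is emitted at all. */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "portable", 9);   /* resolves to memcpy@GLIBC_2.2.5 */
        return dst[0] == 'p' ? 0 : 1;
    }

You'd still have to repeat this for every versioned symbol you use, which is why people reach for old sysroots, containers, or zig cc instead.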


Huh? Bullshit. You could totally compile and link in a container.

Ok, so you agree with him except where he says “in a VM” because you say you can also do it “in a container”.

Of course, you both leave out that you could do it “on real hardware”.

But none of this matters. The real point is that you have to compile on an old distro. If he left out “in a VM”, you would have had nothing to correct.


I'm not disagreeing that glibc symbol versioning could be better. I raised it because this is probably one of the few valid use cases for containers where they would have a large advantage over a heavyweight VM.

But it's like complaining that you might need a VM or container to compile your software for Win16 or Win32s. Nobody is using those anymore. Nor really old Linux distributions. And if they do, they're not really going to complain about having to use a VM or container.

As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.

But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. Go figure.


> But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task.

But does the lack of a stable ABI have any (negative) effect on the reliability of the platform?


Only for people who want to use it as a desktop replacement for Windows or MacOS I guess? There are no end of people complaining they can't get their wifi or sound card or trackpad working on (insert-obscure-Linux-distribution-here).

Like many others, I have Linux servers running over 2000-3000 days uptime. So I'm going to say no, it doesn't, not really.


>As C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.

You must really be behind the times. Arch and Gentoo users wouldn't complain because an old game doesn't run. In fact the exact opposite would happen. It's not implausible for an Arch or Gentoo user to end up compiling their code on a five hour old release of glibc and thereby maximize glibc incompatibility with every other distribution.



If it requires effort to be correct, that's a bad design.

Why doesn't glibc use the version tag to do the appropriate mapping?


I think even calling it a "design" is dubious. It's an attribute of these systems that arose out of the circumstance, nobody ever sat down and said it should be this way. Even Torvalds complaining about it doesn't mean it gets fixed, it's not analogous to Steve Jobs complaining about a thing because Torvalds is only in charge of one piece of the puzzle, and the whole image that emerges from all these different groups only loosely collaborating with each other isn't going to be anybody's ideal.

In other words, the Linux desktop as a whole is a Bazaar, not a Cathedral.


> In other words, the Linux desktop as a whole is a Bazaar, not Cathedral.

This was true in the 90s, not the 2020s.

There are enough moneyed interests that control the entirety of Linux now. If someone at Canonical or Red Hat thought a glibc version translation layer (think WINE, but for running software targeting Linux systems from before the last breaking glibc change) was a good enough idea, it could get implemented pretty rapidly. Instead of Win32+WINE being the only stable ABI on Linux, Linux could have the most stable ABI on Linux.


I don’t understand why this is the case, and would like to understand. If I want only functions f1 and f2 which were introduced in glibc versions v1 and v2, why do I have to build with v2 rather than v3? Shouldn’t the symbols be named something like glibc_v1_f1 and glibc_v2_f2 regardless of whether you’re compiling against glibc v2 or glibc v3? If it is instead something like “compiling against vN uses symbols glibc_vN_f1 and glibc_vN_f2” combined with glibc v3 providing glibc_v1_f1, glibc_v2_f1, glibc_v3_f1, glibc_v2_f2 and glibc_v3_f2… why would it be that way?

> why would it be that way?

It allows (among other things) the glibc developers to change struct layouts while remaining backwards compatible. E.g. if function f1 takes a struct as argument, and its layout changes between v2 and v3, then glibc_v2_f1 and glibc_v3_f1 have different ABIs.
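A sketch of what that looks like from the library author's side (hypothetical libfoo of my own invention, not glibc's actual code; building the shared object also needs a linker version script declaring the LIBFOO_1.0 and LIBFOO_2.0 nodes):

    /* Two implementations coexist in one .so: binaries linked against the
     * old release keep the v1 struct layout, new links bind to the default
     * ('@@') v2 symbol with the new layout. */
    struct foo_v1 { int a; };
    struct foo_v2 { int a; long b; };   /* layout changed between releases */

    int foo_impl_v1(struct foo_v1 *f) { return f->a; }
    int foo_impl_v2(struct foo_v2 *f) { return f->a + (int)f->b; }

    __asm__(".symver foo_impl_v1, foo@LIBFOO_1.0");    /* compat-only */
    __asm__(".symver foo_impl_v2, foo@@LIBFOO_2.0");   /* default version */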


Individual functions may have a lot of different versions. They only update them if there is an ABI change (so you may have e.g. f1_v1, f1_v2, f2_v2, f2_v3 as symbols in v3 of glibc), but there's no easy way to say 'give me v2 of every function'. If you compile against v3 you'll get f2_v3 and f1_v2, and so it won't work on v2.

Why are they changing? And I presume there must be disadvantages to staying on the old symbols, or else they wouldn’t be changing them—so what are those disadvantages?

> You need to link against the oldest glibc version that has all the symbols you need

Or at least the oldest one made before glibc's latest backwards incompatible ABI break.


Yeah and nothing ever lets you pick which versions to link to. You're going to get the latest ones and you better enjoy that. I found it out the hard way recently when I just wanted to do a perfectly normal thing of distributing precompiled binaries for my project. Ended up using whatever "Amazon Linux" is because it uses an old enough glibc but has a new enough gcc.

You can choose the version. There was apgcc from the (now dead) Autopackage project which did just that: https://github.com/DeaDBeeF-Player/apbuild

It's not at all straightforward; it should be the kind of thing that's just a compiler flag, as opposed to needing to restructure your build process to support it.

Yeah that's what I meant. I also came across some script with redefinitions of C standard library functions that supposedly also allows you to link against older glibc symbols. I couldn't make it work.

Any half-decent SDK should allow you to trivially target an older platform version, but apparently doing trivial-seeming things without suffering is not The Linux Way™.


> Hundreds of other widely-used open source libraries don't.

Correct me if I'm wrong but I don't think versioned symbols are a thing on Windows (i.e. they are non-portable). This is not a problem for glibc but it is very much a problem for a lot of open source libraries (which instead tend to just provide a stable C ABI if they care).


> versioned symbols are a thing on Windows

There are quite a few mechanics they use for that. The oldest one: call a special API function on startup, like InitCommonControlsEx, and other API functions will then resolve DLLs differently or behave differently. A similar tactic: require an SDK-defined magic number as a parameter to some initialization functions, with different magic numbers switching symbols from the same library; examples are WSAStartup and MFStartup (a small sketch of this follows below).

Around Win2k they did side-by-side assemblies, or WinSxS. Include a special XML manifest in an embedded resource of your EXE, and you can request a specific version of a dependent API DLL. The OS now keeps multiple versions internally.

Then there’re compatibility mechanics, both OS builtin and user controllable (right click on EXE or LNK, compatibility tab). The compatibility mode is yet another way to control versions of DLLs used by the application.

Pretty sure there’s more and I forgot something.
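As a concrete sketch of the magic-number tactic mentioned above (my example, using Winsock; link against ws2_32):

    /* The caller declares which Winsock ABI it was written against; the DLL
     * can keep serving programs that asked for 1.1 alongside ones asking
     * for 2.2 from the same entry points. */
    #include <winsock2.h>

    int main(void) {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)   /* request version 2.2 */
            return 1;
        /* ... socket calls would go here ... */
        WSACleanup();
        return 0;
    }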


> There’re quite a few mechanics they use for that. The oldest one, call a special API function on startup [...]

Isn't the oldest one... to have the API/ABI version in the name of your DLL? Unlike on Linux, which by default uses a flat namespace, in Windows land imports are nearly always identified by a pair of the DLL name and the symbol name (or ordinal). You can even have multiple C runtimes (MSVCR71.DLL, MSVCR80.DLL, etc) linked together but working independently in the same executable.


Linux can do this as well; the issue is that it just duplicates how many versions you need to have installed, and it's not that different in the limit from having a container anyway. The symbol versioning means you can just have the latest version of the library and it remains compatible with software built against old versions. (Especially because when you have multiple versions of a library linked into the same process you can wind up with all kinds of tricky behaviour if they aren't kept strictly separated. There are a lot of footguns in Windows around this, especially with the way DLLs work to allow this kind of separation in the first place.)

I did forget to mention something important. Since about Vista, Microsoft tends to replace or supplement C WinAPI with IUnknown based object-oriented ones. Note IUnknown doesn’t necessarily imply COM; for example, Direct3D is not COM: no IDispatch, IPC, registration or type libraries.

IUnknown-based ABIs expose methods of objects without any symbols exported from DLLs. Virtual method tables are internal implementation details, not public symbols. By testing SDK-defined magic numbers, like the SDKVersion argument of the D3D11CreateDevice factory function, the DLL implementing the factory function may create very different objects for programs built against different versions of the Windows SDK.


There’s also API Sets, where DLLs like api-win-blah-1.dll act as a proxy for another DLL both literally, with forwarder exports, and figuratively, with a system-wide in-memory hashmap between the API set and the actual DLL.

IIRC this is both for versioning, but also so some software can target Windows and Xbox OSes whilst “importing” the same api-set DLL? Caused me a lot of grief writing a PE dynamic linker once.

https://bookkity.com/article/api-sets


I only learned about glibc earlier today, when I was trying to figure out why the Nix version of a game crashes on SteamOS unless you unset some environ vars.

Turns out that Nix is built against a different version of glibc than SteamOS, and for some reason, that matters. You have to make sure none of Steam's libraries are on the path before the Nix code will run. It seems impractical to expect every piece of software on your computer to be built against a specific version of a specific library, but I guess that's Linux for you.


No, that's every bit of software out there. Dynamic linking really does cause that problem even though allegedly it has security benefits as the vendor is able to patch software vulnerabilities.

NixOS actually is a bit better in this respect since most things are statically linked. The only thing is that glibc is not because it specifically requires being dynamically linked.

This issue also applies to macOS with their Dylibs and also Windows with their DLLs. So saying that this is an issue with Linux is a bit disingenuous.

Until everybody standardizes on one singular executable format that doesn't ever change, this will forever be an issue.


Ask your friend if he would CC0 the quote or similar (not sure if it's possible, but still) - I can imagine this being a quote on t-shirts xD

Honestly I might buy a T-shirt with such a quote.

I think glibc is such a pain that it's the reason we have such vastly different package management, and I feel like non-glibc setups really would simplify the package management approach on Linux. Although it feels solved, there are definitely still issues with the approach, and I think we should all still look for ways to solve the problem.


Non-glibc distros (musl, uclibc...) with package managers have been a thing for ages already.

And they basically hold under 0.01% of Linux marketshare and are completely shit.

I never spent much of my coding time on typing. My most productive coding is done in my head, usually a mile or so into a walk.

>usually a mile or so into a walk

My place for that is in the shower.

I had one of those shower epiphanies a couple mornings ago... And I fed it into a couple LLMs while I was playing a video game (taking some time over the holidays to do that), and by the afternoon I had that idea as working code: ~4500 LOC with that many more in tests.

People keep saying "I want LLMs to take out the laundry so I can do art, not doing the laundry while LLMs do art." This is an example of LLMs doing the coding, so I can rekindle a joy of gaming, which feels like it's leaning in the right direction.


For me I can use LLMs to go from "hmm, I wonder if..." to a working MVP while I take the dogs for a walk.

Either I launch a task before I go, or start one with Claude Code Web on my phone.

Today's project was a DVD/Bluray library project I've been thinking of since the app I used before went from buy once to subscription-based.

5-10 minutes of writing the initial prompt, and now I have a self-hosted web application that lets me take pics of the front and back cover of a DVD on my shelf; it'll feed them to an LLM to detect what movie it is and use an existing (also LLM-engineered) project of mine to enrich the data from TMDB/OMDB.

About an hour total and now I just need to put on a podcast, sit next to my DVD collection and grab pics of each for processing.


Or on the toilet.

Yes, email needs to be used differently in order to cause damage.

In what way did that email, the one in the subject, cause any damage?

How many slices of toast are you making a day?

If you fly a plane a millimeter, you're using less energy than making a slice of toast; would you also say that it's accurate that all global plane travel is more efficient than making toast?


1-2 slices a day and 1-50 ChatGPT queries per day. For me both would be within the same order of magnitude, and I don't really care about either, as both of them are dwarfed by my heater or aircon usage.

> perhaps it's worth recognizing your own fucking part in the thing you now hate, even if indirect.

Would that be the part of the post where he apologizes for his part in creating this?


That still doesn't make him credible on this topic nor does it make his rant anything more than a hateful rant in the big bucket of anti-AI shit posts. The guy worked for fucking Google. You literally can't be on a high horse having worked for Google for so long.

What a stupid take.

This is based on assuming 5 questions a day. YouTube would be very power efficient as well if people only watched 5 seconds of video a day.

How many tokens do you use a day?


It would be less power efficient as some of the associated costs/resources happen per request and also benefit from scale.

Alternatively: downloading the entire state of all packages when you care about just one never works out.

O(1) beats O(n) as n gets large.


Seems to still work out for apt?

Not in the same sense. An analogy might be: apt is like fetching a git repo in which all the packages are submodules, so they are lazily fetched. Some of the package managers in the article seem to be using a monorepo for all packages - including the content. Others seem to have different issues - Go wasn't including enough information at the top level, so all the submodules had to be fetched anyway. vcpkg was doing something with tree hashes which meant they weren't really addressable.

I consider apt kinda slow. I wish it were much faster.
