It's common on fur farms to breed the animals that are friendliest to humans, and within only a few generations those animals behave like very friendly pets, which makes killing them more difficult.
A NOVA episode on dogs showed a Russian study where they bred the most friendly foxes with each other and did the same with the most aggressive (so extremes in both directions). The aggressive-bred animals were like that scene in I Am Legend where he checks on his infected rats. They were being fed, and they still wanted to kill their feeders. Kind of terrifying.
Also, WebAssembly is meant to be a compiler target, with the biggest advantage being that it's sandboxed. The problem is that JS engines offer that too. And just like JS engines, WebAssembly runtimes can run outside browsers. I think in theory Wasm is better than JS in those areas, but not by enough.
I agree with both 3) and 8), but I see a dilemma: if you don't get it perfect the first time, you'll waste thousands of man-hours on everyone upgrading, even though it only took you 10 minutes to release the new version.
It's all about where the stable ABI exists. You can do anything in practice, but straying off the happy path results in pain. On PC OSes, everything used the C ABI (or, on Linux, the syscall ABI). On Android the ABI is Java-based, and on iOS it's ObjC/Swift-based. These are deliberate choices, and while they make some use cases more difficult, they are optimized for the use cases the companies care about. I personally prefer a language-agnostic IPC boundary as the ABI, but that has its own cons as well.
You’re conflating ABI with primary language for frontend development.
Android, iOS and “PC” all use the C ABI at their C stack level. They just have different languages available for their primary SDK.
Windows doesn’t primarily use a C API, for example, so your PC example is wrong. The Mac shares the same frameworks as iOS, so it is no more Swift/ObjC than iOS is. It’s just that you can’t really ship Electron (JIT) or easily use Qt (licensing) on iOS. But you can just as happily develop entire apps in the same C as you could on a “PC”. Case in point: Blender builds for iOS.
Android is definitely the most out-there of the platforms, because the jump from JNI to the Java SDK is quite large, but that is completely orthogonal to what you’re incorrectly claiming. Your comment conflates completely opposite ends of the stack, but if we go by your definition, Android is Linux just as much as desktop Linux distros are.
Just because you can use it from C doesn't mean it's a C ABI. You can do almost anything from C, but the semantics of the APIs require additional work to use correctly. Just because Go can interface with C doesn't mean C APIs have a Go ABI, right?
The ABI follows from the language the OS is written in, so OP is kind of right.
While Windows has moved away from pure C and nowadays has ABIs across C, C++, .NET, COM, and WinRT interfaces, you can still program Windows applications in straight C.
The caveat is to stick to APIs from up to Windows XP, with Petzold's book to follow along.
You can argue that JNI is technically exposed via C, sure, if you ignore the JVM/ART semantics that go along with it.
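To make that concrete, here is roughly what calling into the Java side looks like from C via JNI. The call syntax is C, but the semantics (class lookup, method IDs, pending exceptions) are all the VM's; this sketch compiles against jni.h and assumes a valid JNIEnv obtained from a native method or JNI_CreateJavaVM:

```c
#include <jni.h>

/* Even "plain C" JNI routes every call through the JNIEnv function
   table, dragging JVM/ART semantics along for the ride. */
jlong current_time_millis(JNIEnv *env) {
    jclass cls = (*env)->FindClass(env, "java/lang/System");
    if (cls == NULL) return 0;  /* a ClassNotFoundException is now pending */

    jmethodID mid = (*env)->GetStaticMethodID(
        env, cls, "currentTimeMillis", "()J");
    if (mid == NULL) return 0;  /* a NoSuchMethodError is now pending */

    return (*env)->CallStaticLongMethod(env, cls, mid);
}
```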
Likewise on Windows: technically you can use bare-bones structs with function pointers to deal with COM, use CLR COM APIs to call .NET via reflection, and it's a similar story with WinRT, but it is not going to be fun, and then there are the type libraries that have to be created manually.
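For a picture of what that looks like, here is a minimal sketch of the vtable-as-plain-C-structs pattern; the interface and types are made up for illustration, not the real Windows headers:

```c
#include <stdio.h>

/* A COM-style object seen from plain C: a pointer whose first member
   is a pointer to a table of function pointers (the vtable). */
typedef struct ICounter ICounter;

typedef struct ICounterVtbl {
    unsigned long (*AddRef)(ICounter *self);
    unsigned long (*Release)(ICounter *self);
    long          (*Next)(ICounter *self, int *out);
} ICounterVtbl;

struct ICounter {
    const ICounterVtbl *lpVtbl;  /* first member, as COM requires */
};

/* A concrete implementation "derives" by placing the interface first. */
typedef struct {
    ICounter base;
    unsigned long refs;
    int value;
} Counter;

static unsigned long Counter_AddRef(ICounter *self) {
    return ++((Counter *)self)->refs;
}
static unsigned long Counter_Release(ICounter *self) {
    return --((Counter *)self)->refs;  /* real code would free at zero */
}
static long Counter_Next(ICounter *self, int *out) {
    *out = ((Counter *)self)->value++;
    return 0;  /* S_OK */
}

static const ICounterVtbl vtbl = {
    Counter_AddRef, Counter_Release, Counter_Next
};

int main(void) {
    Counter c = { { &vtbl }, 1, 0 };
    ICounter *it = &c.base;
    int v;
    it->lpVtbl->Next(it, &v);  /* every call goes through the vtable */
    printf("%d\n", v);
    it->lpVtbl->Release(it);
    return 0;
}
```

Real COM layers IUnknown, HRESULTs, IIDs, and registration on top of this, which is where the "not fun" part comes in.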
I think we’re talking past each other, and you’re largely repeating what I already covered.
My original response drew a distinction between levels of the stack, and already called out that Android requires you to use the NDK/JNI to use the C ABI.
I also specifically called out Windows.
My point is that the original person’s distinction of what supports a C ABI conflates different levels of the stack. It’s not a useful distinction when describing the platforms, and the Windows case is why I quote “PC”, since desktop semantics vary quite a bit as well.
A more useful explanation of why it’s harder on mobile to just do an asm hello world is that mobile dev doesn’t really have a concept of CLIs without jumping through some hoops first. So you have to pipe such a thing through some kind of UI framework as well.
A ton of native apps are written on mobile. On desktop, there is a trend of shipping a full browser together with a goddamn webapp instead of making a proper desktop app. I wouldn't say that desktop is more successful there...
I used multi-seat on Linux with systemd: I just threw some old graphics cards and sound cards into my gaming PC so that the children could play on separate monitors while I worked. Multi-seat is very cool. When upgrading to a new gaming PC, though, it was much cheaper to build 4 separate machines, because CPUs and motherboards with enough PCIe lanes are very expensive.
GPUs still run at decent performance with half the PCIe lanes available, so if you already have a gaming PC with many slots and don't need top performance, it could still be worth getting two more cheap GPUs and using multi-seat, for those building a mini LAN gaming room at home.
One annoying thing is that Linux can't run many different GPU drivers at the same time, so you have to make sure the cards work with the same driver.
Proprietary 3rd-party multi-seat solutions also exist for Windows, but Linux has built-in support and it's free.
I am super curious about your setup. I played with multi-seat years ago, but I lost the need for it. It's super cool tech whose efficiencies I'd love to see embraced in some way.
Install an old GPU,
connect a monitor to the extra GPU,
connect a mouse and keyboard,
then use the loginctl command to list available devices/USB ports and attach them to a seat (loginctl seat-status lists the devices on a seat; loginctl attach seat1 <device> attaches one to a new seat).
I suggest using Arch Linux, although loginctl should be available in all distributions using systemd now.
If you don't have enough USB ports you can use a USB hub; some monitors come with a USB hub, and some with built-in sound, or you can use a wireless headset.
My main issue was that driver support was dropped for my oldest GPU, so one day when I upgraded the OS it just stopped working. To be on the safe side, get another GPU like the one you already have.
It might be possible one way or another, although I used separate graphics cards. You might find a different X server that lets you do it with a single card. I suggest getting an extra hard drive, installing GRUB so you can dual-boot, installing the experimental OS on the extra drive, and starting to fiddle. I also dual-booted different OSes to experiment with machine learning while the kids were sleeping, since I had multiple GPUs; the old ones were crap, but it was a good learning exercise.
Sorry, being an IT guy, I wondered about the logic. I understand the need to align. But if one fails, all fail, and children, like customers, are … not the patient kind? Or do you have two systems within each … then across the … Sorry, I cannot stop my mind spinning.
This is also a reason why I have 4 separate stations now: if I have to upgrade hardware, only one station is down at a time. And while you can get a 3-GPU system on a budget, a 4-GPU system will get expensive, at least last time I checked. It would be interesting to look into using old, used AI servers for multi-seat purposes.
I like systems that are maintenance-free and easily replaceable. My experience so far in software engineering is that technologies die, so it should also be easy to replace the technology: the hardware it runs on, the platform/OS, the programming language, and the framework.
In the big companies where I worked, it was easier to replace a system with all its dependencies than to remove a part of it. This had nothing to do with tech; it was about getting buy-in from the business stakeholders and the internal risk compliance department.
Why would you ever want a data structure that wraps around!? What a headache! Is it a memory constraint or optimization!? All I can think about is a physical knob where you want to know what position it is in.
They are efficient FIFOs (queues). You'll find them in many places. I know them from multimedia/audio, where you often have unsynchronized readers and writers.
In the audio domain, the reader and writer are usually allowed to trample over each other. If you've ever gamed on a PC, you might have heard this: when a game freezes, sometimes you hear a short loop of audio playing until the game unfreezes. That's a ring buffer whose writer has stopped, but the async reader is still reading the entire buffer.
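As a sketch of the FIFO case, here is a minimal single-producer/single-consumer ring buffer in C. The names and capacity are made up, and a real audio FIFO would use atomics for head and tail so the reader and writer can run on different threads:

```c
#include <stddef.h>
#include <stdint.h>

/* CAP must be a power of two so indices can wrap with a cheap mask. */
#define CAP 1024u

typedef struct {
    uint8_t buf[CAP];
    size_t  head;  /* only the producer writes this */
    size_t  tail;  /* only the consumer writes this */
} ring_t;

/* Returns 1 on success, 0 if the buffer is full.
   head and tail increase monotonically; unsigned wraparound is fine
   because CAP divides the unsigned range, so head - tail stays correct. */
static int ring_push(ring_t *r, uint8_t v) {
    if (r->head - r->tail == CAP) return 0;   /* full */
    r->buf[r->head & (CAP - 1)] = v;
    r->head++;
    return 1;
}

/* Returns 1 on success, 0 if the buffer is empty. */
static int ring_pop(ring_t *r, uint8_t *out) {
    if (r->head == r->tail) return 0;         /* empty */
    *out = r->buf[r->tail & (CAP - 1)];
    r->tail++;
    return 1;
}
```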
Zig's "There are too many ring buffer implementations in the standard library" might also be interesting:
It's a somewhat different kind of ring buffer, because there's just one index, but I used it in my signal processing class for a finite-impulse-response filter.
Choose N to be a power of two >= the length of your filter.
Increment the index i mod N, write the sample at buffer position x[i], output the sum of x[(i-k) mod N] * a[k], where the a[k] are your filter coefficients, and repeat with the next sample at the next time step.
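A minimal sketch of that recipe in C; the sizes and the simple moving-average coefficients are just placeholders:

```c
#include <stddef.h>

/* FIR filter over a single-index ring buffer, as described above.
   N is a power of two >= the filter length, so "mod N" is a mask. */
#define N 8u
#define FILTER_LEN 4u

static float  x[N];   /* sample history, wraps around */
static size_t i;      /* the single write index */

/* a[k]: filter coefficients (a 4-tap moving average, for illustration) */
static const float a[FILTER_LEN] = { 0.25f, 0.25f, 0.25f, 0.25f };

float fir_step(float sample) {
    i = (i + 1) & (N - 1);   /* increment index i mod N */
    x[i] = sample;           /* write the sample at x[i] */
    float y = 0.0f;
    for (size_t k = 0; k < FILTER_LEN; k++)
        y += x[(i - k) & (N - 1)] * a[k];   /* sum x[(i-k) mod N] * a[k] */
    return y;                /* output; repeat next time step */
}
```

The (i - k) subtraction may wrap around as unsigned arithmetic, but masking with N - 1 still yields the correct index mod N, which is exactly why N is chosen as a power of two.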
This post reminds me of some cognitive biases: you spend two years on a software project written from scratch, reach an abstraction level where you can implement major new features in hours, and then look at popular apps and think you could probably replicate them in a weekend or so.