CachyOS is basically arch on easy mode. I used to spend countless hours tinkering with arch but then I got older and don’t have much time. Plus there are helpful meta packages for gaming that work great out of the box, which for a gamer like me sans Windows is pretty awesome.
I've tinkered at most an hour on my arch install and it's just been running smoothly. The installer these days is easy to go through as well. It's the same for bash, very little customization, then it runs smoothly for years.
I'm not saying CachyOS is bad, just that it is in my opinion another layer of complexity that may change/deprecate/etc.
That’s fair, the more steps you take away from the source distribution, the more variances become a potential for trouble. Also it’s been over a decade since I used arch, it’s probably a whole lot better now.
>I used to spend countless hours tinkering with arch but then I got older and don’t have much time.
Have you lost the old configs you worked hard on? That's a shame if so. I love moving configs I've worked hard on to new machines and instantly getting up and running in a now-familiar environment. It saves so much time and effort.
Only tangentially related but maybe someone here can help me.
I have a server with many peripherals and multiple GPUs. I can use vfio and vfio-pci to memory-map their registers and access them from user space. My question is: how could I get started with kernel driver development? And I specifically mean the dev setup.
Would it be a good idea to use vfio, with or without a VM, to write and test drivers? What's the best way to debug, reload, and test changes to an existing driver?
Since you are mounting and not syncing the files, what happens when you edit a file offline? And what if the file is also edited on another offline device?
Fair question. Conflicts happen, which I'm fine with.
Realistically speaking, most files I have in my cloud are read-only.
The most common file that I read-write on multiple devices is my keepass file, which supports conflict resolution (by merging changes) in clients.
This also used to happen when I edited some markdown notes using Obsidian on my PC and then a text editor (or maybe Obsidian again?) on Android, but I eventually sort of gave up on that use case. Editing my notes from my phone is inconvenient anyway, so I mostly just create new short notes that I can later edit into some larger note, but honestly I can't remember the last time this happened.
But yes, if not careful, you could run into your laptop overwriting the file when it comes online. In my case, it doesn't really happen, and when it does, Nextcloud will have the "overwritten version" saved, so I can always check what was overwritten and manually merge the changes.
P.S. If anyone wants to set this up, here's my NixOS config for the service; feel free to comment on it:
    # Don't forget to run `rclone config` beforehand
    # to create the "nextcloud:" remote.
    # Some day I may do this declaratively, but not today.
    systemd.services.rclone-nextcloud-mount = {
      # Ensure the service starts after the network is up
      wantedBy = [ "multi-user.target" ];
      after = [ "network-online.target" ];
      requires = [ "network-online.target" ];

      # Service configuration
      serviceConfig = let
        ncDir = "/home/username/nextcloud";
        mountOptions = "--vfs-cache-mode full --dir-cache-time 1w --vfs-cache-max-age 1w";
      in {
        Type = "simple";
        ExecStartPre = "/run/current-system/sw/bin/mkdir -p ${ncDir}"; # create the folder if it doesn't exist
        ExecStart = "${pkgs.rclone}/bin/rclone mount ${mountOptions} nextcloud: ${ncDir}"; # mount
        ExecStop = "/run/current-system/sw/bin/fusermount -u ${ncDir}"; # unmount
        Restart = "on-failure";
        RestartSec = "10s";
        User = "username";
        Group = "users";
        Environment = [ "PATH=/run/wrappers/bin/:$PATH" ];
      };
    };
However, why is that even surprising? Tailwind is essentially a frontend CSS stylesheet. What business could there possibly be around that?
I understand, they have UI kits, books, etc. but just fundamentally, it was never going to be easy to monetize around that long term, with or without AI.
Tailwind also has a compiler of sorts (so you only include in the bundle the exact styles you need) and a bunch of tooling built around it. In an alternate universe it could have been a fully paid enterprise tool, but then it might not have caught on.
The comment you are responding to said their revenue is down 80%. So they did monetize training and services, and I don't see how that would have been a problem long term if AI didn't come along and make all of that unnecessary.
Yes. The point I was trying to make was that after the initial hype disappears, sales in those categories would probably taper off regardless. But it is purely my opinion.
This book is awesome and well worth reading. We used it at uni in my operating systems class. It was so good that I later picked up both a digital and a physical copy to reread at home.
Can you link to one that has individual virtual memory processes where the memory isn't freed? It sounds like what you're talking about is just leaking memory and processes have nothing to do with it.
virtual memory requires pages and this sucker doesn’t have them. Only a heap that you can use with heap_x.c
Everything is manual.
I get that you people are trying to be cheeky and point out that no modern OS has this problem, but C runs on a crap ton of other systems. Some of these "OS"es are really nothing more than a coroutine spawned from pid 0.
Yeah, I think I get your problem. I am prototyping a message-passing actor platform running in a flat address space, and virtual memory is the only way I can clean up after a process ends (by keeping track of which pages were allocated to the process and freeing them when it terminates).
Without virtual memory, I would either need to force the use of a garbage collector (an interesting challenge in itself: designing a GC for a flat address space full of stackless coroutines), or require languages with much stricter memory semantics, such as Rust, so I can be sure everything is released at the end (though most languages assume isolated virtual memory, so even Rust might not help without serious re-engineering).
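For what it's worth, the per-process page bookkeeping described above can be sketched as a plain list of frames recorded at allocation time and walked at termination. This is purely illustrative (the names, fixed limits, and the use of malloc as a stand-in frame allocator are all my own assumptions, not the platform's actual code):

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch: track which pages a process was given so they
 * can all be reclaimed when it terminates, without needing a GC. */
#define PAGE_SIZE 4096
#define MAX_PAGES 64

typedef struct {
    void  *pages[MAX_PAGES]; /* frames handed to this process */
    size_t count;
} proc_pages;

/* Record a freshly allocated page against the owning process. */
static void *proc_alloc_page(proc_pages *p) {
    if (p->count == MAX_PAGES) return NULL;
    void *page = malloc(PAGE_SIZE);  /* stand-in for a real frame allocator */
    if (page) p->pages[p->count++] = page;
    return page;
}

/* On termination, walk the bookkeeping and release every page.
 * Returns how many pages were freed. */
static size_t proc_release_all(proc_pages *p) {
    size_t freed = p->count;
    for (size_t i = 0; i < p->count; i++)
        free(p->pages[i]);
    p->count = 0;
    return freed;
}
```

The same structure works whether the frames come from an MMU-backed allocator or a flat physical range; the point is that ownership is recorded eagerly so cleanup is a single walk.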
Do you keep notes of these types of platforms you’re working on? Sounds fun.
Not anything I can share. I’m trying to modernize these systems but man oh man was the early 80s tech brutal. Rust is something we looked into heavily and are trying to champion but bureaucracy prevents us. Flight Sims have to integrate with it in order to read/write data and it’s 1000x worse than SimConnect from MSFS.
The good news is that this work is dying out. There isn’t a need to modernize old war birds anymore.
Tbh on such a bare bones system I would use my own trivial arena bump allocator and only do a single malloc at startup and a single free before shutdown (if at all, because why even use the C stdlib on embedded systems instead of talking directly to the OS or hardware)
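A trivial arena bump allocator of the kind mentioned really is only a few lines. A minimal sketch (names and alignment policy are my own choices, not a reference implementation): one backing buffer acquired up front, a cursor that only moves forward, and a reset that reclaims everything at once.

```c
#include <stddef.h>

/* Minimal bump ("arena") allocator: one backing buffer, a cursor,
 * no per-allocation free. Reset reclaims everything in O(1). */
typedef struct {
    unsigned char *base;
    size_t         cap;
    size_t         used;
} arena;

static void arena_init(arena *a, void *buf, size_t cap) {
    a->base = buf;
    a->cap  = cap;
    a->used = 0;
}

/* Bump the cursor, keeping allocations aligned for any scalar type. */
static void *arena_alloc(arena *a, size_t n) {
    size_t aligned = (a->used + (sizeof(max_align_t) - 1))
                     & ~(sizeof(max_align_t) - 1);
    if (aligned + n > a->cap) return NULL; /* out of arena space */
    a->used = aligned + n;
    return a->base + aligned;
}

/* "Free" everything at once: just rewind the cursor. */
static void arena_reset(arena *a) { a->used = 0; }
```

On a bare-metal target the backing buffer can be a static array or a single region handed over by the platform, so the C stdlib never gets involved at all.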
Why is something running on an RTOS even able to leak memory?
If your design is going to be dirty, you've got to account for that.
In 30 years, I've never seen a memory leak in the wild.
Set up a memory pool, memory limits, garbage collectors or just switch to an OS/language that will better handle that for you.
Rust is favored among C++ users, but even Python could be a better fit for your use case.
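A memory pool in the sense suggested above can be as simple as a fixed-size free list. A sketch under assumptions (block size, pool depth, and all names are invented): the useful property is that exhaustion becomes an explicit, checkable condition instead of creeping heap growth.

```c
#include <stddef.h>

/* Fixed-size block pool: POOL_BLOCKS blocks threaded on a free list.
 * A leak shows up as pool exhaustion you can assert on at runtime. */
#define BLOCK_SIZE  64
#define POOL_BLOCKS 8

typedef union block {
    union block  *next;              /* link while on the free list */
    unsigned char bytes[BLOCK_SIZE]; /* payload while in use */
} block;

typedef struct {
    block  blocks[POOL_BLOCKS];
    block *free_list;
    size_t in_use;
} pool;

static void pool_init(pool *p) {
    p->free_list = NULL;
    p->in_use = 0;
    for (size_t i = 0; i < POOL_BLOCKS; i++) {
        p->blocks[i].next = p->free_list;
        p->free_list = &p->blocks[i];
    }
}

static void *pool_get(pool *p) {
    block *b = p->free_list;
    if (!b) return NULL;             /* exhausted: a hard, visible limit */
    p->free_list = b->next;
    p->in_use++;
    return b;
}

static void pool_put(pool *p, void *mem) {
    block *b = mem;
    b->next = p->free_list;
    p->free_list = b;
    p->in_use--;
}
```

Because every allocation is the same size and comes from a static array, there is no fragmentation and the `in_use` counter gives a cheap leak check at any quiescent point.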
I think the short answer is that it is very hard, time-consuming, and expensive to develop and prove out formal verification build/test toolchains.
I haven’t looked at C3 yet, but I imagine it can’t be used in a formally verified toolchain either unless the toolchain can compile the C3 bits somehow.
Are you really telling someone to 'correct their tone' because one of their many suggestions doesn't work on your mystery platform that you won't mention?
I don't see anything wrong with my tone. I could have been snarky about it.
I provided the C solutions as well, but an interpreter written in C could at least allocate objects and threads within the interpreter context without leaking memory, allowing you to restart it along with any services inside it, which is apparently better than whatever framework the people sharing this sentiment are using.
I'm genuinely curious. What kind of mission-critical embedded real-time design dynamically(!) allocates objects and threads and then loses track of them?
PS: On topic, I really like the decisions made in C3
You drop a keyword and the aero-drones report. I do not mind it and I am not going to reply in kind.
I have zero experience in aerospace, but reading up on ARINC-653, it appears to mandate a reasonable RT design with threads and hard time slices. I'm even comfortable with "partitions".
Where and why does the memory leak? If it is inherent in the mandated interfaces, you don't need to feel personally attacked.
If it is a layer laid down by your software (whether legacy or otherwise), why can't you keep track of allocations and ownership? Unless there are 200 bytes left, all slices are accounted for, and you're running on the edge, I feel a solution could be worked out.
I wish you luck switching to Rust; maybe a Rust-to-C translator could help.
I'm half convinced it's satire but I'll answer sincerely anyway.
As an adult I just couldn't be bothered buying this again year over year, let alone even once. I'm dropping the site instead of going to the store to buy this. Guess I'd just go fully offline.
Why would you need to buy it over and over again? Your age verification isn't going to become invalid as if you magically aged backwards. The time limit is (presumably) so the tokens can't be stored and resold on the black market indefinitely.
It is lex/yacc-style lexer and parser generation; it generates an LR(1) parser but uses the CPCT+ algorithm for error recovery. IIRC, the way it works is that when an error occurs, the nearest likely valid token is inserted, the error is recorded, and parsing continues.
I would use this for anything simple enough, and recursive descent for anything more complicated, where even more context is needed for errors.
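To make the "insert the likely missing token, record the error, keep parsing" idea concrete: here is a deliberately tiny recursive-descent sketch (all names invented, and this is NOT CPCT+ itself, which considers insertions, deletions, and shifts and ranks whole repair sequences) that repairs a missing ')' and carries on instead of aborting.

```c
#include <stddef.h>

/* Toy illustration of insertion-based error recovery: when the
 * expected token is missing, pretend it was there, record one
 * error, and continue parsing. */
typedef struct {
    const char *input;
    size_t      pos;
    int         errors;  /* repairs performed so far */
} parser;

static char peek(parser *p) { return p->input[p->pos]; }

/* Expect `tok`; if absent, insert it virtually and log an error. */
static void expect(parser *p, char tok) {
    if (peek(p) == tok) p->pos++;
    else                p->errors++;  /* virtual insertion, no advance */
}

/* expr := NUM | '(' expr ')' -- returns the parsed value */
static int expr(parser *p) {
    if (peek(p) == '(') {
        p->pos++;
        int v = expr(p);
        expect(p, ')');  /* recovery point: a missing ')' is repaired */
        return v;
    }
    int v = 0;
    while (peek(p) >= '0' && peek(p) <= '9')
        v = v * 10 + (p->input[p->pos++] - '0');
    return v;
}
```

Parsing "((42)" yields 42 with `errors == 1` rather than a hard failure, which is the user-visible effect the CPCT+ approach gives you at scale: a full parse tree plus a list of repairs.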
I always feel that mentioning lex/yacc-style tools comes with a lot of preconceived notions: that using them involves a slow development cycle with code generation and compilation steps.
What drew me to grmtools (I eventually contributed to it) was that you can evaluate grammars basically like an interpreter, without going through that compilation process, leading to fairly quick turnaround times during language development.
I hope this year I can work on porting my grmtools based LSP to browser/wasm.