Ok so Arch apparently has an install script that does everything[0]. I tried it the other day and it's pretty flawless, albeit terminal-based so not for everyone I guess.
Pacman is _amazing_. Apt broke dependencies for me every few months & a major version Ubuntu upgrade was always a reformat. Plus, obviously, the Arch wiki is something else. I would go as far as to say you'll have an overall better Linux experience on Arch than Ubuntu and friends, even as a beginner.
Possibly. If the installer happy path fails (which has happened to me), Arch is "here's root shell, figure it out", Ubuntu is slightly more user-friendly :)
I will say the Arch wiki is amazing, even if you're not using Arch. I'm on Debian nowadays and still often refer to it for random obscure hardware setup details.
Everyone says this, but I have only ever used Arch. Wiped Windows and started with Manjaro. No VM to test, straight to bare metal. I learned how Linux worked and then installed the base Arch distro. If you can read a wiki, you can use Arch. It's not rocket science. All the available Arch-flavored distros make it even easier today. I tried Debian once and found it even more cumbersome. Is it apt or apt-get? Is it install or update? Never stuck around to find out.
I started with Slackware Linux—something arguably even more “hard-core” than Arch.
What mattered most at the beginning was good installation documentation, and both Arch and Slackware delivered on that front. Slackware, however, had an additional appeal: it was intentionally simple, largely because it was created and maintained by a single person at the time. That simplicity made it feel conceivable that the system could be fundamentally understood by a single human mind.
Whether a newcomer appreciates the Slackware/Arch approach depends heavily on learning style and goals. You can click through a GUI installer and end up with a working distro, but then what? From a beginner’s perspective, you’ve just installed something somehow—and it looks like a crippled Windows machine with fewer buttons.
Starting with Slackware gave me a completely different starting reference point. Installing the system piece by piece was genuinely exciting, because every step involved learning what each component was and how it fit into the whole. The realization that Linux is essentially a set of Lego bricks—and that I might actually master the entire structure, or even build my own pieces—was deeply motivating.
That mindset was strongly shaped by how Slackware and similar distros present themselves. Even the lack of automatic dependency management acted as an early nudge toward thinking seriously about complexity, trade-offs, and minimalism, which stayed with me forever.
On Linux, there are 78 dynamically linked libraries, such as for X11, vector graphics, glib/gobject (libgobject), graphics formats, crypto, encryption, etc.
Having an LLM spit out a few hundred lines of HTML and JavaScript is not a colossal waste of resources, it's equivalent to running a microwave for a couple of seconds.
* I name the django project "project"; so settings are project/settings.py, main urls are project/urls.py, etc
* I always define a custom Django user model even if I don't need anything extra yet; easier to expand later
* settings.py actually conflates project config (Django apps, middleware, etc) and instance/environment config (database access, storages, email, auth...); I hardcode the project config (since that doesn't change between environments) and use python-dotenv to pull settings from the environment / .env; I document all such configurable vars in .env.example, and the defaults are sane for local/dev setup (such as DEBUG=true, SQLite database, ALLOWED_HOSTS=*, and a randomly-generated SECRET_KEY); oh and I use dj-database-url to use DATABASE_URL (defaults to sqlite:///sqlite.db)
* I immediately set up ruff, ty, pytest, a pre-commit hook, and a GH workflow to run ruff/ty/pytest
Previously I had elaborate scaffolding/skeleton templates; nowadays it's a small shell script and I tell Claude to adapt settings.py as per the above instructions :)
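For reference, the env-driven part of settings.py ends up looking roughly like the sketch below; app and variable names ("accounts", the .env keys) are illustrative, not a drop-in config:

```python
# settings.py (excerpt) -- rough sketch of the env-driven parts
import os
from pathlib import Path

import dj_database_url
from django.core.management.utils import get_random_secret_key
from dotenv import load_dotenv

BASE_DIR = Path(__file__).resolve().parent.parent
load_dotenv(BASE_DIR / ".env")  # no-op if the file doesn't exist

# Instance/environment config: everything here comes from the environment / .env,
# with defaults that are sane for local development.
DEBUG = os.environ.get("DEBUG", "true").lower() == "true"
SECRET_KEY = os.environ.get("SECRET_KEY", get_random_secret_key())
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "*").split(",")
DATABASES = {
    "default": dj_database_url.config(default="sqlite:///sqlite.db"),
}

# Project config: hardcoded, identical in every environment.
AUTH_USER_MODEL = "accounts.User"  # the custom user model mentioned above (hypothetical app name)
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "accounts",
]
```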
> * settings.py actually conflates project config (Django apps, middleware, etc) and instance/environment config (database access, storages, email, auth...); I hardcode the project config (since that doesn't change between environments) and use python-dotenv to pull settings from the environment / .env; I document all such configurable vars in .env.example, and the defaults are sane for local/dev setup (such as DEBUG=true, SQLite database, ALLOWED_HOSTS=*, and a randomly-generated SECRET_KEY); oh and I use dj-database-url to use DATABASE_URL (defaults to sqlite:///sqlite.db)
There is a convention to create "foo_settings.py" for different environments next to "settings.py" and start it with "from .settings import *"
You'll still want something else for secrets, but this works well for everything else, including sane defaults with overrides (like DEBUG=False in the base and True in only the appropriate ones).
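For example, something like this (file name and values are just an illustration):

```python
# project/prod_settings.py -- per-environment override on top of the shared base
from .settings import *  # noqa: F401,F403

DEBUG = False
ALLOWED_HOSTS = ["example.com"]
# Secrets (API keys, DB passwords) should still come from somewhere else,
# e.g. the environment or a file outside the repo.
```

You then point Django at it with DJANGO_SETTINGS_MODULE=project.prod_settings (or --settings=project.prod_settings).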
I'll add one: add shell_plus. It makes the Django shell so much nicer to use, especially on larger projects (mostly because it auto-imports all your models). IIRC, it involves adding django-extensions and ipython as dependencies, and then adding django_extensions (annoyingly, note that the dash changes to an underscore; this trips me up every time) to your installed apps.
Saying that, I'm sure django-extensions does a lot more than shell_plus, but I've never actually explored what those extra features are, so I think I'll do that now.
Edit: Turns out you can use bpython, ptpython or none at all with shell_plus, so good to know if you prefer any of them to ipython
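For anyone setting this up fresh, the settings side is roughly this (a sketch; the PyPI package uses a dash, the app label an underscore):

```python
# settings.py
INSTALLED_APPS = [
    # ... the usual django.contrib.* apps and your own ...
    "django_extensions",  # underscore here; the PyPI package is "django-extensions"
]
```

With django-extensions and ipython installed, `python manage.py shell_plus` then drops you into an IPython shell with all your models already imported.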
In the default shell? I've definitely started new Django projects since 2023 and I seem to remember always having to use shell_plus for that, though maybe that's just become something I automatically add without thinking.
Edit: Yep, you're right, wow, that's pretty big for me.
> use python-dotenv to pull settings from environment / .env
I disagree strongly with this one. All you are doing is moving those settings to a different file. You might as well use a local settings file that reads the common settings.
In production, keep things like API keys that need to stay secret elsewhere: at a minimum, outside the project directories and owned by a different user.
Sure, that works as well; for example, on some deploys I set the settings in the systemd service file. However, it's more convenient to just have .env right there.
> On production keep things like API keys that need to be kept secret elsewhere - as a minimum outside the project directories and owned by a different user.
Curious what extra protection this gives you, considering the environment variables are, well, in the environment, and can be read by the process. If someone pulls off a remote code execution attack on the server, they can just read the environment.
The only thing I can imagine it does protect against is if you mistakenly expose the project root folder on the web server.
That's something that python-dotenv enables: it can pull from the environment, which you can wire up from k8s secrets or whatever your hosting provides.
You still need clear separation between frontend and backend (react server components notwithstanding), so nothing's stopping you from using Python on the backend if you prefer it.
Django with DRF or django-ninja works really nicely for that use case.
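A minimal django-ninja setup is only a few lines; something like this (paths and names are just illustrative):

```python
# api.py -- a tiny JSON endpoint with django-ninja
from ninja import NinjaAPI

api = NinjaAPI()

@api.get("/ping")
def ping(request):
    return {"ok": True}
```

```python
# project/urls.py -- mount the API; interactive docs are served at /api/docs
from django.urls import path

from .api import api  # hypothetical location of the NinjaAPI instance above

urlpatterns = [
    path("api/", api.urls),
]
```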
1. Make a schema migration that will work both with old and new code
2. Make a code change
3. Clean up schema migration
Example: deleting a field:
1. Schema migration to make the column optional
2. Remove the field in the code
3. Schema migration to remove the column
Yes, it's more complex than creating one schema migration, but that's the price you pay for zero downtime. If you can relax that to "1s of downtime at midnight on Sunday", you can keep things simpler. And if you do so many schema migrations that you need such things often ... I would submit you're holding it wrong :)
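For the field-deletion example, steps 1 and 3 are just ordinary migrations; a rough sketch with made-up app/model/field names, as two separate migration files:

```python
# migrations/0002_make_legacy_note_optional.py -- step 1: column becomes nullable,
# so both old and new code can keep writing rows
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("shop", "0001_initial")]
    operations = [
        migrations.AlterField(
            model_name="order",
            name="legacy_note",
            field=models.TextField(null=True, blank=True),
        ),
    ]
```

```python
# migrations/0003_remove_legacy_note.py -- step 3: drop the column, deployed only
# after step 2 (code that no longer references the field) is live everywhere
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("shop", "0002_make_legacy_note_optional")]
    operations = [
        migrations.RemoveField(model_name="order", name="legacy_note"),
    ]
```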
I'm doing all of these and none of it works out of the box.
Adding a field needs a db_default, otherwise old code fails on `INSERT` (or you need to audit all the `create`-like calls).
Deleting a field similarly makes old code fail all `SELECT`s.
For deletion I need a special 3-step dance with managed=False for one deploy. And for all of these I need to run the old tests against the new schema to see if there's some usage a member of our team missed.
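Concretely, the db_default part looks like this (Django 5.0+ only; on older versions you'd have to add the database-level default another way, e.g. a RunSQL migration). Model and field names are made up:

```python
# models.py -- new field with a database-level default, so old code that still
# INSERTs rows without this column keeps working during the rollout
from django.db import models

class Order(models.Model):
    status = models.CharField(max_length=20, db_default="pending")
```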
AI is a technology. It has no goal. You use a tool, the tool doesn't use you or have goals or plans for you.
> In a world where there is no work for the common man.
"Work expands to fill the time available" (Parkinson's Law). Work hours haven't been reduced even though technology has advanced tremendously over the centuries (they have been reduced due to push for worker's rights).
> I'm afraid to imagine what is left there.
Do not define yourself, or your worth, through work. You work to live, not live to work.
> There must a ton of new full-web datasets out there, right?
Sadly, no. There's CommonCrawl (https://commoncrawl.org/), but it's still far removed from a "full-web dataset."
So everyone runs their own search instead, hammering the sites, going into gray areas (you either ignore robots.txt or your results suck), etc. It's a tragedy of the commons that keeps Google entrenched: https://senkorasic.com/articles/ai-scraper-tragedy-commons
On a tangent: the origin of the problem with low-quality drive-by contributions is GitHub's social nature. That might have been great when GitHub started, but nowadays many use it as portfolio padding and/or social proof.
The "this person contributed to a lot of projects" heuristic for "they're a good and passionate developer" means people will increasingly game it with low-quality submissions. This has been happening for years already.
Of course, AI just added kerosene to the fire, but re-read the policy and omit AI and it still makes sense!
A long-term fix for this is to remove the incentive. Paradoxically, AI might help here, because this can be gamed so trivially that it's obvious it's no longer any kind of signal.
Your point about rereading it without AI makes so much sense.
The economics of it have changed, human nature hasn’t. Before 2023 (?) people also submitted garbage PRs just to be able to add “contributed to X” to their CV. It’s just become a lot cheaper.
Let's not forget Hacktoberfest, the scourge of open source for over a decade now, the driver of low-quality "contribution" spam by hordes of people doing it for a goddamn free t-shirt.
No, this problem isn't fundamentally about AI; it's about the "social" structure of GitHub and the incentives it creates (fame, employment).
Mailing lists essentially solve this by introducing friction: only those who genuinely care about the project will bother to git send-email and defend a patch over an email thread. The incentive for low-quality drive-by submissions also evaporates as there is no profile page with green squares to farm. The downside is that it potentially reduces the number of contributors by making it a lot harder for new contributors to onboard.
Ease in gently, with Ubuntu or Fedora. Get familiar. Then go crazy.