> they found SO many inconsistencies between environments
This implies somebody with admin rights makes alterations in an ad-hoc way without first doing them in the test env.
If they continue with the ad-hoc stuff, then it means auto-generated migrations will be different in test vs prod. (I prefer to test exactly the same thing that will be used in prod.)
> This implies somebody with admin rights makes alterations in ad-hoc way without first doing it in test env.
Not necessarily. With a large team/org using the same database schema, it can just mean multiple people were trying to make changes to an environment around the same time, e.g. the migrations were applied in a different order in staging vs prod.
Some migration tools provide extra checks for strict ordering, but many do not. There's often no guarantee that the migration file naming scheme ordering, Git commit ordering, and actual DB apply ordering line up -- that's 3 different possible sources of truth, or more since the DB state varies by environment (dev/stage/prod etc).
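To make the ordering mismatch concrete, here's a small sketch (the file names and environments are made up) of how the three "sources of truth" can disagree:

```python
# Hypothetical migration files; the numeric prefixes suggest one order.
filename_order = ["0001_users.sql", "0002_orders.sql", "0003_add_index.sql"]

# Git history can disagree: 0003 was committed on a branch before 0002 merged.
git_commit_order = ["0001_users.sql", "0003_add_index.sql", "0002_orders.sql"]

# And each environment's migration table records what was actually applied.
applied = {
    "staging": ["0001_users.sql", "0002_orders.sql", "0003_add_index.sql"],
    "prod": ["0001_users.sql", "0003_add_index.sql", "0002_orders.sql"],
}

for env, order in applied.items():
    print(env, "matches filename order:", order == filename_order)
# staging matches filename order: True
# prod matches filename order: False
```

Unless the tool enforces strict ordering, nothing ever reconciles these three views, and the drift only shows up later as "inconsistencies between environments".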
Late-night hot-fixes (to solve an emergency outage) can be another source of inconsistencies / drift.
> If they continue with ad-hoc stuff, then it means auto-generated migrations will be different in test vs prod
That depends on the declarative tool and whether it fully syncs the schema each time, or just generates migrations which are frozen into a plan and executed as-is in all environments. Full-sync isn't bad, but yes, in that case it will generate different statements in each env. The end result, though, is that it resolves the drift and gives you the same end state in all environments. And that's likely what you want to happen: after running the tool, the database state will match the desired state expressed by the CREATE statements in your schema repo.
That said, the declarative tooling should have sufficient safety checks to ensure it doesn't do anything destructive in prod without very loudly telling you and requiring manual confirmation. That way, you won't be harmed when trying to synchronize an environment that had unexpected out-of-band changes.
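As a sketch of that kind of safety check (a hypothetical tool, not any real declarative tool's API; tables only, no columns): diff desired vs actual, and refuse destructive statements unless explicitly confirmed:

```python
def plan(desired: set, actual: set) -> list:
    """Naive declarative diff: create what's missing, drop what's extra."""
    stmts = [f"CREATE TABLE {t} (...);" for t in sorted(desired - actual)]
    stmts += [f"DROP TABLE {t};" for t in sorted(actual - desired)]
    return stmts

def apply_plan(stmts, allow_destructive=False):
    for s in stmts:
        if s.startswith("DROP") and not allow_destructive:
            raise RuntimeError(f"destructive change needs manual confirmation: {s}")
        print("applying:", s)

# prod has an out-of-band table; the tool flags the DROP loudly
# instead of silently running it.
stmts = plan(desired={"users", "orders"}, actual={"users", "hotfix_tmp"})
```

Here `apply_plan(stmts)` raises, while `apply_plan(stmts, allow_destructive=True)` proceeds; that's the "very loudly, with manual confirmation" behavior.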
The reason a lot of people are unhappy about this notion is that it doesn't really matter: Any Turing complete system can emulate any other Turing complete system, and an LLM can trivially be made to execute a Turing machine if you put a loop around it, which means that unless you can find evidence humans exceed Turing computability AGI is "just" a question of scaling and training.
It could still turn out to be intractable without a better architecture, but the notion that it might not be impossible makes a lot of people very upset, and the only way it can be impossible even for just an LLM with a loop bolted on is if human brains can compute functions outside the Turing computable set.
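The construction is simple enough to sketch. Here a lookup table stands in for the LLM (a real LLM would instead be prompted to emit the same `(new_state, write, move)` transition from the current state and tape symbol); the outer loop and the tape supply the unbounded memory that the model alone lacks:

```python
# Tiny Turing machine: flip bits left-to-right until hitting a blank "_".
TABLE = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", 0),
}

def step(state, symbol):
    # Stand-in for an LLM call returning (new_state, write, move).
    return TABLE[(state, symbol)]

def run(tape, state="flip", pos=0):
    tape = list(tape)
    while state != "halt":
        state, write, move = step(state, tape[pos])
        tape[pos] = write
        pos += move
    return "".join(tape)

print(run("0110_"))  # -> 1001_
```

The point isn't that this is a practical way to compute; it's that nothing about the construction requires more than a reliable step function plus a loop.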
"Llm thinks" is false advertising. (Maybe useful jargon, but still)
> Any Turing complete system can emulate any other Turing complete system, and an LLM can trivially be made to execute a Turing machine if you put a loop around it
Wouldn't it be more efficient to erase the LLM and use the underlying hardware as the Turing complete system?
BTW, the Turing test is just an admission that we have no way of defining human-level intelligence apart from "you'll know it when you see it".
Hopefully constructive: touch controls. If the finger is lifted off, even for a second, a new "center" is registered, which makes it quite hard to control without looking at where the "center" is. Nice soundtrack, quite relaxing.
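For what it's worth, this is often a one-flag difference in the joystick code. A minimal sketch (all names hypothetical, coordinates normalized 0..1) of re-centering vs fixed-center behavior:

```python
class VirtualJoystick:
    """Sketch of the two behaviors described above."""

    def __init__(self, recenter_on_touch, center=(0.5, 0.5)):
        self.recenter_on_touch = recenter_on_touch
        self.center = center

    def touch_down(self, pos):
        if self.recenter_on_touch:
            self.center = pos  # the reported issue: a new "center" every touch

    def direction(self, pos):
        return (pos[0] - self.center[0], pos[1] - self.center[1])

# Re-centering stick: after a lift and re-touch at the same spot,
# the input reads as neutral, so the player has to look down to re-find it.
drifting = VirtualJoystick(recenter_on_touch=True)
drifting.touch_down((0.8, 0.5))
print(drifting.direction((0.8, 0.5)))  # (0.0, 0.0)

# Fixed-center stick: the same touch still reads as "push right".
fixed = VirtualJoystick(recenter_on_touch=False)
fixed.touch_down((0.8, 0.5))
```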
A 12-year-old's limitations are:
A. gets tired, needs sleep
B. I/O limited by muscles
Probably there are more, but if a 12-year-old could talk directly to electric circuits and didn't need sleep or even a break, then that 12-year-old would be leaps and bounds above the best human in their field of interest.
(Well, motivation to finish the task is needed, though.)