Hacker News | ICBTheory's comments

Author here.

This paper is Part III of a trilogy investigating the limits of algorithmic cognition. Given the recent industry signals regarding "scaling plateaus" (e.g., Sutskever and others), I attempt to formalize why these limits appear structurally unavoidable.

The Thesis: We model modern AI as a Probabilistic Bounded Semantic System (P-BoSS). The paper demonstrates via the "Inference Trilemma" that hallucinations are not transient bugs to be fixed by more data, but mathematical necessities when a bounded system faces fat-tailed domains (alpha ≤ 1).
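For intuition on the alpha ≤ 1 condition, here is a small illustration of the standard statistical fact behind it (my own sketch, not the paper's model): when the tail exponent is at or below 1, the distribution has no finite mean, so the running sample average never settles, no matter how much data you add.

    # Illustration only (not the paper's model): a Pareto distribution with
    # tail exponent alpha <= 1 has an infinite mean, so the running sample
    # average keeps drifting upward, while alpha = 2 converges near 2.
    import numpy as np

    rng = np.random.default_rng(0)

    def running_mean(alpha, n=1_000_000):
        samples = rng.pareto(alpha, n) + 1.0   # classical Pareto with x_min = 1
        return np.cumsum(samples) / np.arange(1, n + 1)

    for alpha in (0.8, 2.0):
        rm = running_mean(alpha)
        print(f"alpha={alpha}: mean after 1e4 samples = {rm[9_999]:.2f}, "
              f"after 1e6 samples = {rm[-1]:.2f}")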

The Proof: While this paper focuses on the CS implications, the underlying mathematical theorems (Rice’s Theorem applied to Semantic Frames, Sheaf Theoretic Gluing Failures) are formally verified using Coq.

You can find the formal proofs and the Coq code in the companion paper (Part II) here:

https://philpapers.org/rec/SCHTIC-16

I’m happy to discuss the P-BoSS definition and why probabilistic mitigation fails in divergent entropy regimes.


Since we can't avoid hallucinations, maybe we can live with them?

I mean, I regularly use LLMs, and although they sometimes go a bit mad, most of the time they're really helpful.


I'd say that conclusion is a manifestation of pragmatic wisdom.

Anyway: I agree. The paper certainly doesn't argue that AI is useless, but that autonomy in high-stakes domains is mathematically unsafe.

In the text, I distinguish between operating on an 'Island of Order' (where hallucinations are cheap and correctable, like fixing a syntax error in code) versus navigating the 'Fat-Tailed Ocean' (where a single error is irreversible).

Tying this back to your comment: If an AI hallucinates a variable name — no problem, you just fix it. But I would advise skepticism if an AI suggests telling your boss that 'his professional expertise still has significant room for improvement.'

If hallucinations are structural (as the Coq proof in Part II indicates), then 'living with them' means ensuring the system never has the autonomy to execute that second type of decision.


Hey all, apologies for the delayed response. I was on a flight, then had guests, then had to make some rapid decisions involving actual real-world complexity (the kind that is not easily tokenized).

I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.


1. On “The brain obeys physics, physics is computable—so AGI must be possible”

This is the classical foundational syllogism of computationalism. In short:

   1. The brain obeys the laws of physics.
   2. The laws of physics are (in principle) computable.
   3. Therefore, the brain is computable.
   4. Therefore, human-level general intelligence is computable, and AGI is inevitable, a question of time, power and compute.
This seems elegant, tidy, logically sound. And yet it is patently false — at step 3. The mistake here is not technical, but categorical: simulating a system’s physical behavior is not the same as instantiating its cognitive function.

The flaw is in the logic — it’s nothing less than a category error. The logic breaks exactly where category boundaries are crossed without checking whether the concept still applies. That is by no means inference; it is mere wishful thinking in formalwear. It happens when you confuse simulating a system with being the system. The error lies in the jump from simulation to instantiation.

Yes, we can simulate water. -> No, the simulation isn’t wet.

Yes, I can “simulate” a fridge. -> But if I put a beer in it myself, and the beer doesn’t come out cold after some time, then what we’ve built is a metaphor with a user interface, not a cognitive peer.

And yes: we can simulate Einstein discovering special relativity. -> But only after he’s already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that’s not general intelligence, that’s a historical reenactment, starring a transformer with a good memory.

Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.

If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.


6. On “This is just a critique of current models—not AGI itself”

No.

This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.

If your system is: a) finite, b) bounded by symbols, c) built on recursive closure

…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.

That’s not a tuning issue, that IS the boundary. (And we’re already seeing it.)

In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:
- LLMs try less
- Answers get shorter, shallower
- Recursive tasks—like the Tower of Hanoi—just fall apart
- etc.

That’s IOpenER in the wild: Information Opens, Entropy Rises. The theory predicts the divergence, and the models are confirming it—one hallucination at a time.


5. On “Kolmogorov and Chaitin are misused”

It’s a fair concern. Chaitin does get thrown around too easily — usually in discussions that don’t need him.

But that’s not what’s happening here.

– Kolmogorov shows that most strings are incompressible.
– Chaitin shows that even if you find the simplest representation, you can’t prove it’s minimal.
– So any system that “discovers” a concept has no way of knowing it’s found something reusable.

That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.
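For anyone who wants the first bullet made concrete: the incompressibility claim is a simple counting argument. A sketch of my own (standard textbook fact, not specific to the paper):

    # Standard counting argument (not specific to the paper): there are 2**n
    # binary strings of length n, but fewer than 2**(n-c) programs shorter
    # than n - c bits, so at most a 2**(-c) fraction of strings can be
    # compressed by c or more bits.
    n, c = 64, 10
    total_strings = 2 ** n
    compressible_bound = 2 ** (n - c)          # upper bound on strings with K(x) < n - c
    print(f"strings of length {n}: {total_strings}")
    print(f"compressible by >= {c} bits: at most {compressible_bound}")
    print(f"fraction: {compressible_bound / total_strings:.6f}")   # 2**-10, about 0.001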

So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.


4. On “This is just the No Free Lunch Theorem again”

Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.

But this paper is not about average-case optimization at all. It’s about specific classes of problems—social ambiguity, paradigm shifts, semantic recursion—where: a) the tail exponent alpha ≤ 1, so no mean exists; b) Kolmogorov complexity is incompressible; and c) the symbol space lacks the needed abstraction.

In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.

That is what “IOpenER” means: Information Opens, Entropy Rises.

It is NOT a theorem about COST… rather a structural claim about meaning. What exactly is so hard to understand about this?


3. On “He redefines AGI to make his result inevitable”

Sure. I redefined AGI. By using… …the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.

So unless those are now fringe newsletters, the definition stands:

- A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance -

If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.


2. On “This is just philosophy with no testability”

Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.

You don’t need empirical falsification to verify this. You need mathematical framing. Period.

Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.

These are not performance limitations. They are limits of principle.


And finally 7. On “But humans are finite too—so why not replicable?”

Yes. Humans are finite. But we’re not symbol-bound, and we don’t wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand—then change what we were deciding halfway through.

NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)

If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn’t — then no, they’re not the same kind of thing.

Maybe this isn’t about scaling up? … Well, it isn’t. It’s about the fact that you can’t emulate improvisation with a bigger spreadsheet. We don’t generalize because we have all the data. We generalize because we tolerate not knowing—and still move.

But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.

Let me know how that goes


Very good point.

I had in fact thought of describing the problem from a systems-theoretical perspective, as this is another way to combine different paths into a common principle.

Here is that sketch, in case you are into this kind of approach:

2. Complexity vs. Complication
In systems theory, the distinction between 'complex' and 'complicated' is critical. Complicated systems can be decomposed, mapped, and engineered. Complex systems are emergent, self-organizing, and irreducible. Algorithms thrive on complication. But general intelligence—especially artificial general intelligence (AGI)—must operate in complexity. Attempting to match complex environments through increased complication (more layers, more parameters) leads not to adaptation, but to collapse.

3. The Infinite Choice Barrier and Entropy Collapse
In high-entropy decision spaces, symbolic systems attempt to compress possibilities into structured outcomes. But there is a threshold—empirically visible around entropy levels of H ≈ 20 (one million outcomes; see the quick check below)—beyond which compression fails. Adding more depth does not resolve uncertainty; it amplifies it. This is the entropy collapse point: the algorithm doesn't fail because it cannot compute. It fails because it computes itself into divergence.

4. The Oracle and the Zufallskelerator
To escape this paradox, the system would need either an external oracle (non-computable input), or pure chance. But chance is nearly useless in high-dimensional entropy. The probability of a meaningful jump is infinitesimal. The system becomes a closed recursion: it must understand what it cannot represent. This is the existential boundary of algorithmic intelligence: a structural self-block.

5. The Organizational Collapse of Complexity
The same pattern is seen in organizations. When faced with increasing complexity, they often respond by becoming more complicated—adding layers, processes, rules. This mirrors the AI problem. At some point, the internal structure collapses under its own weight. Complexity cannot be mirrored. It must either be internalized—by becoming complex—or be resolved through a radically simpler rule, as in fractal systems or chaos theory.

6. Conclusion: You Are an Algorithm
An algorithmic system can only understand what it can encode. It can only compress what it can represent. And when faced with complexity that exceeds its representational capacity, it doesn't break. It dissolves. Reasoning regresses to default tokens, heuristics, or stalling. True intelligence—human or otherwise—must either become capable of transforming its own frame (metastructural recursion), or accept the impossibility of generality. You are an algorithm. You compress until you can't. Then you either transform, or collapse.
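A quick numeric check of the H ≈ 20 figure from point 3, under a plain uniform-outcome reading (my illustration, not part of the original sketch):

    # Shannon entropy of a uniform choice over N equally likely outcomes is
    # log2(N); for about one million outcomes that is roughly 20 bits.
    import math
    N = 1_000_000
    print(math.log2(N))   # ~19.93 bits, i.e. H ≈ 20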


Just to make sure I understand:

–Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted? Or better: is that metaphysical setup an argument?

If that’s the game, fine. Here we go:

– The claim that one can build a true, perfectly detailed, exact map of reality is… well... ambitious. It sits remarkably far from anything resembling science, since it’s conveniently untouched by that nitpicky empirical thing called evidence. But sure: freed from falsifiability, it can dream big and give birth to its omnicartographic offspring.

– oh, quick follow-up: does that “perfect map” include itself? If so... say hi to Alan Turing. If not... well, greetings to Herr Goedel.

– Also: if the world only shows itself through perception and cognition, how exactly do you map it “as it truly is”? What are you comparing your map to — other observations? Another map?

– How many properties, relations, transformations, and dimensions does the world have? Over time? Across domains? Under multiple perspectives? Go ahead, I’ll wait... (oh, and: hi too.. you know who)

And btw the true detailed map of the world exists.... It’s the world.

It’s just sort of hard to get a copy of it. Not enough material available ... and/or not enough compute....

P.S. Sorry if that came off sharp — bit of a spur-of-the-moment reply. If you want to actually dig into this seriously, I’d be happy to.


> Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted?

If you are claiming that human intelligence is not "general", you'd better put a huge disclaimer on your text. You are free to redefine words to mean whatever you want, but if you use something so different from the way the entire world uses it, the onus is on you to make it very clear.

And the alternative is you claiming human intelligence is impossible... which would make your paper wrong.


I don't think that's a redefinition. "General" in common usage refers to something that spans all subtypes. For humans to be generally intelligent there would have to be no type of intelligence that they don't exhibit; that's a bold claim.


I mean, I think it is becoming increasingly obvious humans aren't doing as much as we thought they were. So yes, this seems like an overly ambitious definition of what we would in practice call AGI. Can someone ELI5 the requirement this paper puts on something to be considered a general intelligence?


I'm not sure I got the details right, but the paper seems to define "general" as in capable of making a decision rationally following a set of values in any computable problem-space.

If I got that right, yeah, humans absolutely don't qualify. It's not much of a jump to discover it's impossible.


Appreciate the response, and apologies for being needlessly sharp myself. Thank you for bringing the temperature down.

> Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted?

The formality of the paper already supposes a level of rigor. The problem, at its core, is that p_intelligent(x: X) where X ∈ {human, AI} is not a demonstrable scissor by just proving p_intelligent(AI) = false. Without walking us through the steps that p_intelligent(human) = true, we cannot be sure that the predicate isn't simply always false.

Without demonstrating that humans satisfy the claims we can't be sure if the results are vacuously true because nothing, in fact, can satisfy the standard.

These aren't heroic refutations, they're table stakes.


Thanks — and yes, Penrose’s argument is well known.

But this isn’t that, as I’m not making a claim about consciousness or invoking quantum physics or microtubules (which, I agree, are highly speculative).

The core of my argument is based on computability and information theory — not biology. Specifically: that algorithmic systems hit hard formal limits in decision contexts with irreducible complexity or semantic divergence, and those limits are provable using existing mathematical tools (Shannon, Rice, etc.).

So in some way, this is the non-microtubule version of AI critique. I don’t have the physics background to engage in Nobel-level quantum speculation — and, luckily, it’s not needed here.


Seems like all you needed to prove the general case is Goedelian incompleteness. As with incompleteness, entropy-based arguments may never actually interfere with getting work done in the real world with real AI tools.


I guess so too... but whatever it is: it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there that cannot be transcended by tech, compute, training, data etc.


Why can't it be algorithmic? If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does do the same process to do things like consolidating information, processing the "world model" and so on.

Some processes are undoubtedly learned from experience but considering people seem to think many of the same things and are similar in many ways it remains to be seen whether the most important parts are learned rather than innate from birth.


Explain what you mean by "algorithm" and "algorithmic". Be very precise. You are using this vague word to hinge your entire argument on, and it is necessary that you explain first what it means, since from reading your replies here it is clear you are laboring under a definition of "algorithm" quite different from the accepted one.


Why can't it be algorithmic?

Why do you think it mustn't be algorithmic?

Why do you think humans are capable of doing anything that isn't algorithmic?

This statement, and your lack of any mention of the Church-Turing thesis in your papers, suggests you're using a non-standard definition of "algorithmic", and your argument rests on it.


Hi and thanks for engaging :-)

Well, it in fact depends on what intelligence is to your understanding:

- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through or were simply lucky to have found relativity theory and other innovations just at the convenient moment in time... So then, AI will soon also stumble upon all kinds of innovations. Neither of the two will be able to deliberately think beyond what is thinkable at the respective present.

- But if intelligence is not only a level of pure rational cognition, but also an ability to somehow overcome these frame-limits, then humans obviously exert some sort of abilities that are beyond rational inference. Abilities that algorithms cannot possibly reach, as all they can do is compute.

- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.

The main point is: neither algorithms nor rationality can point beyond itself.

In other words: You cannot think out of the box - thinking IS the box.

(Maybe have a quick look at my first proof - last chapter before the conclusion - you will find a historical timeline on that IQ thing.)


Let me steal another user's alternate phrasing: Since humans and computers are both bound by the same physical laws, why does your proof not apply to humans?


Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving. (And also: I am bound by thermodynamics just as my mother-in-law is, yet I still get disarranged by her mere presence, while I always have to put laxatives in her wine to counter that.)

2. Human rationality is just as limited as algorithms are. Neither an algorithm nor human logic can find itself a path from Newton to Einstein's SR, because such a path doesn't exist.

3. Physical laws - where do they really come from? From nature? From logic? Or from that strange thing we do: experience, generate, pattern, abstract, express — and try to make it communicable? I honestly don’t know.

In a nutshell: there obviously is no law that forbids us to innovate - we do this, quite often. There is only a logical boundary, which says that there is no way to derive something from a system that it is not already part of - no way for thinking to point beyond what is thinkable.

Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?" ... I guess "interesting thought" would not have been the probable answer... rather something like "have you been drinking? Stop doing that mental crap - go away, you little moron!"


> Why? 1. Basically because physical laws obviously allow more than algorithmic cognition and problem solving.

You seem to be laboring under the mistaken idea that "algorithmic" does not encompass everything allowed by physics. But, humoring this idea: if physical laws allow it, why can this "more than algorithmic" cognition not be done artificially? As you say - we can obviously do it. What magical line is preventing an artificial system from doing the same?


If by algorithmic you just mean anything that a Turing machine can do, then your theorem is asserting that the Church-Turing thesis isn't true.

Why not use that as the title of your paper? That's a more fundamental claim.


The lack of mention of the Church-Turing thesis in both papers suggests he hasn't even considered that angle.

But it is the fundamental objection he would need to overcome.

There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.


> Basically because physical laws obviously allow more than algorithmic cognition and problem solving.

This is not obvious at all. Unless you can prove that humans can compute functions beyond the Turing computable, there is no basis for thinking that humans embody any physics that "allow more than algorithmic cognition".

Your claim here also goes against the physical interpretation of the Church-Turing thesis.

Without rigorously addressing this, there is no point taking your papers seriously.


No problem, here is your proof - although a bit long:

1. THEOREM: Let a semantic frame be defined as Ω = (Σ, R), where

Σ is a finite symbol set and R is a finite set of inference rules.

Let Ω′ = (Σ′, R′) be a candidate successor frame.

Define a frame jump (Frame Jump Condition): Ω′ extends Ω if Σ′ \ Σ ≠ ∅ or R′ \ R ≠ ∅

Let P be a deterministic Turing machine (TM) operating entirely within Ω.

Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.

(Whereas Σ = the set of all finite symbol strings in the frame; derivable outputs are formed from Σ under the inference rules R.)

Proof Sketch: P’s tape alphabet is fixed to Σ and symbols derived from Σ. By induction, no computation step can introduce a symbol not already in Σ. ∎

2. APPLICATION: Newton → Special Relativity

Let Σᴺ = { t, x, y, z, v, F, m, +, · } (Newtonian Frame)
Let Σᴿ = Σᴺ ∪ { c, γ, η(·,·) } (SR Frame)

Let φ = “The speed of light is invariant in all inertial frames.”
Let Tᴿ be the theory of special relativity.
Let Pᴺ be a TM constrained to Σᴺ.

By Lemma 1, Pᴺ cannot emit any σ ∉ Σᴺ.

But φ ∈ Tᴿ requires σ ∈ Σᴿ \ Σᴺ

→ Therefore Pᴺ ⊬ φ → Tᴿ ⊈ L(Pᴺ)

Thus:

Special Relativity cannot be derived from Newtonian physics within its original formal frame.

3. EMPIRICAL CONFLICT

Let:
Axiom N₁: Galilean transformation (x′ = x − vt, t′ = t)
Axiom N₂: Ether model for light speed
Data D: Michelson–Morley ⇒ c = const

In Ωᴺ, combining N₁ and N₂ with D leads to a contradiction. Resolving D requires introducing {c, γ, η(·,·)}, i.e., Σᴿ \ Σᴺ. But by Lemma 1 this is impossible within Pᴺ. -> The frame must be exited to resolve the data.

4. FRAME JUMP OBSERVATION

Einstein introduced Σᴿ — a new frame with new symbols and transformation rules. He did so without derivation from within Ωᴺ. That constitutes a frame jump.

5. FINALLY

A: Einstein created Tᴿ with Σᴿ, where Σᴿ \ Σᴺ ≠ ∅

B: Einstein was human

C: Therefore, humans can initiate frame jumps (i.e., generate formal systems containing symbols/rules not computable within the original system).

Algorithmic systems (defined by fixed Σ and R) cannot perform frame jumps. But human cognition demonstrably can.

QED.

BUT: Can Humans COMPUTE those functions? (As you asked)

-> Answer: a) No - because frame-jumping is not a computation.

It’s a generative act that lies outside the scope of computational derivation. Any attempt to perform frame-jumping by computation would either a) enter a Goedelian paradox (truth unprovable in frame), b) trigger the halting problem, or c) collapse into semantic overload, where symbols become unstable and inference breaks down.

In each case, the cognitive system fails not from error, but from structural constraint. AND: The same constraint exists for human rationality.


Whoa there boss, extremely tough for you to casually assume that there is a consistent or complete metascience / metaphysics / metamathematics happening in the human realm, but then model it with these impoverished machines that have no metatheoretic access.

This is really sloppy work, I'd encourage you to look deeper into how (eg) HOL models "theories" (roughly corresponding to your idea of "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.

Noncomputability is available in the form of Hilbert's choice, or you can add axioms yourself to capture what notion you think is incomputable.

Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.

Of course I accept that humans are subject to the Goedelian curse, and we are often incoherent, and we're never quite sure when we can stop collecting evidence or updating models based on observation. We are computational.


The claim isn’t that humans maintain a consistent metascience. In fact, quite the opposite. Frame jumps happen precisely because human cognition is not locked into a consistent formal system. That’s the point. It breaks, drifts, mutates. Not elegantly — generatively.

You’re pointing to HOL-in-HOL or other meta-theoretical modeling approaches. But these aren’t equivalent. You can model a frame-jump after it has occurred, yes. You can define it retroactively. But that doesn’t make the generative act itself derivable from within the original system. You’re doing what every algorithmic model does: reverse-engineering emergence into a schema that assumes it.

This is not sloppiness. It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅. That is a hard constraint. Humans, somehow, do. If you don’t like the label “frame jump,” pick another. But that phenomenon is real, and you can’t dissolve it by saying “well, in HOL I can model this afterward.”

If computation is always required to have an external frame to extend itself, then what you’re actually conceding is that self-contained systems can’t self-jump — which is my point exactly...


> It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅

This is trivially false. For any TM with such an alphabet, you can run a program that simulates a TM with an alphabet that includes Σ′.


> Let a semantic frame be defined as Ω = (Σ, R)

But if we let an AGI operate on Ω2 = (English, Science), that semantic frame would have encompassed both Newton and Einstein.

Your argument boils down to one specific and small semantic frame not being general enough to do all of AGI, not that _any_ semantic frame is incapable of AGI.

Your proof only applies to the Newtonian semantic frame. But your claim is that it is true for any semantic frame.


Yes, of course — if you define Ω² as “English + All of Science,” then congratulations, you have defined an unbounded oracle. But you’re just shifting the burden.

No system starting from Ω₁ can generate Ω₂ unless Ω₂ is already implicit. ... If you build a system trained on all of science, then yes, it knows Einstein because you gave it Einstein. But now ask it to generate the successor of Ω² (call it Ω³) with symbols that don’t yet exist. Can it derive those? No, because they’re not in Σ². Same limitation, new domain. This isn’t about “a small frame can’t do AGI.” It’s about every frame being finite, and therefore bounded in its generative reach. The question is whether any algorithmic system can exceed its own Σ and R. The answer is no. That’s not content-dependent, that’s structural.


None of this is relevant to what I wrote. If anything, they suggest that you don't understand the argument.

If anything, your argument is begging the question - it's a logical fallacy - because your argument rests on humans exceeding the Turing computable in order to use human abilities as evidence. But if humans do not exceed the Turing computable, then everything humans can do is evidence that something is Turing computable, and so you cannot use human abilities as evidence that something isn't Turing computable.

And so your reasoning is trivially circular.

EDIT:

To go into more specific errors, this is false:

> Let P be a deterministic Turing machine (TM) operating entirely within Ω.

>

> Then: Lemma 1 (Symbol Containment): For any output L(P) ⊆ Σ, P cannot emit any σ ∉ Σ.

P can do so by simulating a TM P' whose alphabet includes σ. This is fundamental to the theory of computability, and holds for any two sets of symbols: You can always handle the larger alphabet by simulating one machine on the other.

When your "proof" contains elementary errors like this, it's impossible to take this seriously.


You’re flipping the logic.

I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems -> symbolic containment. That’s theorem-level logic.

Then I look at real-world examples (Einstein is just one) where new symbols, concepts, and transformation rules appear that were not derivable within the predecessor frame. You can claim, philosophically (!), that “well, humans must be computable, so Einstein’s leap must be too.” Fine. But now you’re asserting that the uncomputable must be computable because humans did it. That’s your circularity, not mine. I don’t claim humans are “super-Turing.” I claim that frame-jumping is not computation. You can still be physical, messy, and bounded .. and generate outside your rational model. That’s all the proof needs.


No, I'm not flipping the logic.

> I’m not assuming humans are beyond Turing-computable and then using that to prove that AGI can’t be. I’m saying: here is a provable formal limit for algorithmic systems ->symbolic containment. That’s theorem-level logic.

Any such "proof" is irrelevant unless you can prove that humans can exceed the Turing computable. If humans can't exceed the Turing computable, then any "proof" that shows limits for algoritmic systems that somehow don't apply to humans must inherently be incorrect.

And so you're sidestepping the issue.

> But now you’re asserting that the uncomputable must be computable because humans did it.

No, you're here demonstrating you failed to understand the argument.

I'm asserting that you cannot use the fact that humans can do something as proof that humans exceed the Turing computable, because if humans do not exceed the Turing computable said "proof" would still give the same result. As such it does not prove anything.

And proving that humans exceed the Turing computable is a necessary precondition for proving AGI impossible.

> I don’t claim humans are “super-Turing.”

Then your claim to prove AGI can't exist is trivially false. For it to be true, you would need to make that claim, and prove it.

That you don't seem to understand this tells me you don't understand the subject.

(See also my edit above; your proof also contains elementary failures to understand Turing machines)


You’re misreading what I’m doing, and I suspect you’re also misdefining what a “proof” in this space needs to be.

I’m not assuming humans exceed the Turing computable. I’m not using human behavior as a proof of AGI’s impossibility. I’m doing something much more modest - and much more rigorous.

Here’s the actual chain:

1. There’s a formal boundary for algorithmic systems. It’s called symbolic containment. A system defined by a finite symbol set Σ and rule set R cannot generate a successor frame (Σ′, R′) where Σ′ introduces novel symbols not contained in Σ. This is not philosophy — this is structural containment, and it is provable.

2. Then I observe: in human intellectual history, we find recurring examples of frame expansion. Not optimization, not interpolation — expansion. New primitives. New rules. Special relativity didn’t emerge from Newton through deduction. It required symbols and structures that couldn’t be formed inside the original frame.

3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

4. This leads to a conclusion: if AGI is an algorithmic system (finite symbols, finite rules, formal inference), then it will not be capable of frame jumps. And it is not incapable of that because it lacks compute. The system is structurally bounded by what it is.

So your complaint that I “haven’t proven humans exceed Turing” is misplaced. I didn’t claim to. You’re asking me to prove something that I simply don’t need to assert.

I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed). Therefore, if humans are purely algorithmic, something’s missing in our understanding of how those systems operate. And if AGI remains within the current algorithmic paradigm, it will not do X. That’s what I’ve shown.

You can still believe humans are Turing machines, fine by me. But if this belief is to be more than some kind of religious statement, then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅. It is you that would need to show how uncomputable concepts emerge from computable substrates without violating containment (-> and that means: without violating its own logic - as in formal systems, logic and containment end up as the same thing: your symbol set defines your expressive space; step outside that, and you’re no longer reasoning — you’re redefining the space, the universe you’re reasoning in).

Otherwise, the limitation stands — and the claim that “AGI can do anything humans do” remains an ungrounded leap of faith.

Also: if you believe the only valid proof of AGI impossibility must rest on metaphysical redefinition of humanity as “super-Turing,” then you’ve set an artificial constraint that ensures no such proof could ever exist, no matter the logic.

That’s intellectually trading epistemic rigor for insulation.

As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal model again.


There's nothing rigorous about this. It's pure crackpottery.

As long as you claim to disprove AGI, it inherently follows that you need to prove that humans exceed the Turing computable to succeed. Since you specifically state that you are not trying to prove that humans exceed the Turing computable, you're demonstrating a fundamental lack of understanding of the problem.

> 3. That’s not “proof” that humans exceed the Turing computable. That’s empirical evidence that human cognition appears to do something algorithmic systems, as formally defined, cannot do.

This is only true if humans exceed the Turing computable, as otherwise humans are proof that this is something that an algorithmic system can do. So despite claiming that you're not trying to prove that humans exceed the Turing computable, you are making the claim that humans can.

> I’m saying: algorithmic systems can’t do X (provable), and humans appear to do X (observed).

This is a direct statement that you claim that humans are observed to exceed the Turing computable.

> then it is you that would need to explain how a Turing machine bounded to Σ can generate Σ′ with Σ′ \ Σ ≠ ∅

This is fundamental to Turing equivalence. If there exists any Turing machine that can generate Σ′, then any Turing machine can generate Σ′.

Anything that is possible with any Turing machine, in fact, is possible with a machine with as few as 2 symbols (the smallest (2,3) Turing machine is usually 2 states and 3 symbols, but per Shannon you can always trade states for symbols, and so a (3,2) Turing machine is also possible). This is because you can always simulate an environment where a larger alphabet is encoded with multiple symbols.
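A toy version of that encoding trick, in case it helps (my sketch, not from either paper): a machine whose working alphabet is just {0, 1} can still represent and emit any richer alphabet once an encoding is fixed, which is all the simulation argument needs.

    # Toy illustration: a machine restricted to the alphabet {0, 1} can
    # represent and emit symbols from a richer alphabet Σ' via a fixed
    # encoding - the essence of simulating a larger-alphabet TM on a
    # smaller one.
    SIGMA_PRIME = ["t", "x", "c", "γ"]   # includes symbols "outside" Σ = {0, 1}
    WIDTH = 2                            # 2 bits per symbol suffice for 4 symbols

    def encode(symbols):
        return "".join(format(SIGMA_PRIME.index(s), f"0{WIDTH}b") for s in symbols)

    def decode(bits):
        return [SIGMA_PRIME[int(bits[i:i + WIDTH], 2)] for i in range(0, len(bits), WIDTH)]

    tape = encode(["c", "γ", "x"])   # the binary machine only ever touches 0s and 1s
    print(tape)                      # 101101
    print(decode(tape))              # ['c', 'γ', 'x'], recovered from the encoding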

> As for your claim that I misunderstand Turing machines, please feel free to show precisely which part fails. The statement that a TM cannot emit symbols not present in its alphabet is not a misunderstanding — it’s the foundation of how TMs are defined. If you think otherwise, then I would politely suggest you review the formal modl again.

This is exactly the part that fails.

Any TM can simulate any other, and, by extension, any TM can be extended to any alphabet through simulation.

If you don't understand this, then you don't understand the very basics of Turing Machines.


“Imagine little Albert asking his physics teacher in 1880: "Sir - for how long do I have to stay at high speed in order to look as grown up as my elder brother?"”

Is that not the other way around? “…how long do I have to stay at high speed in order for my younger brother to look as grown up as myself?”


Staying at high speed is symmetric! You'd both appear to age slower from the other's POV. It's only if one brother turns around and comes back, therefore accelerating, that you get an asymmetry.


Indeed. One of my other thoughts here on the Relativity example was "That sets the bar high given most humans can't figure out special relativity even with all the explainers for Einstein's work".

But I'm so used to AGI being conflated with ASI that it didn't seem worth it compared to the more fundamental errors.


Given rcxdude’s reply it appears I am one of those humans who can’t figure out special relativity (let alone general)

Wrt ‘AGI/ASI’, while they’re not the same, after reading Nick Bostrom (and more recently https://ai-2027.com) I lean towards AGI being a blip on the timeline towards ASI. Who knows.


The standard model is computable, so no. Physical law does not allow for non-computable behavior.


This paper presents a theoretical proof that AGI systems will structurally collapse under certain semantic conditions — not due to lack of compute, but because of how entropy behaves in heavy-tailed decision spaces.

The idea is called IOpenER: Information Opens, Entropy Rises. It builds on Shannon’s information theory to show that in specific problem classes (those with α ≤ 1), adding information doesn’t reduce uncertainty — it increases it. The system can’t converge, because meaning itself keeps multiplying.
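To give a concrete (and admittedly simplified) handle on the α ≤ 1 regime, here is a toy sketch of my own, not a construction from the paper: with power-law weights p(k) ∝ 1/k, enlarging the candidate set keeps pushing the entropy up instead of letting it settle.

    # Toy proxy for the alpha <= 1 regime (not the paper's construction):
    # with weights p(k) proportional to 1/k^alpha and alpha = 1, the entropy
    # of the first N candidates keeps growing as N grows - widening the
    # option set raises uncertainty instead of resolving it.
    import math

    def truncated_entropy(alpha, N):
        weights = [k ** -alpha for k in range(1, N + 1)]
        Z = sum(weights)
        return -sum((w / Z) * math.log2(w / Z) for w in weights)

    for N in (10**2, 10**4, 10**6):
        print(N, round(truncated_entropy(1.0, N), 2))   # bits; keeps rising with N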

The core concept — entropy divergence in these spaces — was already present in my earlier paper, uploaded to PhilArchive on June 1. This version formalizes it. Apple’s study, The Illusion of Thinking, was published a few days later. It shows that frontier reasoning models like Claude 3.7 and DeepSeek-R1 break down exactly when problem complexity increases — despite adequate inference budget.

I didn’t write this paper in response to Apple’s work. But the alignment is striking. Their empirical findings seem to match what IOpenER predicts.

Curious what this community thinks: is this a meaningful convergence, or just an interesting coincidence?

Links:

This paper (entropy + IOpenER): https://philarchive.org/archive/SCHAIM-14

First paper (ICB + computability): https://philpapers.org/archive/SCHAII-17.pdf

Apple’s study: https://machinelearning.apple.com/research/illusion-of-think...


I am sympathetic to the kind of claims made by your paper. I like impossibility results and I could believe that for some definition of AGI there is at least a plausible argument that entropy is a problem. Scalable quantum computing is a good point of comparison.

But your paper is throwing up crank red flags left and right. If you have a strong argument for such a bold claim, you should put it front and centre: give your definition of AGI, give your proof, let it stand on its own. Some discussion of the definition is useful. Discussion of your personal life and Kant is really not.

Skimming through your paper, your argument seems to boil down to "there must be some questions AGI gets wrong". Well, since the definition includes that AGI is algorithmic, this is already clear thanks to the halting problem.


Thanks for this - Looking forward to reading the full paper.

That said, the most obvious objection that comes to mind about the title is that … well, I feel that I’m generally intelligent, and therefore general intelligence of some sort is clearly not impossible.

Can you give a short précis as to how you are distinguishing humans and the “A” in artificial?


That about ‘cogito ergo sums it up’, doesn’t it?

Intelligence is clearly possible. My gut feeling is our brain solves this by removing complexity. It certainly does so, continuously filtering out (ignoring) large parts of input, and generously interpolating over gaps (making stuff up). Whether this evolved to overcome this theorem I am not intelligent enough to conclude.


catoc states, amongst other things, that: >"Intelligence is clearly possible."<

Perhaps not a citation but a proof is required here!


Clearly possible in humans - the statement in the parent I was replying to.

I would indeed definitely like to see proof - mathematical or applied - of in silico intelligence


Do you expect silicon to be less capable of the necessary computation than cells?


At best, I'm asking for a clear scientific definition of intelligence. At worst I'm questioning its very existence.


If you haven't yet encountered it, check out Michael Levin's lab and work. Among other things, they are trying to figure out what "basal cognition" is; the idea being that even if we can't point to some dividing line in the end, we'll have a better understanding of what cognition is and where it shows up. And it shows up surprisingly far down!

The definition of "intelligence" that he works with comes from William James: the ability to achieve the same goal by different means. It's a useful definition, given the remarkable stuff coming out of his lab.


I would indeed definitely like to see proof - of intelligence!


Sure I can (and thanks for writing)

Well, given the specific way you asked that question, I confirm your self-assertion - and am quite certain that your level of Artificiality converges to zero, which would make you a GI without the A...

- You stated to "feel" generally intelligent (A's don't feel and don't have an "I" that can feel) - Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity

A "précis" as you wished: Artificial — in the sense used here (apart from the usual "planfully built/programmed system" etc.) — algorithmic, formal, symbol-bound.

Humans as "cognitive system" have some similar traits of course - but obviously, there seems to be more than that.


>but obviously, there seems to be more than that.

I don't see how that's obvious. I'm not trying to be argumentative here, but it seems like these arguments always come down to qualia, or the insistence that humans have some sort of 'spark' that machines don't have, therefore: AGI is not possible since machines don't have it.

I also don't understand the argument that "Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?

What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of?


> I also don't understand the argument that "Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?

It doesn't follow.

Trivially demonstrated by the early LLM that got Blake Lemoine to break his NDA also emitting words which suggested to Lemoine that the LLM had an inner life.

Or, indeed, the output device y'all are using to read or listen to my words, which is also successfully emitting these words despite the output device very much only following an algorithm that simply recreates what it was told to recreate. "Ceci n'est pas une pipe", etc. https://en.wikipedia.org/wiki/The_Treachery_of_Images


Oh no, I am not at all trying to find an explanation of why this is (qualia etc.). There is simply no necessity for that. It is interesting, but not part of the scientific problem that I tried to find an answer to.

The proof (all three of them) holds without any explanatory effort concerning causalities around human frame-jumping etc.

For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically and that b) evidence clearly shows that humans can (somehow) do this, as they have already done this (quite often).


> this cannot be reached algorithmically

> humans can (somehow) do this

Is this not contradictory?

Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?

Or at minimum presupposes that humans are more than just a biochemical machine. But then the question comes up again, where is the scientific evidence for this? In my view it's perfectly acceptable if the answer is something to the effect of "we don't currently have evidence for that, but this hints that we ought to look for it".

All that said, does "algorithmically" here perhaps exclude heuristics? Many times something can be shown to be unsolvable in the absolute sense yet readily solvable with extremely high success rate in practice using some heuristic.


OP seems to have a very confused idea of what an algorithmic process means... they think the process of humans determining what is truthful "cannot possibly be something algorithmic".

Which is certainly an opinion.

> whatever it is: it cannot possibly be something algorithmic

https://news.ycombinator.com/item?id=44349299

Maybe OP should have looked at a dictionary for what certain words actually mean before defining them to be something nonsensical.


> Maybe OP should have looked at a dictionary for what certain words actually mean before defining them to be something nonsensical.

Making non-standard definitions of words isn't necessarily bad, and can be useful in certain texts. But if you do so, you need to make these definitions front-and-centre instead of just casually assuming your readers will share your non-standard meaning.

And where possible, I would still use the standard meanings and use newly made up terms to carry new concepts.


Maybe you need to update an outdated model?

Nothing in physics requires us to use your prior experience as some special epoch.

Meaning is a mutable social relationship; language meaning is not immutable physics.


The model I am using is the conventional understanding of physics. What model are you using?

> language meaning is not immutable physics.

Our understanding of physics is not complete, so why would our model of it be final? No one is saying it is.

Everything we currently know about physics, all the experiments we've conducted, suggests the physical church turing thesis is true.

If you want to claim that the last x% of our missing knowledge will overturn everything and reality is in fact not computable, you are free to do so, and this may well even be true.

But so far the evidence is not in your favor and you'd do well to acknowledge that.


> Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?

No, computation is algorithmic, real machines are not necessarily (of course, AGI still can't be ruled out even if algorithmic intelligence is; only AGI that does not incorporate some component with noncomputable behavior would be ruled out).


> computation is algorithmic, real machines are not necessarily

The author seems to assume the latter condition is definitive, i.e. that real machines are not, and then derives extrapolations from that unproven assumption.


> No, computation is algorithmic, real machines are not necessarily

As the adjacent comment touches on, are the laws of physics (as understood to date) not possible to simulate? Can't all possible machines be simulated, at least in theory? I'm guessing my knowledge of the term "algorithmic" is lacking here.


As far as we can tell, all the known laws of nature are computable. And I think most of them are even efficiently computable, especially if you have a quantum computer.

Quantum mechanics is even linear!

Fun fact, quantum mechanics is also deterministic, if you stay away from bonkers interpretations like Copenhagen and stick to just the theory itself or saner interpretations.


Using computation/algorithmic methods we can simulate nonalgorithmic systems. So the world within a computer program can behave in a nonalgorithmic way.

Also, one might argue that universe/laws of physics are computational.


> Also, one might argue that universe/laws of physics are computational.

Maybe we need to define "computational" before moving on. To me this echoes the clockwork universe of the Enlightenment. Insights of quantum physics have shattered this idea.


> Insights of quantum physics have shattered this idea.

Not at all. Quantum mechanics is fully deterministic, if you stay away from bonkers interpretations like Copenhagen.

And, of course, you can simulate random processes just fine even on a deterministic system: use a pseudo random number generator, or just connect a physical hardware random number generator to your otherwise deterministic system. Compared to all the hardware used in our LLMs so far, random number cards are cheap kit.

Though I doubt a hardware random number generator will make the difference between dumb and intelligent systems: pseudo random number generators are just too good, and generalising a bit you'd need P=NP to be true for your system to behave differently with a good PRNG vs real random numbers.


You can simulate a nondeterministic process. There's just no way to consistently get a matching outcome. It's no different than running the process itself multiple times and getting different outputs for the same inputs.


> For this paper, It is absolutely sufficient to prove that a) this cannot be reached algorithmically and that b) evidence clearly shows that humans can (somehow) do this , as they have already done this (quite often).

The problem with these kinds of arguments is always that they conflate two possibly related but non-equivalent kinds of computational problem solving.

In computability theory, an uncomputability result essentially only proves that it's impossible to have an algorithm that will in all cases produce the correct result to a given problem. Such an impossibility result is valuable as a purely mathematical result, but also because what computer science generally wants is a provably correct algorithm: one that will, when performed exactly, always produce the correct answer.

However, similarly to any mathematical proof, a single counter-example is enough to invalidate a proof of correctness. Showing that an algorithm fails in a single corner case makes the algorithm not correct in a classical algorithmic sense. Similarly, for a computational problem, showing that any purported algorithm will inevitably fail even in a single case is enough to prove the problem uncomputable -- again, in the classical computability theory sense.

If you cannot have an exact algorithm, for either theoretical or practical reasons, and you still want a computational method for solving the problem in practice, you then turn to heuristics or something else that doesn't guarantee correctness but which might produce workable results often enough to be useful.

Even though something like the halting problem is uncomputable in the classical, always-inevitably-produces-correct-answer-in-finite-time sense, that does not necessarily stop it from being solved in a subset of cases, or to be solved often enough by some kind of a heuristic or non-exact algorithm to be useful.
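To make the "solved often enough by a heuristic" point concrete, a small sketch of my own (a toy program representation, not any standard tool): a step-bounded checker answers correctly whenever the program halts within its budget and says "unknown" otherwise, which is useful in practice without being a decision procedure.

    # Hedged illustration: a step-bounded "halting heuristic" for toy
    # programs given as step functions over a state. It is correct whenever
    # the program halts within the budget, and abstains otherwise - not a
    # solution to the halting problem, just a practically useful heuristic.
    def bounded_halts(step_fn, state, budget=10_000):
        for _ in range(budget):
            state = step_fn(state)
            if state is None:          # convention: None means "halted"
                return "halts"
        return "unknown"

    countdown = lambda n: None if n == 0 else n - 1   # halts for any n >= 0
    looper = lambda n: n + 1                          # never halts

    print(bounded_halts(countdown, 50))   # "halts"
    print(bounded_halts(looper, 0))       # "unknown"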

When you say that something cannot be reached algorithmically, you're saying it's impossible to have an algorithm that would inevitably, systematically, always reach that solution in finite time. And you would in many cases be correct. Symbolic AI research ran into this problem due to the uncomputability of reasoning in predicate logic. (Uncomputability is not the main problem that symbolic AI ran into but it was one of them.)

The problem is that when you say that humans can somehow do this computationally impossible thing, you're not holding human cognition or problem solving to the same standard of computational correctness. We do find solutions to problems, answers to questions, and logical chains of reasoning, but we aren't guaranteed to.

You do seem to be aware of this, of course.

But you then run into the inevitable question of what you mean by AGI. If you hold AGI to the standard of classical computational correctness, to which you don't hold humans, you're correct that it's impossible. But you have also proven nothing new.

A more typical understanding of AGI would be something similar to human cognition -- not having formal guarantees but working well enough for operating in, understanding, or producing useful results in the real world. (Human brains do that well in the real world -- thanks to having evolved in it!)

In the latter case, uncomputability results do not prove that kind of AGI to be impossible.


Indeed. And it's fairly trivial to see that computability isn't the right lens to view intelligence through:

The classic Turing test takes place over a finite amount of time. Normally less than an hour, but we can arbitrarily give the interlocutor, say, up to a week. If you don't like the Turing test, then just about any other test interaction we can make the system undergo will conclude below some fixed finite time. After all, humans are generally intelligent, even if they only get a handful of decades to prove it.

During that finite time interaction, only a finite amount of interaction will be exchanged.

Now in principle a system could have a big old lookup table with all prefixes of all possible interactions as keys, and values are probability distributions for what to send back next (and how long to wait before sending the reply). That table would be finite. And thus following it would be computable.
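A minimal sketch of that lookup table, just to make the point concrete (the entries here are made up; the real table would need one entry per possible conversation prefix, which is exactly why it is only a thought experiment):

    # Sketch of the (finite but physically unbuildable) lookup-table responder:
    # keys are conversation prefixes, values are possible replies. Everything
    # here is trivially computable; the obstacle is sheer size, which is a
    # complexity question rather than a computability one.
    import random

    TABLE = {
        (): ["Hello.", "Hi there."],
        ("Hello.",): ["How can I help?"],
        ("Hello.", "How can I help?"): ["Tell me about entropy."],
        # ...one entry for every possible finite prefix, in the thought experiment
    }

    def reply(transcript):
        options = TABLE.get(tuple(transcript), ["<no entry>"])
        return random.choice(options)   # sample from the stored distribution

    print(reply([]))           # "Hello." or "Hi there."
    print(reply(["Hello."]))   # "How can I help?"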

Of course, the table would be more than astronomical in size, and utterly impossible to manifest in our physical universe. But computability is too blunt an instrument to formalise this with.

In the real universe, you would need to _compress_ that table somehow, eg in a human brain or perhaps in an LLM or so. And then you need to be able to efficiently uncompress the parts of the table you need to produce the replies. Whether that's possible and how are all questions of complexity theory, not computability.

See Scott Aaronson's excellent 'Why Philosophers Should Care About Computational Complexity': https://arxiv.org/abs/1108.1791


Consciousness is an issue. If you write a program to add 2+2, you probably do not believe some entity poofs into existence, perceives itself as independently adding 2+2, and then poofs out of existence. Yet somehow, the idea of an emergent consciousness is that if you instead get it to do 100 basic operations, or perhaps 2^100 then suddenly this becomes true? The reason one might believe this is not because it's logical or reasonable - or even supported in any way, but because people assume their own conclusion. In particular if one takes a physicalist view of the universe then consciousness must be a physical process and so it simply must emerge at some sufficient degree of complexity.

But if you don't simply assume physicalism then this logic falls flat. And the more we discover about the universe, the weirder things become. How insane would you sound not that long ago to suggest that time itself would move at different rates for different people at the same "time", just to maintain a perceived constancy of the speed of light? It's nonsense, but it's real. So I'm quite reluctant to assume my own conclusion on anything with regards to the nature of the universe. Even relatively 'simple' things like quantum entanglement are already posing very difficult issues for a physicalist view of the universe.


>Yet somehow, the idea of an emergent consciousness is that if you instead get it to do 100 basic operations, or perhaps 2^100 then suddenly this becomes true

Why not? You can do a simple add with assembly language in a few operations. But if you put millions and millions of operations together you can get a video game with emergent behaviors. If you're just looking at the additions, where does the game come from? Is it still a game if it's not output to a monitor but an internal screen buffer?


You're not speaking of a behavior but of a "thing." Your consciousness sits idly inside your body, feeling as though it's driving all actions of its own free will. There's no necessity, reason, or logical explanation for this thing to exist, let alone why or where it comes from.

No matter how many instructions you might use to create the most compelling simulation of a dragon in a video game, neither that dragon nor any part of it is going to poof into existence. I'm sure this is something everybody would agree with. Yet with consciousness you want to claim 'well, except it's consciousness, yeah, that'll poof into existence.' The assumption of physicalism ends up requiring people to make statements that they themselves would certainly call absurd if not for the fact that they are forced to make such statements because of said assumption!

And what is the justification for said assumption? There is none! As mentioned already quantum entanglement is posing major issues for physicalism, and I suspect we're really only just beginning to delve into the bizarro nature of our universe. So people embrace physicalism purely on faith.


>There's no necessity, reason, or logical explanation for this thing

I mean, I disagree. It's an internal virtual 'playground' you can bounce ideas off of and reason against. Obviously it imparts some survival benefits to creatures that have one at this point in evolution.


This gets to the issue. What is bouncing ideas off of yourself and reasoning against such? Well it's nothing particularly complex. A conditional is its most fundamental incarnation - add some variables and weights and you have just what you described in a few lines of code. Of course you don't think this poofs a consciousness into existence.

For consciousness to be emergent at some point there has to be wild hand-waving of 'well you see, it just needs to be more complex.' But any program is fundamentally nothing more than a simple set of instructions, so it all comes down to this issue. And if I hit a breakpoint and pause, and then start stepping through the assembly - ADD, MUL, CMP. Is the consciousness still imagining itself doing those things? Or does it just somehow disappear when I start stepping through instructions?

For even the most complex visual or behavior, you can stair-step, quite rapidly, down to a very simple set of instructions. And nowhere in these steps is there any logical room for a consciousness to just suddenly appear.


My issue is that from a scientific point of view, physicalism is all we have. Everything else is belief, or some form of faith.

Your example about relativity is good. It might have sounded insane at some point, but it turns out, it is physics, which nicely falls into the physicalism concept.

If there is a falsifiable scientific theory that there is something other than a physical mechanism behind consciousness and intelligence, I haven't seen it.


I don't think science and consciousness go together quite well at this point. I'll claim consciousness doesn't exist. Try to prove me wrong. Of course I know I'm wrong because I am conscious, but that's literally impossible to prove, and it may very well be that way forever. You have no way of knowing I'm conscious - you could very well be the only conscious entity in existence. This is not the case because I can strongly assure you I'm conscious as well, but a philosophical zombie would say the same thing, so that assurance means nothing.


There is more than one theory, as well as some evidence that consciousness may not exist in the way we'd like to think.

It may be a trick our mind plays on us. The Global Workspace Theory addresses this, and some of the predictions this theory made have been supported by multiple experiments. If GWT is correct, it's very plausible, likely even, that an artificial intelligence could have the same type of consciousness.


That again requires assuming your own conclusion. Once again I have no way of knowing you are conscious. In order for any of this to not be nonsense I have to make a large number of assumptions including that you are conscious, that it is a physical process, that is an emergent process, and so on.

I am unwilling to accept any of the required assumptions because they are essentially based on faith.


Boltzmann brains and A. J. Ayer's "There is a thought now".

Ages ago, it occurred to me that the only thing that seemed to exist without needing a creator, was maths. That 2+2 was always 4, and it still would be even if there were not 4 things to count.

Basically, I independently arrived at a similar conclusion as Max Tegmark, only simpler and without his level of rigour: https://benwheatley.github.io/blog/2018/08/26-08.28.24.html

(From the quotation's date stamp, 2007, I had only finished university 6 months earlier, so don't expect anything good).

But as you'll see from my final paragraph, I no longer take this idea seriously, because anything that leads to most minds being free to believe untruths is cognitively unstable by the same argument that applies to Boltzmann brains.

MUH leads to an Aleph-1 infinite number of brains*. I'd need a reason for the probability distribution over minds to be zero almost everywhere in order for it to avoid the cognitive instability argument.

* if there is a bigger infinity, then more; but I have only basic knowledge of transfinites and am unclear if the "bigger" ones I've heard about are considered "real" or more along the lines of "if there was an infinite sequence of infinities, then…"


Human minds are fairly free to believe untruths. At least to a certain extent: it's rather hard to _really_ believe things that contradict your lived experience.

You can _say_ that you believe them, but you won't behave as if you believe them.


Yeah, that's fair, I phrased that part wrong.

The problem with Boltzmann brains is that, by construction, they're going to have incorrect beliefs about almost everything.

Like, imagine watching a TV tuned to a dead station and somehow the random background noise looked and sounded like someone telling you the history of the world, and it really was just random noise doing this — that level of being wrong about almost everything.

Not even just errors like believing 1+1=3, but that this is equally likely as believing incoherent statements like 1+^Ω[fox emoji].


Yes, that makes sense.


> What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of

Iron and copper are both metals but only one can be hardened into steel

There is no reason why we should assume a silicon machine must have the same capabilities as a carbon machine


Unless you can show - even a single example would do - that we can compute a function that is outside the Turing computable set, there is a very strong reason that we should assume a silicon machine has the same capabilities to compute as a carbon machine.


The problem is that your challenge is begging the question.

Computability or algorithms are the problem.

It is all the 'no effective algorithm exists for X' that is the problem.

Spike train retiming and issues with riddled basins in existing computers and math are examples, if you drop the 'compute a function' framing.


> There is no reason why we should assume a silicon machine must have the same capabilities as a carbon machine

Then make your computer out of carbon.

While the broader principle holds - that we don't know what we're doing, and AI as it currently exists is a bit cargo-culty - this is a critique of the SOTA and is insufficient to be generalised: we can reasonably say "we probably have not", but we can't say "we definitely cannot ever".

Who knows, perhaps our brains do somehow manage to do whacky quantum stuff despite seeming to be far too warm and messy for that. But even that is just an implementation detail.


> Who knows, perhaps our brains do somehow manage to do whacky quantum stuff despite seeming to be far too warm and messy for that. But even that is just an implementation detail.

Yes. And we are pretty close to building practical quantum computers. Though so far, we haven't really found much they would be good for. The most promising application seems to be for simulating quantum systems for material science.


Yeah, but bronze also makes great swords… what’s the point here?


> You stated to "feel" generally intelligent (A's don't feel and don't have an "I" that can feel) - Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity

This is completely unrelated to the proof in the link. You have to clearly explain how the reasoning in your argument for “AGI is impossible” still leaves human intelligence possible. You can’t just jump to the conclusion “you sound human, therefore intelligence is possible”.


It's simple: Either your proof holds for NGI as much as for AGI, or neither, or you can clearly define what differentiates them that makes it work for one and not the other.


These are.. very weak rebuttals.


Agreed. I thought my followup qs were fair. I'd like to understand the argument, but the first response makes me think it's not worth wading too deeply in.


I think you’ve just successfully proven that general human intelligence indeed does not exist.


So, in a word: a) there is no ghost in the machine when the machine is a formal symbol-bound machine. And b) to be “G” there must be a ghost in the machine.

Is that a fair summary of your summary?

If so do you spend time on both a and b in your papers? Both are statements that seem to generate vigorous emotional debate.


> level of Artificiality

How do you define that? And why is this important?


Not the person asked, but in time honoured tradition I will venture forth that the key difference is billions of years of evolution. Innumerable blooms and culls. And a system that is vertically integrated to its core and self sustaining.


AI can be, and often are, trained by simulated evolution.


Simulated.


You have to say why you think that matters. It still culls the unfit.


I don’t. You have boiled a process of billions of years down to a single sentence. You should ponder your absurdity.


That's the point of language, to abstract a complex thing to what is often as little as a single word or sentence. It's not like the idea represented by the words "simulated evolution" is itself as simple as those two words anyway.

That it takes nature "billions of years" for natural evolution isn't even important here, because it's not like simulations have to run in real-time.

If you run simulated evolution with mechanical parts and the reward function of things that function like clocks, you get (the design of) a thing that functions like a clock, and if you run the physics simulation of the design, you can tell the time with it. Do it with electronics and things that act like a radio, you get a radio. Do it with a CAD design and the goal of strength for minimum mass, you end up with something that looks bone-like.
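
To make "simulated evolution" concrete, here is a minimal sketch (my own toy, with a trivial stand-in fitness function): evolve bit strings toward whatever reward you define. Swap the fitness function for gear ratios, circuit layouts, or truss mass and the same loop yields clock-like, radio-like, or bone-like designs.

    import random

    def fitness(genome):
        return sum(genome)              # stand-in reward: count the ones

    def evolve(pop_size=50, length=32, generations=100, mutation_rate=0.02):
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]             # cull the unfit
            children = [[bit ^ (random.random() < mutation_rate) for bit in p]
                        for p in survivors]             # mutated copies of survivors
            pop = survivors + children
        return max(pop, key=fitness)

    print(fitness(evolve()))            # approaches 32 as generations increase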

We also do it with AI, why should we expect it not to produce things in the general category of "minds"? Not necessarily human minds, even the biggest by parameter count are much smaller structures than our brains, but the general category.


I would argue that you are not a general intelligence. Humans have quite a specific intelligence. It might be the broadest, most general, among animal species, but it is not general. That manifests in that we each need to spend a significant amount of time training ourselves for specific areas of capability. You can't then switch instantly to another area without further training, even though all the context materials are available to you.


This seems like a meaningless distinction in context. When people say AGI, they clearly mean "effectively human intelligence". Not an infallible, completely deterministic, omniscient god-machine.


There's a great deal of space between effectively human and god machine. Effectively human meaning it takes 20 years to train it and then it's good at one thing and OK at some other things, if you're lucky. We expect more from LLMs right now, like having very broad knowledge and being able to ingest vastly more context than a human can every time they're used. So we probably don't just think of or want a human intelligence... or we want an instant specific one, and the process of being able to generate an instant specific one would surely be further down the line toward your god-like machine anyway.


The measure of human intelligence is never what humans are good at, but rather the capabilities of humans to figure out stuff they haven't before. Meaning, we can create and build new pathways inside our brains to perform and optimize tasks we have not done before. Practicing, then, reinforces these pathways. In a sense we do what we wish LLMs could - we use our intelligence to train ourselves.

It's a long (ish) process, but it's this process that actually composes human intelligence. I could take a random human right now and drop them somewhere they've never been before, and they will figure it out.

For example, you may be shocked to know that the human brain has no pathways for reading, as opposed to spoken language. We have to manually make those. We are, literally, modifying our brains when we learn new skills.


> For example, you may be shocked to know that the human brain has no pathways for reading, as opposed to spoken language.

I'm not shocked at all.

> I could take a random human right now and drop them somewhere they've never been before, and they will figure it out.

Yes, well, not really. You could drop them anywhere in the human world, in their body. And even then, if you dropped me into a warehouse in China I'd have no idea what to do; I'd be culturally lost and unable to understand the language. And I'd want to go home. So yes, you could drop in a human, but they wouldn't then just perform work like an automaton. You couldn't drop their mind into a non-human body and expect anything interesting to happen, and you certainly couldn't drop them anywhere inhospitable. Nearer to your example, you couldn't drop a football player into a maths convention and a maths professor into a football game and expect good results. The point of an AI is to be useful. I think AGI is very far away and maybe not even possible, whereas specific AIs already abound.


It doesn't take 20 years for humans to learn new tasks. Perhaps to master very complicated tasks, but there are many tasks you can certainly learn to do in a short amount of time. For example, "Take this hammer, and put nails in the top 4 corners of this box, turn it around, do the same". You can master that relatively easily. An AGI ought to be able to do practically all such tasks.

In any case, general intelligence merely means the capability to do so, not the amount of time it takes. I would certainly bet that a theoretical physicist, for example, can learn to code in a matter of days despite never having been introduced to a computer before, because our intelligence is based on a very interconnected world model.


It takes about 10 years to train a human to do anything useful after creation.


A 4 year old can navigate the world better than any AI robot can


While I'm constantly disappointed by self driving cars, I do get the impression they're better at navigating the world than I was when I was four. And in public roads specifically, better than when I was fourteen.


Note that AGI (artificial general intelligence) and ASI (artificial superintelligence) are different things

AGI reaches human level and ASI goes beyond that


The mathematical proof, as you describe it, sounds like the "No Free Lunch theorem". Humans also can't generalise to learning such things.

As you note in 2.1, there is widespread disagreement on what "AGI" means. I note that you list several definitions which are essentially "is human equivalent". As humans can be reduced to physics, and physics can be expressed as a computer program, obviously any such definition can be achieved by a sufficiently powerful computer.

For 3.1, you assert:

"""

Now, let's observe what happens when an AI system - equipped with state-of-the-art natural language processing, sentiment analysis, and social reasoning - attempts to navigate this question. The AI begins its analysis:

• Option 1: Truthful response based on biometric data → Calculates likely negative emotional impact → Adjusts for honesty parameter → But wait, what about relationship history? → Recalculating...

• Option 2: Diplomatic deflection → Analyzing 10,000 successful deflection patterns → But tone matters → Analyzing micro-expressions needed → But timing matters → But past conversations matter → Still calculating...

• Option 3: Affectionate redirect → Processing optimal sentiment → But what IS optimal here? The goal keeps shifting → Is it honesty? Harmony? Trust? → Parameters unstable → Still calculating...

• Option n: ....

Strange, isn't it? The AI hasn't crashed. It's still running. In fact, it's generating more and more nuanced analyses. Each additional factor may open ten new considerations. It's not getting closer to an answer - it's diverging.

"""

Which AI? ChatGPT just gives an answer. Your other supposed examples have similar issues, in that it looks like you've *imagined* an AI rather than having tried asking an AI to see what it actually does or doesn't do.

I'm not reading 47 pages to check for other similar issues.


> physics can be expressed as a computer program

Citation needed. If you've spent any time with dynamical systems, as an example, you'd know that the computer basically only kind of crudely estimates things, and only things that are abstractly nearby. You may be able to write down some PDEs or field equations that may describe things at some base level, but even statistical mechanics, which is really what governs a huge amount of what we see and interact with, is just a pretty good approximation. Computers (especially real ones) only generate approximate (to some value of alpha) answers; physics is not reducible to a computer program at all.


> You may be able to write down some PDEs or field equations that may describe things at some base level, but even statistical mechanics, which is really what governs a huge amount of what we see and interact with, is just a pretty good approximation.

QED.

When the approximation is indistinguishable from observation over a time horizon exceeding a human lifetime, it's good enough for the purpose of "would a simulation of a human be intelligent by any definition that the real human also meets?"

Remember, this is claiming to be a mathematical proof, not a practical one, so we don't even have to bother with details like "a classical computer approximating to this degree and time horizon might collapse into a black hole if we tried to build it".


> Citation needed. If you've spent any time with dynamical systems, as an example, you'd know that the computer basically only kind of crudely estimates things, and only things that are abstractly nearby. You may be able to write down some PDEs or field equations that may describe things at some base level, but even statistical mechanics, which is really what governs a huge amount of what we see and interact with, is just a pretty good approximation. Computers (especially real ones) only generate approximate (to some value of alpha) answers; physics is not reducible to a computer program at all.

You're proving too much. The fact of the matter is that those crude estimations are routinely used to model systems.


1. I appreciate the comparison — but I’d argue this goes somewhat beyond the No Free Lunch theorem.

NFL says: no optimizer performs best across all domains. But the core of this paper doesn't talk about performance variability; it's about structural inaccessibility. Specifically, that some semantic spaces (e.g., heavy-tailed, frame-unstable, undecidable contexts) can't be computed or resolved by any algorithmic policy — no matter how clever or powerful. The model does not underperform here; the point is that the problem itself collapses the computational frame.

2. OMG, lool. ... just to clarify, there’s been a major misunderstanding :)

the “weight-question”-Part is NOT a transcript from my actual life... thankfully - I did not transcribe a live ChatGPT consult while navigating emotional landmines with my (perfectly slim) wife, then submit it to PhilPapers and now here…

So - NOT a real thread, - NOT a real dialogue with my wife... - just an exemplary case... - No, I am not brain dead and/or categorically suicidal!! - And just to be clear: I don't write this while sitting in some marital counseling appointment, or in my lawyer's office, the ER, or in a coroner's drawer

--> It’s a stylized, composite example of a class of decision contexts that resist algorithmic resolution — where tone, timing, prior context, and social nuance create an uncomputably divergent response space.

Again : No spouse was harmed in the making of that example.

;-))))


Just a layman here so I'm not sure if I'm understanding (probably not), but humans don't analyze every possible scenario ad infinitum; we go based on the accumulation of our positive/negative experiences from the past. We make decisions based on some self-construed goal and beliefs as to what goes towards those goals, and these are arbitrary with no truth. Napoleon, for example, conquered Europe perhaps simply because he thought he was the best to rule it, not through a long chain of questions and self-doubt

We are generally intelligent only in the sense that our reasoning/modeling capabilities allow us to understand anything that happens in space-time.


> Specifically, that some semantic spaces (e.g., heavy-tailed, frame-unstable, undecidable contexts) can't be computed or resolved by any algorithmic policy — no matter how clever or powerful. The model does not underperform here; the point is that the problem itself collapses the computational frame.

I see no proof this doesn’t apply to people


> the “weight-question”-Part is NOT a transcript from my actual life... thankfully - I did not transcribe a live ChatGPT consult while navigating emotional landmines with my (perfectly slim) wife, then submit it to PhilPapers and now here…

You have wildly missed my point.

You do not need to even have a spouse in order to try asking an AI the same question. I am not married, and I was still able to ask it to respond to that question.

My point is that you clearly have not asked ChatGPT, because ChatGPT's behaviour clearly contradicts your claims about what AI would do.

So: what caused you to claim that AI would respond as you say it would, when the most well-known current-generation model clearly doesn't?


I read some of the paper, and it does seem silly to me to state this:

"But here’s the peculiar thing: Humans navigate this question daily. Not always successfully, but they do respond. They don’t freeze. They don’t calculate forever. Even stranger: Ask a husband who’s successfully navigated this question how he did it, and he’ll likely say: ‘I don’t know… I just… knew what to say in that moment....What’s going on here? Why can a human produce an answer (however imperfect) while our sophisticated AI is trapped in an infinite loop of analysis?” ’"

LLMs don't freeze either. In your science example too, we already have LLMs that give you very good answers to technical questions, so on what grounds is this infinite cascading search based?

I have no idea what you're saying here either: "Why can’t the AI make Einstein’s leap? Watch carefully:

• In the AI’s symbol set Σ, time is defined as ‘what clocks measure, universally’

• To think ‘relative time,’ you first need a concept of time that says: ‘flow of time varies when moving, although the clock ticks just the same as when not moving’

• ‘Relative time’ is literally unspeakable in its language

• ‘What if time is just another variable?’ means: ‘What if time is not time?’"

"AI’s symbol set Σ, time is defined as ‘what clocks measure-universally", it is? I don't think this is accurate of LLM's even, let alone any hypothetical AGI. Moreover LLM's clearly understand what "relative" means, so why would they not understand "relative time?".

In my hypothetical AGI, "time" would mean something like "When I observe something, and then things happens in between, and then I observe it again", and relative time would mean something like "How I measure how many things happen in between two things, is different from how you measure how many things happen between two things"


> As humans can be reduced to physics, and physics can be expressed as a computer program

This is an assumption that many physicists disagree with. Roger Penrose, for example.


That's true, but we should acknowledge that this question is generally regarded as unsettled.

If you accept the conclusion that AGI (as defined in the paper, that is, "solving [...] problems at a level of quality that is at least equivalent to the respective human capabilities") is impossible but human intelligence is possible, then you must accept that the question is settled in favor of Penrose. That's obviously beyond the realm of mathematics.

In other words, the paper can only mathematically prove that AGI is impossible under some assumptions about physics that have nothing to do with mathematics.


> then you must accept that the question is settled in favor of Penrose. That's obviously beyond the realm of mathematics.

Not necessarily. You are assuming (AFAICT) that we 1. have perfect knowledge of physics and 2. have perfect knowledge of how humans map to physics. I don't believe either of those is true though. Particularly 1 appears to be very obviously false, otherwise what are all those theoretical physicists even doing?

I think what the paper is showing is better characterized as a mathematical proof about a particular algorithm (or perhaps class of algorithms). It's similar to proving that the halting problem is unsolvable under some (at least seemingly) reasonable set of assumptions but then you turn around and someone has a heuristic that works quite well most of the time.


Where am I assuming that we have perfect knowledge of physics?

To make it plain, I'll break the argument in two parts:

(a) if AGI is impossible but humans are intelligent, then it must be the case that human behavior can't be explained algorithmically (that last part is Penrose's position).

(b) the statement that human behavior can't be explained algorithmically is about physics, not mathematics.

I hope it's clear that neither (a) or (b) require perfect knowledge of physics, but just in case:

(a) is true by reductio ad absurdum: if human behavior can be explained algorithmically, then an algorithm must be able to simulate it, and so AGI is possible.

(b) is true because humans exist in nature, and physics (not mathematics) is the science that deals with nature.

So where is the assumption that we have perfect knowledge of physics?


You didn't. I confused something but looking at the comment chain now I can't figure out what. I'd say we're actually in perfect agreement.


Penrose’s views on consciousness are largely considered quackery by other physicists.


Nobody should care what ANY physicists say about consciousness.

I mean seriously, what? I don't go asking my car mechanic about which solvents are best for extracting a polar molecule, or asking my software developer about psychology.


Yet somehow quantum woo is constantly evoked to explain consciousness.


"Many" is doing a lot of work here.


“This paper presents a theoretical proof that AGI systems will structurally collapse under certain semantic conditions…”

No it doesn’t.

Shannon entropy measures statistical uncertainty in data. It says nothing about whether an agent can invent new conceptual frames. Equating “frame changes” with rising entropy is a metaphor, not a theorem, so it doesn’t even make sense as a mathematical proof.

This is philosophical musing at best.


Correct: Shannon entropy originally measures statistical uncertainty over a fixed symbol space. When the system is fed additional information/data, entropy goes down and uncertainty falls. This is always true in situations where the possible outcomes are a) sufficiently limited and b) unequally distributed. In such cases, with enough input, the system can collapse the uncertainty function within a finite number of steps.

But the paper doesn’t just restate Shannon.

It extends this very formalism to semantic spaces where the symbol set itself becomes unstable. These situations arise when (a) entropy is calculated across interpretive layers (as in LLMs), and (b) the probability distribution follows a heavy-tailed regime (α ≤ 1). Under these conditions, entropy divergence becomes mathematically provable.

This is far from being metaphorical: it's backed by formal Coq-style proofs (see Appendix C in the paper).

AND: it is exactly the mechanism that can explain the Apple paper's results.
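
For what it's worth, here is a rough numerical sketch (my own simplification, not the Coq formalization) of the divergence claim: for a truncated power law p(k) proportional to k^(-alpha), the Shannon entropy keeps growing as the symbol space widens when alpha <= 1, but levels off when alpha = 2.

    import numpy as np

    def entropy_bits(alpha, n):
        k = np.arange(1, n + 1)
        p = k ** (-float(alpha))    # truncated power-law weights
        p /= p.sum()
        return -(p * np.log2(p)).sum()

    for n in (10**2, 10**4, 10**6):
        print(n, round(entropy_bits(1.0, n), 2), round(entropy_bits(2.0, n), 2))
    # entropy for alpha = 1.0 keeps climbing with n; alpha = 2.0 converges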


Your paper only claims that those Coq snippets constitute a "constructive proof sketch". Have those formalizations actually been verified, and if so, why not include the results in the paper?

Separately from that, your entire argument wrt Shannon hinges on this notion that it is applicable to "semantic spaces", but it is not clear on what basis this jump is made.


This sounds like a good argument why making the optimal decisions in every single case is undecidable, but not why an AGI should be unable to exist.


Unless you can prove that humans exceed the Turing computable - which would also require showing that the Church-Turing thesis isn't true - the headline is nonsense.

Since you don't even appear to have dealt with this, there is no reason to consider the rest of the paper.


> In plain language:

> No matter how sophisticated, the system MUST fail on some inputs.

Well, no person is immune to propaganda and stupidity, so I don't see it as a huge issue.


I have no idea how you believe this relates to the comment you replied to.


If I'm understanding correctly, they are arguing that the paper only requires that an intelligent system will fail for some inputs and suggest that things like propaganda are inputs for which the human intelligent system fails. Therefore, they are suggesting that the human intelligent system does not necessarily refute the paper's argument.


If so, then the paper's argument isn't actually trying to prove that AGI is impossible, despite the title, and the entire discussion is pointless.


But what then is the relevance of the study?


I suppose it disproves embodied, fully meat-space god if sound?


I'm looking at the title again and it seems wrong, because AGI ~ human intelligence. Unless human intelligence has non physical components to it.


I think OP answered the question here:

https://news.ycombinator.com/item?id=44349516


No, he didn't.

It's not even close to addressing the argument.


could you explain for a layman


I'm not sure if this will help, but happy to elaborate further:

The set of Turing computable functions is computationally equivalent to the lambda calculus, which is computationally equivalent to the general recursive functions. You don't need to understand those terms, only to know that these functions define the set of functions we believe to include all computable functions. (There are functions that we know to not be computable, such as a general solution to the halting problem.)

That is, we don't know of any possible way of defining a function that can be computed that isn't in those sets.

This is basically the Church-Turing thesis: That a function on the natural numbers can be effectively computable (note: this has a very specific meaning, it's not about performance) only if it is computable by a Turing machine.

Now, any Turing machine can simulate any other Turing machine. Possibly in a crazy amount of time, but still.

The brain is at least a Turing machine in terms of computability if we treat "IO" (speaking, hearing, for example) as the "tape" (the medium of storage in the original description of the Turing machine). We can prove this, since the smallest universal Turing machine is a trivial machine with 2 states and 3 symbols that any moderately functional human is capable of "executing" with pen and paper.

(As an aside: It's almost hard to construct a useful computational system that isn't Turing complete; "accidental Turing completeness" regularly happens, because it is very trivial to end up with a Turing complete system)

An LLM with a loop around it and temperature set to 0 can trivially be shown to be able to execute the same steps, using context as input and the next token as output to simulate the tape, and so such a system is Turing complete as well.

(Note: In both cases, this could require a program, but since for any Turing machine of a given size we can "embed" parts of the program by constructing a more complex Turing machine with more symbols or states that encode some of the actions of the program, such a program can inherently be embedded in the machine itself by constructing a complex enough Turing machine)
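
As a concrete toy (my own sketch, not tied to any particular LLM API): the loop below runs a deterministic transition table over an unbounded tape, which is exactly the role a temperature-0 "next token" function plus context would play. The rules here are the classic 2-state, 2-symbol busy beaver, not the 2-state/3-symbol machine mentioned above.

    from collections import defaultdict

    DELTA = {  # (state, symbol) -> (write, move, next_state)
        ("A", 0): (1, +1, "B"),
        ("A", 1): (1, -1, "B"),
        ("B", 0): (1, -1, "A"),
        ("B", 1): (1, +1, "HALT"),
    }

    def run(max_steps=100):
        tape, head, state = defaultdict(int), 0, "A"
        for _ in range(max_steps):
            if state == "HALT":
                break
            write, move, state = DELTA[(state, tape[head])]
            tape[head] = write
            head += move
        return dict(tape)

    print(run())   # four 1s on the tape after six steps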

Assuming we use a definition of intelligence that a human will meet, then because all Turing machines can simulate each other, then the only way of showing that an artificial intelligence can not theoretically be constructed to at least meet the same bar is by showing that humans can compute more than the Turing computable.

If we can't then "worst case" AGI can be constructed by simulating every computational step of the human brain.

Any other argument about the impossibility of AGI inherently needs to contain something that disproves the Church-Turing thesis.

As such, it's a massive red flag when someone claims to have a proof that AGI isn't possible, but haven't even mentioned the Church-Turing thesis.


> then the only way of showing that an artificial intelligence can not theoretically be constructed to at least meet the same bar is by showing that humans can compute more than the Turing computable.

I would reframe: the only way of showing that artificial intelligence can be constructed is by showing that humans cannot compute more than the Turing computable.

Given that Turing computable functions are a vanishingly small subset of all functions, I would posit that that is a rather large hurdle to meet. Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.


Given that we know of no computable function that isn't Turing computable, and the set of Turing computable functions is known to be equivalent to the lambda calculus and equivalent to the set of general recursive functions, what is an immensely large hurdle would be to show even a single example of a computable function that is not Turing computable.

If you can do so, you'd have proven Turing, Kleene, Church, and Gödel wrong, and disproven the Church-Turing thesis.

No such example is known to exist, and no such function is thought to be possible.

> Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.

1/3 symbolically represents an infinite process. The notion that a finite alphabet can't describe infinity is trivially flawed.


Function != Computable Function / general recursive function.

That's my point - computable functions are a [vanishingly] small subset of all functions.

For example (and close to our hearts!), the Halting Problem. There is a function from valid programs to halt/not-halt. This is clearly a function, as it has a well defined domain and co-domain, and produces the same output for the same input. However it is not computable!

For sure a finite alphabet can describe an infinity as you show - but not all infinity. For example almost all Real numbers cannot be defined/described with a finite string in a finite alphabet (they can of course be defined with countably infinite strings in a finite alphabet).


Non-computable functions are not relevant to this discussion, though, because humans can't compute them either, and so inherently an AGI need not be able to compute them.

The point remains that we know of no function that is computable to humans that is not in the Turing computable / general recursive function / lambda calculus set, and absent any indication that any such function is even possible, much less an example, it is no more reasonable to believe humans exceed the Turing computable than that we're surrounded by invisible pink unicorns, and the evidence would need to be equally extraordinary for there to be any reason to entertain the idea.


Humans do a lot of stuff that is hard to 'functionalise', computable or otherwise, so I'd say the burden of proof is on you. What's the function for creating a work of art? Or driving a car?


You clearly don't understand what a function means in this context, as the word function is not used in this thread in the way you appear to think it is used.

For starters, to have any hope of having a productive discussion on this subject, you need to understand what "function" mean in the context of the Church-Turing thesis (a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine -- note that not just "function" has a very specific meaning there, but also "effective method" does not mean what you're likely to read into it).


My original reframing was: the only way of showing that artificial intelligence can be constructed is by showing that humans cannot compute more than the Turing computable.

I was assuming the word 'compute' to have broader meaning than Turing computable - otherwise that statement is a tautology of course.

I pointed out that Turing computable functions are a (vanishingly) small subset of all possible functions - of which some may be 'computable' outside of Turing machines even if they are not Turing computable.

An example might be the three-body problem, which has no general closed-form solution, meaning there is no equation that always solves it. However our solar system seems to be computing the positions of the planets just fine.

Could it be that human sapience exists largely or wholly in that space beyond Turing computability? (by Church-Turing thesis the same as computable by effective method, as you point out). In which case your AGI project as currently conceived is doomed.


You don't need a closed-form solution to calculate trajectories with more precision than you can prove the universe uses.


I mean AI can already do those things


Compute functions != Intelligence though.

For example learning from experience (which LLMs cannot do because they cannot experience anything and they cannot learn) is clearly an attribute of an intelligent machine.

LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.


> Compute functions != Intelligence though.

If that is true, you have a proof that the Church-Turing thesis is false.

> LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.

For this to be relevant, you'd need to show that there are possible sensory inputs that can't be simulated to a point where the "brain" in question - be it natural or artificial - can't tell the difference.

Which again, would boil down to proving the Church-Turing thesis wrong.


>If that is true, you have a proof that the Church-Turing thesis is false.

We're talking the physical version right? I don't have any counter examples that I can describe, but I could hold that that's because human language, perception and cognition cannot capture the mechanisms that are necessary to produce them.

But I won't as that's cheating.

Instead I would say that although I can't disprove PCT, it's not proven either, and unlike other unproven things like P!=NP, this is about physical systems. Some people think that all of physical reality is discrete (quantized); if they are right, then PCT could be true. However, I don't think this is so, as I think it means that you have to consider time as unreal, and I think that's basically as crazy as denying consciousness and free will. I know that a lot of physicists are very clever, but those of them that have lost the sense to differentiate between a system for describing parts of the universe and a system that defines the workings of the universe as we cannot comprehend it are, in my experience, not good at parties.

>For this to be relevant, you'd need to show that there are possible sensory inputs that can't be simulated to a point where the "brain" in question - be it natural or artificial - can't tell the difference.

I dunno what you mean by "relevant" here - you seem to be denying that there is a difference between reality and unreality? Like a Super Cartesian idea where you say that not only is the mind separate from the body but that the existence of bodies or indeed the universe that they are instantiated in is irrelevant and doesn't matter?

Wild. Kinda fun, but wild.

I stand by my point though, computing functions about how molecules interact with each other and lead to the propagation of signals along neural pathways to generate qualia is only the same as tasting beer if the qualia are real. I don't see that there is any account of how computation can create a feeling of reality or what it is like to. At some point you have to hit the bottom and actually have an experience.


I think that may depend on how someone defines intelligence. For example, if intelligence includes the ability to feel emotion or appreciate art, then I think it becomes much more plausible that intelligence is not the same as computation.

Of course, simply stating that isn't in and of itself a philosophically rigorous argument. However, given that not everyone has training in philosophy and it may not even be possible to prove whether "feeling emotion" can be achieved via computation, I think it's a reasonable argument.


I think if they define intelligence that way, it isn't a very interesting discussion, because we're back to Church-Turing: Either they can show that this actually has an effect on the ability to reason and the possible outputs of the system that somehow exceeds the Turing computable, or those aspects are irrelevant to an outside observer of said entity because the entity would still be able to act in exactly the same way.

I can't prove that you have a subjective experience of feeling emotion, and you can't prove that I do - we can only determine that either one of us acts as if we do.

And so this is all rather orthogonal to how we define intelligence, as whether or not a simulation can simulate such aspects as "actual" feeling is only relevant if the Church-Turing thesis is proven wrong.


There are lots and lots of things that we can't personally observe about the universe. For example, it's quite possible that everyone in New York is holding their breath at the moment. I can't prove that either way, or determine anything about that but I accept the reports of others that no mass breath holding event is underway... and I live my life accordingly.

On the other hand many people seem unwilling to accept the reports of others that they are conscious and have freedom of will and freedom to act. At the same time these people do not live as if others were not conscious and bereft of free will. They do not watch other people murdering their children and state "well they had no choice". No they demand that the murderers are punished for their terrible choice. They build systems of intervention to prevent some choices and promote others.

It's not orthogonal, it's the motivating force for our actions and changes our universe. It's the heart of the matter, and although it's easy to look away and focus on other parts of the problems of intelligence at some point we have to turn and face it.


Church-Turing doesn't touch upon intelligence nor consciousness. It talks about "effective procedures". It claims that every effectively computable thing is Turing computable. And effective procedures are such that "Its instructions need only to be followed rigorously to succeed. In other words, it requires no ingenuity to succeed."

Church-Turing explicitly doesn't touch upon ingenuity. It's very well compatible with Church-Turing that humans are capable of some weird decision making that is not modelable with the Turing machine.


> > Compute functions != Intelligence though.

> If that is true, you have a proof that the Church-Turing thesis is false.

Well, depends on how. I think being able to compute (arbitrary) functions is much more than is necessary for intelligence.


What program would a Turing machine run to spontaneously prove the incompleteness theorem?

Can you prove such a program may exist?


Assuming the Church-Turing thesis is true, the existence of any brain now or in the past capable of proving it is proof that such a program may exist.

If the Church-Turing thesis can be proven false, conversely, then it may be possible that such a program can't exist - it is a necessary but not sufficient condition for the Church-Turing thesis to be false.

Given we have no evidence to suggest the Church-Turing thesis to be false, or for it to be possible for it to be false, the burden falls on those making the utterly extraordinary claim that they can't exist to actually provide evidence for those claims.

Can you prove the Church-Turing thesis false? Or even give a suggestion of what a function that might be computable but not Turing computable would look like?

Keep in mind that explaining how to compute a function step by step would need to contain at least one step that can't be explained in a way that allows the step to be computed by a Turing machine, or the explanation itself would instantly disprove your claim.

The very notion is so extraordinary as to require truly extraordinary proof and there is none.

A single example of a function that is not Turing computable that human intelligence can compute should be low burden if we can exceed the Turing computable.

Where are the examples?


> Assuming the Church-Turing thesis is true, the existence of any brain now or in the past capable of proving it is proof that such a program may exist.

Doesn't that assume that the brain is a Turing machine or equivalent to one? My understanding is that the exact nature of the brain and how it relates to the mind is still an open question.


That is exactly the point.

If the Church-Turing thesis is true, then the brain is a Turing machine / Turing equivalent.

And so, assuming Church-Turing is true, then the existence of the brain is proof of the possibility of AGI, because any Turing machine can simulate any other Turing machine (possibly too slowly to be practical, but it denies its impossibility).

And so, any proof that AGI is "mathematically impossible" as the title claims, is inherently going to contain within it a proof that the Church-Turing thesis is false.

In which case there should be at least one example of a function a human brain can compute that a Turing machine can't.


> If the Church-Turing thesis is true, then the brain is a Turing machine / Turing equivalent

This is simply technically not true. You can look up https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis and see that it does not talk about brains or intelligence.



Given what I see in these discussions, I suspect your use of the word "spontaneously" is a critical issue for you, but also not for me.

None of us exist in a vacuum*, we all react to things around us, and this is how we come to ask questions such as those that led Gödel to the incompleteness theorems.

On the other hand, for "can a program prove it?", this might? I don't know enough Lean (or this level of formal mathematics) myself to tell if this is a complete proof or a WIP: https://github.com/FormalizedFormalLogic/Incompleteness/blob...

* unless we're Boltzmann brains, in which case we have probably hallucinated the existence of the question in addition to all evidence leading to our answer


An accurate-enough physical simulation of Kurt Gödel's brain.

Such a program may exist - unless you think such a simulation of a physical system is uncomputable, or that there is some non-physical process going on in that brain.


I'm wondering if you may have rediscovered the concept of "Wicked Problems", which have been studied in system analysis and sociology since the 1970's (I'd cite the Wikipedia page, but I've never been particularly fond of Wikipedia's write up on them). They may be worth reading up on if you're not familiar with them.


It's interesting. The question from the paper "Darling, please be honest: have I gained weight?" assumes that the "social acceptability" of the answer should be taken into account. In this case the problem fits the "Wickedness" (Wikipedia's quote is "Classic examples of wicked problems include economic, environmental, and political issues"). But taken formally, and with the ability for the LLM to ask questions in return to decrease formal uncertainty ("Please give me several full photos of yourself from the past year to evaluate"), it is not "wicked" at all. This example alone makes the topic very uncertain in itself


Wow, that is great advice. Never heard of them - and they seem to fit perfectly into the whole concept. THANK YOU! :-)


In your paper it states:

AGI as commonly defined

However I don’t see where you go on to give a formalization of “AGI” or what the common definition is.

Can you do that in a mathematically rigorous way, such that it's a testable hypothesis?


I don't think it exists. We can't even seem to agree on a standard criteria for "intelligence" when assessing humans let alone a rigorous mathematical definition. In turn, my understanding of the commonly accepted definition for AGI (as opposed to AI or ML) has always been "vaguely human or better".

Unless the marketing department is involved in which case all bets are off.


It can exist for the purpose of the paper. As in "when I write AGI, I mean ...". Otherwise what's the point in any rigour if we're just going by "you know what I mean" vibes.


Apple's paper sets up a bit of a straw man in my opinion. It's unreasonable to expect that an LLM not trained on what are essentially complex algorithmic tasks is just going to discover the solution on the spot. Most people can solve simple cases of the tower of Hanoi, and almost none of us can solve complex cases. In general, the ones who can have trained to be able to do so.


> specific problem classes (those with α ≤ 1),

For the layman, what does α mean here?


I'm sure this is a reference to alpha stable distributions: https://en.m.wikipedia.org/wiki/Stable_distribution

Most of these don't have finite moments and are hard to do inference on with standard statistical tools. Nassim Taleb's work (Black Swan, etc.) is around these distributions.
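
A quick way to see why (my own illustration, using a Pareto tail as a stand-in): with tail exponent a <= 1 the sample mean never settles down no matter how much data you collect, while for a larger exponent it converges.

    import numpy as np

    rng = np.random.default_rng(0)
    for a in (0.8, 3.0):                     # infinite mean vs. finite mean
        x = rng.pareto(a, size=1_000_000) + 1.0
        print(a, [round(x[:n].mean(), 1) for n in (10**3, 10**4, 10**5, 10**6)])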

But I think the argument of OP in this section doesn't hold.


Does this include the case where the AI can devise new components and use drones and such to build a new, more capable iteration of itself, and keep repeating this - going out into the universe as needed for resources, using von Neumann probes, etc.?

