While trying to fall asleep, I spent a fun half hour pondering whether reasoning AIs are Turing-complete. In a nutshell, they would be if they supported conditional branching (if–then–else), arbitrary loops (while–do), and unbounded memory. That would allow them to express and perform any computation a general-purpose computer can. For example, it would become possible to express an AI inside an AI and let it run there.
At first glance, the answer seems obvious: of course you can express conditions in an AI. For instance, take “Let x be 3. If x is larger than 2, then say `larger`, else say `smaller or equal`”. ChatGPT-5 gives the correct answer, `larger`, albeit in a rather verbose way. Similarly, “Let x be 1. While x is smaller than 4: increment x by 1 and print x.” produces the correct sequence (again, with some unnecessary chatter).
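For comparison, here are the deterministic equivalents of those two prompts in ordinary code — this is just an illustration of what the AI is being asked to emulate:

```python
# The branching prompt: "Let x be 3. If x is larger than 2, ..."
x = 3
print("larger" if x > 2 else "smaller or equal")  # prints: larger

# The loop prompt: "Let x be 1. While x is smaller than 4: increment and print."
x = 1
while x < 4:
    x += 1
    print(x)  # prints 2, then 3, then 4
```

Run by a computer, these produce the same output every single time — which is exactly the property under examination.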
Looking closer, though, one has to conclude that for conditional branching to work predictably, the condition must always return the same result. Unfortunately, that is not the case. An AI may happily give you 20 different answers to the same question when you expected just one. Some answers may include hallucinations. Others may be influenced by assumptions based on previous exchanges. So when running conditions more complex than “> 2,” you risk different, hence unpredictable, outcomes. The same applies to conditional loops.
The final nail in the coffin is memory. An AI does not have unlimited memory: when the context gets too large, it will forget. In fact, context windows are the single most limiting factor I have encountered in extensive work with AIs. As a side note, unconditional (endless) loops eventually fill up the context window and push out older information. Forget about implementing an event loop inside your AI GUI …
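The forgetting effect can be mimicked with a fixed-size buffer. This is a toy model — real context windows count tokens, not messages — but the failure mode is the same:

```python
from collections import deque

# Toy context window holding at most 3 messages: once full,
# the oldest entry is silently dropped -- the model "forgets" it.
ctx = deque(maxlen=3)
for msg in ["A", "B", "C", "D"]:
    ctx.append(msg)

print(list(ctx))  # prints: ['B', 'C', 'D'] -- 'A' has been pushed out
```

An endless loop in this model keeps appending and evicting forever, which is precisely why it eventually pushes out the very instructions that started it.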
So, in summary, the disappointing answer is: No, AIs are not Turing-complete. (As if you hadn’t guessed—just try something like “Fix my code until all tests pass without errors.”)
Can we do better? To some extent, yes—but only with the help of external tools. For example, nothing prevents you from embedding small AI agent runs inside a script that, in the example above, checks for failing tests and instructs the AI to fix them one by one. Tasks must be small enough to fit within the fixed-size context window, but as long as that holds, the lack of unlimited memory doesn’t matter.
In short: break your problems into smaller chunks, solve them individually (divide and conquer), and add quality gates to minimize hallucinations.
This does not make the AI–scripting combo Turing-complete—but it goes a long way toward producing better results.
Have fun,
Rüdiger