We are entering a time in which agentic AIs can learn from their mistakes and correct themselves. For this to work, they require a steady supply of _fresh_ data, so they do not depend on a monoculture of AI-generated pseudo-knowledge. AIs excel at making connections and predictions when high-quality data is available. The creation of genuinely new knowledge, however, usually requires external input. If most of the available data originates from other AIs, or from the AI itself, the result is a form of inbreeding in which errors and biases compound instead of being corrected.
In software engineering, by contrast, there is a clear metric for success or failure, especially when development follows a test-driven workflow with proper requirements analysis and verification & validation. Code either compiles, or it does not; tests either fail, or they succeed. On this basis, an agentic AI can reliably correct code until it compiles, and with some guidance the AI will also fix failing tests—or the code that caused them to fail. To the best of my knowledge, however, such failures are not yet used for re-training.
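As a rough sketch of what such a compile-and-test feedback loop can look like in practice: the snippet below runs a test suite, and on failure hands the log to an agent step for a proposed fix, repeating until the suite is green or a budget is exhausted. The pytest runner, the attempt budget, and the `propose_and_apply_fix` placeholder are my assumptions for illustration, not a description of any particular tool.

```python
import subprocess

MAX_ATTEMPTS = 5  # arbitrary budget for this sketch


def run_tests() -> tuple[bool, str]:
    """Run the test suite and report (passed, combined output).

    Assumes a pytest-based project; any runner that signals failure
    through a nonzero exit code works the same way.
    """
    result = subprocess.run(
        ["pytest", "-q", "--tb=short"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr


def propose_and_apply_fix(failure_log: str) -> bool:
    """Placeholder for the agent step.

    A real implementation would send the failure log to a model and
    apply the patch it proposes; both the prompt and the patching
    mechanism are assumptions here, not a specific vendor API.
    """
    print("Would hand this failure log to the model:\n", failure_log[:500])
    return False  # no fix is actually applied in this sketch


def repair_loop() -> bool:
    """Feed test failures back to the agent until the suite is green
    or the attempt budget runs out."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        passed, log = run_tests()
        if passed:
            print(f"Tests green after {attempt - 1} fix attempt(s).")
            return True
        print(f"Attempt {attempt}: tests failing, asking the agent for a fix.")
        if not propose_and_apply_fix(log):
            print("No fix applied; stopping.")
            return False
    return False


if __name__ == "__main__":
    repair_loop()
```

A loop like this also naturally collects every failure it encounters, which is exactly the kind of signal that could one day feed back into re-training.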
Other knowledge domains, being more open to interpretation than software engineering, lack such straightforward correction mechanisms. Ensuring that an AI does not learn solely from an echo chamber is therefore of the utmost importance. If this simple principle is respected, AI can be your ally, not your adversary.
Contact me if you need advice on these topics.
Have fun, Rüdiger
