Learning Dynamics over Frozen LLMs
The same IBF engine instantiated on a frozen Mistral-7B. Knowledge injection, retention through continued learning, and selective revision under contradiction, all without modifying base model parameters.
The Thinker Initiative is a Romanian program in fundamental artificial intelligence research, uniting experts from the Informational Buildup Foundation, the Simion Stoilow Institute of Mathematics of the Romanian Academy, and the Institute of Logic and Data Science. Its researchers span formal logic, algebra, category and institution theory, machine learning, and stochastic processes. Entrepreneurially driven.
To research and build AI models that are safer, more reliable, and more efficient than the current paradigm. To build The Thinker.
Hallucination, catastrophic forgetting, fragile generalization, the persistent inability to make alignment intrinsic. The field treats these as separate problems requiring separate fixes. They are not. They are symptoms of a single pathology: the substrate.
In dominant architectures, knowledge lives as global parameter superposition. The same weights must simultaneously encode new information, preserve old structure, and govern conflicting demands. Update one concept and the entire configuration shifts. Destructive interference is not an accident. It is a mathematical inevitability.
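This interference is easy to reproduce in miniature. The sketch below is purely illustrative (nothing in it is IBF's machinery): two concepts with correlated distributed encodings share one weight vector; training the weights on concept A to convergence, then on concept B alone, corrupts what was learned about A, because every update to B moves the same shared parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concepts with overlapping (non-orthogonal) distributed encodings.
d = 32
a = rng.normal(size=d)
a /= np.linalg.norm(a)
b = 0.8 * a + 0.6 * rng.normal(size=d) / np.sqrt(d)  # correlated with a
b /= np.linalg.norm(b)

w = np.zeros(d)             # one shared parameter vector for both concepts
target_a, target_b = 1.0, -1.0
lr = 0.1

# Phase 1: learn concept A to convergence.
for _ in range(200):
    w += lr * (target_a - w @ a) * a
err_a_before = abs(target_a - w @ a)

# Phase 2: learn only concept B -- every step shifts the shared weights.
for _ in range(200):
    w += lr * (target_b - w @ b) * b
err_a_after = abs(target_a - w @ a)

print(f"error on A before training B: {err_a_before:.4f}")
print(f"error on A after  training B: {err_a_after:.4f}")
```

Because the encodings overlap, the error on concept A jumps from near zero to a large value after phase 2: interference follows directly from the shared substrate, not from any flaw in the learning rule.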
Scaling postpones the collapse. Regularization softens it. Replay patches the amnesia. None of them cures the underlying physics. The challenge is not how to patch the architecture. It is how to replace the substrate.
To build the Thinker, we are developing a new computational substrate: the Informational Buildup Framework.
IBF starts from a single premise: information is not data. It is the achievement of structural alignment between a system's internal configuration and the structure of its environment. From this premise arises a generative framework in which memory, agency, and self-correction are not engineered as separate modules but emerge as derived consequences of the governing dynamics. Corrections are spatially localized. Knowledge that proves consistent across contexts crystallizes. Knowledge that is contradicted dissolves. What remains is structure that earned its place.
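The crystallize/dissolve dynamic can be caricatured in a few lines. The toy below is an invented illustration, not IBF's actual mechanism: every name, threshold, and scoring rule here is hypothetical. Each stored claim carries a support score; consistent observations raise it until the claim crystallizes, contradictions lower it until the claim dissolves and is removed.

```python
from dataclasses import dataclass, field

# Illustrative toy only: thresholds and update rule are invented, not IBF's.
CRYSTALLIZE_AT = 3   # confirmations needed before a claim counts as crystallized
DISSOLVE_AT = -2     # contradiction balance at which a claim dissolves

@dataclass
class MemoryStore:
    entries: dict = field(default_factory=dict)  # claim -> support score

    def observe(self, claim: str, consistent: bool) -> None:
        score = self.entries.get(claim, 0) + (1 if consistent else -1)
        if score <= DISSOLVE_AT:
            self.entries.pop(claim, None)   # contradicted knowledge dissolves
        else:
            self.entries[claim] = score

    def crystallized(self) -> list[str]:
        # knowledge that proved consistent across enough contexts
        return [c for c, s in self.entries.items() if s >= CRYSTALLIZE_AT]

m = MemoryStore()
for _ in range(4):
    m.observe("water boils at 100C", consistent=True)
for _ in range(6):
    m.observe("the earth is flat", consistent=False)

print(m.crystallized())                   # -> ['water boils at 100C']
print("the earth is flat" in m.entries)   # -> False
```

The point of the caricature is the asymmetry: retention is earned through repeated consistency, and revision is local, deleting one contradicted entry without touching the rest of the store.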
Introduces the Informational Buildup Framework and validates its continual learning dynamics across three domains: a controlled non-stationary environment, chess under independent Stockfish evaluation, and Split-CIFAR-100 across 20 sequential tasks. In all three, IBF achieves replay-superior retention without storing raw data.
Removing explicit context transitions. Memories crystallize, fall silent, reactivate, and dissolve under rolling discrepancy statistics alone.
Multiple IBF agents in a shared world where each agent's discrepancy structure is partly generated by the others. Testing whether coordination emerges without an externally imposed objective.
The complete axiomatic, postulatory, and proof-level structure. The first paper states only the minimal formal machinery needed for the continual learning arc. The full theory is substantially larger.
Radu Negulescu · radu@ibf.xyz