[Paper] Key Search Might Explain Neural Network Training

April 30, 2026 · Jurij Jukić

[Video] Brief History of Complexity; Logical Depth and Neural Networks

I go through the history of complexity measures: entropy, Kolmogorov complexity, Levin complexity, minimum description length, epiplexity, logical depth, and multiscale logical depth. I compare these theories along the axes of program length, runtime, and precision, relate them to neural network training dynamics, and conjecture that logical depth is the most useful of them.

April 24, 2026 · Jurij Jukić

[Post] Logical Depth as a Framework for Understanding Neural Networks

In this post I briefly introduce how Charles Bennett’s logical depth can serve as a general framework for understanding neural networks. I find it incredibly rich, and one can apply it in many interesting ways when thinking about the training pipeline and interpretability.

Intro - entropy and Kolmogorov complexity

If one were to try to mathematically describe a neural network, neither entropy nor Kolmogorov complexity seems sufficient. Entropy can be described as a measure of average surprise. Order is, on average, very unsurprising, because we can easily predict where the particles are located (assuming a physics metaphor). Disorder is, on average, surprising, because we can never really predict where the next particle will show up. Neither end of the entropy spectrum, order or disorder, seems to capture the intelligence contained within a neural network. ...
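To make "average surprise" concrete, here is the textbook Shannon entropy written as expected surprisal; this is a standard definition added for illustration, not a formula taken from the post itself:

```latex
% Shannon entropy: the expected surprisal of a random variable X
% with distribution p, where the surprisal of outcome x is -\log p(x).
H(X) \;=\; \mathbb{E}\!\left[-\log p(X)\right] \;=\; -\sum_{x} p(x)\,\log p(x)
% Both extremes are easy to characterize: a fully ordered source
% (a single outcome with p = 1) gives H = 0, while maximal disorder
% (uniform over n outcomes) gives H = \log n. Neither extreme value
% distinguishes structured, computation-laden objects such as trained
% network weights, which is the gap the post's argument points at.
```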

March 20, 2026 · Jurij Jukić