<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Posts</title>
    <link>https://jurij-jukic.github.io/posts/</link>
    <description>Recent content in Posts</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 30 Apr 2026 00:00:00 +0200</lastBuildDate>
    <atom:link href="https://jurij-jukic.github.io/posts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>[Paper] Key Search Might Explain Neural Network Training</title>
      <link>https://jurij-jukic.github.io/posts/003-key-search-paper/</link>
      <pubDate>Thu, 30 Apr 2026 00:00:00 +0200</pubDate>
      <guid>https://jurij-jukic.github.io/posts/003-key-search-paper/</guid>
      <description></description>
    </item>
    <item>
      <title>[Video] Brief History of Complexity; Logical Depth and Neural Networks</title>
      <link>https://jurij-jukic.github.io/posts/002-logical-depth-video/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0200</pubDate>
      <guid>https://jurij-jukic.github.io/posts/002-logical-depth-video/</guid>
      <description>&lt;p&gt;I go through the history of complexity: entropy, Kolmogorov complexity, Levin complexity, minimum description length, epiplexity, logical depth, and multiscale logical depth. I compare program length, runtime, and precision aspects of these theories. I relate them to neural network training dynamics and conjecture that logical depth is the most useful one.&lt;/p&gt;
&lt;div style=&#34;position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;&#34;&gt;
      &lt;iframe allow=&#34;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen&#34; loading=&#34;eager&#34; referrerpolicy=&#34;strict-origin-when-cross-origin&#34; src=&#34;https://www.youtube.com/embed/boO9bT-yhco?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0&#34; style=&#34;position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;&#34; title=&#34;YouTube video&#34;&gt;&lt;/iframe&gt;
    &lt;/div&gt;</description>
    </item>
    <item>
      <title>[Post] Logical Depth as a Framework for Understanding Neural Networks</title>
      <link>https://jurij-jukic.github.io/posts/001-logical-depth-neural-networks/</link>
      <pubDate>Fri, 20 Mar 2026 00:00:00 +0100</pubDate>
      <guid>https://jurij-jukic.github.io/posts/001-logical-depth-neural-networks/</guid>
      <description>&lt;p&gt;In this post I briefly introduce how Charles Bennett&amp;rsquo;s logical depth can serve as a general framework for understanding neural networks. I find it incredibly rich, and it can be applied in many interesting ways when thinking about the training pipeline and interpretability.&lt;/p&gt;
&lt;h2 id=&#34;intro---entropy-and-kolmogorov-complexity&#34;&gt;&lt;strong&gt;Intro - entropy and Kolmogorov complexity&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;If one were to try to mathematically describe a neural network, neither entropy nor Kolmogorov complexity seems sufficient.&lt;/p&gt;
&lt;p&gt;Entropy can be described as a measure of average surprise. Order is, on average, unsurprising because, in a physics metaphor, we can easily predict where the particles are located. Disorder is, on average, surprising because we can never really predict where the next particle will show up. Neither end of the entropy spectrum, order or disorder, seems to capture the intelligence contained within a neural network.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
