<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.0">Jekyll</generator><link href="https://www.frootlab.org/feed/science.xml" rel="self" type="application/atom+xml" /><link href="https://www.frootlab.org/" rel="alternate" type="text/html" /><updated>2022-01-18T11:03:59+00:00</updated><id>https://www.frootlab.org/feed/science.xml</id><title type="html">Frootlab | Science</title><subtitle>Learn more about automated collaborative data science at the homepage and corporate blog of the Frootlab Organization and the Vivid Code framework</subtitle><entry><title type="html">AI Revolution: Deep Learning and what’s next?</title><link href="https://www.frootlab.org/blog/science/19120-a-review-on-deep-learning.html" rel="alternate" type="text/html" title="AI Revolution: Deep Learning and what’s next?" /><published>2019-04-30T00:00:00+00:00</published><updated>2019-04-30T00:00:00+00:00</updated><id>https://www.frootlab.org/blog/science/a-review-on-deep-learning</id><content type="html" xml:base="https://www.frootlab.org/blog/science/19120-a-review-on-deep-learning.html">&lt;p&gt;&lt;strong&gt;When talking about the “AI Revolution” it’s difficult to narrow down a common
denominator. This is not only because science fiction didn’t prepare us for our
first real encounters with AI, but also due to the many and varied reactions
it provokes, ranging from hope to fear.&lt;/strong&gt;&lt;/p&gt;

&lt;!--more--&gt;

&lt;p&gt;The AI Revolution is nothing more and nothing less than a rite of passage.
But to know where this journey takes us, we first need to know where it
started. After roughly a decade of deep learning, it is time to take stock of
progress and review some of the most important milestones and remaining
challenges.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/images/posts/AI-Revolution.png&quot;&gt;&lt;img src=&quot;/images/posts/AI-Revolution.png&quot; alt=&quot;AI Revolution&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;big-bang&quot;&gt;Big-Bang!&lt;/h2&gt;

&lt;p&gt;The advent of deep learning can be traced back to Geoffrey Hinton’s daredevil
Science article &lt;a href=&quot;https://www.cs.toronto.edu/~hinton/science.pdf&quot; target=&quot;_blank&quot;&gt;“Reducing the dimensionality of data with neural
networks”&lt;/a&gt;
(Hinton &amp;amp; Salakhutdinov 2006). Its contents can be summarized in two essential
observations:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Certain undirected graphical models, termed Restricted Boltzmann Machines
(RBM), can efficiently&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; be trained to represent data by maximizing their
likelihood.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;These RBMs can be stacked together to “pre-train” deep Artificial Neural
Networks (ANN), which in a subsequent “fine-tuning” step generally attain much
better solutions than without pre-training (a sketch of both steps follows
this list).&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;
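
&lt;p&gt;To make both observations concrete, here is a minimal sketch of
contrastive-divergence training and greedy layer-wise stacking, assuming
binary data and plain NumPy. The names (RBM, cd1_step, pretrain_stack) and the
one-step gradient approximation are illustrative simplifications, not the
article’s exact recipe:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)  # visible bias
        self.c = np.zeros(n_hidden)   # hidden bias

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.1):
        # One step of contrastive divergence: the bipartite structure lets
        # us sample a whole layer in parallel (observation 1).
        h0 = self.hidden_probs(v0)
        h_sample = rng.binomial(1, h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        # Approximate likelihood gradient: data term minus model term.
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (h0 - h1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=50):
    # Greedy pre-training (observation 2): each RBM is trained on the
    # hidden activations of the RBM below it.
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # feed the representation upward
    return rbms  # these weights would then initialize a deep ANN

# Toy usage: 100 binary samples of dimension 20, pre-training a 20-10-5 stack.
data = rng.binomial(1, 0.5, size=(100, 20)).astype(float)
stack = pretrain_stack(data, [10, 5])
&lt;/code&gt;&lt;/pre&gt;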

&lt;p&gt;These points are not as dry as they appear: Essentially, they mean that
almost anything&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; can be predicted, given appropriate data! Data scientists
do not usually have a reputation for exuberance, but some tears of joy must
have been shed at this discovery!&lt;/p&gt;

&lt;h2 id=&quot;the-bayesian-dbm&quot;&gt;The Bayesian: DBM&lt;/h2&gt;

&lt;p&gt;Although Hinton’s article definitively paved the way for deep ANNs, it
neither explained why pre-training works, nor provided a mathematical
framework to describe it. Undeterred by these shortcomings, a group around
Guillaume Desjardins greatly improved Hinton’s approach by welding the stack
of RBMs into a single Deep Boltzmann Machine (DBM).&lt;/p&gt;

&lt;p&gt;Their article &lt;a href=&quot;https://arxiv.org/abs/1203.4416&quot; target=&quot;_blank&quot;&gt;“On Training Deep Boltzmann
Machines”&lt;/a&gt; (Desjardins,
Courville, Bengio 2012) provides a gradient-based update rule for the
simultaneous, effective&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; training of stacked RBMs, and therefore avoids the
losses caused by stacking them. The stacked RBMs, now a single DBM, are
thereby trained to generate a latent representation of the training data while
preserving its dependency structure. This strategy endows the model with high
generalizability.&lt;/p&gt;
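
&lt;p&gt;For readers who prefer symbols: for any Boltzmann machine with energy
function E and parameters θ, the log-likelihood gradient takes the generic
two-phase form below (a standard result, stated here only for orientation;
the article’s contribution lies, roughly speaking, in how the two expectations
are estimated for the joint stack):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-latex&quot;&gt;\frac{\partial \log p(v)}{\partial \theta}
  = \mathbb{E}_{\mathrm{data}}\!\left[-\frac{\partial E}{\partial \theta}\right]
  - \mathbb{E}_{\mathrm{model}}\!\left[-\frac{\partial E}{\partial \theta}\right]
&lt;/code&gt;&lt;/pre&gt;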

&lt;p&gt;Apart from these improvements, DBMs also provide an important hint about the
very nature of pre-training: DBMs generate the sample distribution of the
training data by maximizing the likelihood. This can be pictured as the
inflation of a manifold that clings to the data in the sense of a total least
squares regression. Without pre-training, by contrast, the ANNs only perform an
ordinary least squares regression, which heavily impairs their generalizability.&lt;/p&gt;
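
&lt;p&gt;To unpack the analogy (stated here in generic form, not taken from either
article): ordinary least squares penalizes only vertical residuals, while
total least squares penalizes orthogonal distances to the fitted manifold:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-latex&quot;&gt;\mathrm{OLS:}\ \min_{f} \sum_i \bigl(y_i - f(x_i)\bigr)^2
\qquad
\mathrm{TLS:}\ \min_{\mathcal{M}} \sum_i \operatorname{dist}\bigl((x_i, y_i),\, \mathcal{M}\bigr)^2
&lt;/code&gt;&lt;/pre&gt;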

&lt;h2 id=&quot;the-frequentist-gan&quot;&gt;The Frequentist: GAN&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;Adversarial training is the coolest thing since sliced bread&lt;/p&gt;

  &lt;p&gt;Yann LeCun&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A further obscurity in Hinton’s article was the succession of an undirected
graphical model by a directed one - and indeed, that’s the daredevil part!
Superficially, the parameter spaces of both models may seem comparable, but
they are not at all - in particular with respect to the different probability
distributions they generate. But how to solve this problem? Imagine two kids
sharing toys: no wonder they always quarrel! A group around Ian Goodfellow
provided a fairly straightforward solution: every model gets its own parameter
space!&lt;/p&gt;

&lt;p&gt;The article &lt;a href=&quot;https://arxiv.org/pdf/1406.2661.pdf&quot; target=&quot;_blank&quot;&gt;“Generative Adversarial
Nets”&lt;/a&gt; (Goodfellow et al.
2014) proposes a model where one ANN is trained to generate the sample
distribution, while another is trained to discriminate the artificially
generated samples from truly observed data. The generative network thereby
tries to fool the discriminative network by increasing its proportion of
misclassifications, while the latter tries to decrease it - a zero-sum
game.&lt;/p&gt;
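
&lt;p&gt;In symbols, this zero-sum game is the minimax objective from the paper,
with generator G, discriminator D, data distribution p_data and noise prior
p_z:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-latex&quot;&gt;\min_G \max_D V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
&lt;/code&gt;&lt;/pre&gt;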

&lt;p&gt;With this approach, GANs incidentally solved a further problem of DBMs: since
the likelihood gradient of DBMs is usually not tractable, it has to be
estimated, either by a Markov chain or by variational inference. GANs require
no such estimations. The results are impressive! In particular, the
photorealistic images and videos received much attention, with artificially
generated &lt;a href=&quot;https://thispersondoesnotexist.com/&quot; target=&quot;_blank&quot;&gt;faces&lt;/a&gt; and
&lt;a href=&quot;https://www.youtube.com/watch?v=cQ54GDm1eL0&quot; target=&quot;_blank&quot;&gt;lip sync&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;what-will-come-next&quot;&gt;What will come next?&lt;/h2&gt;

&lt;p&gt;The zoo of deep models grows exponentially! Currently we find ourselves
surrounded by many promising approaches, but there is a reason why the two
approaches mentioned above - DBMs and GANs - are of paramount importance: they
embody, in a fundamental and pure form, the two feuding schools of statistics:
the Bayesians and the Frequentists.&lt;/p&gt;

&lt;p&gt;At this point one could draw parallels to Romeo &amp;amp; Juliet, which raises the
idea of putting the two together and seeing what happens. Lo and behold, some
people already did this! First steps in this direction, e.g. &lt;a href=&quot;http://physics.bu.edu/~pankajm/PY895/BEAM.pdf&quot; target=&quot;_blank&quot;&gt;“Boltzmann Encoded Adversarial
Machines”&lt;/a&gt;
(Fisher et al. 2018), impressively demonstrate that there is a lot of potential
in this fusion! This is no coincidence, as both approaches show distinctive
strengths in structure and representation. So I’ll take the bet: the next big
thing in deep learning is the fusion of GANs and DBMs.&lt;/p&gt;

&lt;p&gt;But let’s extend the projection further into the future. There is one thing
that has received very little attention in deep learning so far: undirected
graphical models like DBMs have the capability to capture dependency
structures - and not only the boring linear ones, but indeed any sufficiently
smooth ones! This property, however, has not yet been exploited at all. Why?
Simply put, there is a large gap in the literature, affecting statistics as
well as differential geometry and topology. Nevertheless, I am convinced that
the prospects of deep structural inference justify the effort of developing a
completely new branch of statistics&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Usually I try not to get mawkish, but the prospects of the AI Revolution can
somehow be overwhelming. And no matter how important the above aspects turn out
to be, in the end they will still represent only a tiny chapter within the long
succession of incredible advances that await us.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Due to the bipartite graph structure of RBMs, all hidden units are conditionally independent given the visible units (and vice versa), so block Gibbs sampling can update an entire layer in parallel, which allows an efficient approximation of the log-likelihood gradient. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The observables are required to trace out sufficiently smooth and Lipschitz-continuous trajectories. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If you stack bipartite graphs together, you still get a bipartite graph (group the odd-numbered layers against the even-numbered ones). Of course, it’s a little more complicated, but under the hood that’s the reason why DBMs can be trained efficiently. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I have started the journey of merging statistics with differential geometry and topology, and would be glad if I could inspire you with my ideas: [&lt;a href=&quot;https://drive.google.com/open?id=1RnRLM7WlSw63zuftRassTI18ohMjr0vE&quot; target=&quot;_blank&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://drive.google.com/open?id=1nkNFPLXrAigD3MsETqt5hN9VI94nLvN0&quot; target=&quot;_blank&quot;&gt;2&lt;/a&gt;, &lt;a href=&quot;https://drive.google.com/open?id=16gl2GCT5taeH9oo86SHkFKZdeTyRRwTs&quot; target=&quot;_blank&quot;&gt;3&lt;/a&gt;, &lt;a href=&quot;https://drive.google.com/open?id=1jssUKKcUFw4LfDiWqjneMKRvVFUmZffP&quot; target=&quot;_blank&quot;&gt;4&lt;/a&gt;] &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Patrick Michl</name></author><category term="science" /><category term="Deep Learning" /><category term="Machine Learning" /><category term="AI" /><summary type="html">When talking about the “AI Revolution” it’s difficult to narrow down a common denominator. This is not only because science fiction didn’t prepare us for our first real encounters with AI, but also due to the many and varied reactions it provokes, ranging from hope to fear.</summary></entry></feed>