Commit 6cdfd3f

Deploying to gh-pages from @ 250ea21 🚀
justinplakoo committed Feb 25, 2025
1 parent e63cede commit 6cdfd3f
Showing 21 changed files with 393 additions and 1,304 deletions.
150 changes: 4 additions & 146 deletions _modules/deel/torchlip/functional.html

Large diffs are not rendered by default.

460 changes: 0 additions & 460 deletions _modules/deel/torchlip/init.html

This file was deleted.

4 changes: 2 additions & 2 deletions _modules/deel/torchlip/modules/activation.html
@@ -412,7 +412,7 @@ <h1>Source code for deel.torchlip.modules.activation</h1><div class="highlight">
         return layer</div>


-class HouseHolder(nn.Module, LipschitzModule):
+<div class="viewcode-block" id="HouseHolder"><a class="viewcode-back" href="../../../../deel.torchlip.html#deel.torchlip.HouseHolder">[docs]</a>class HouseHolder(nn.Module, LipschitzModule):
     def __init__(self, channels, k_coef_lip: float = 1.0, theta_initializer=None):
         """
         Householder activation:
@@ -461,7 +461,7 @@
         return torch.cat([a, b], dim=axis)

     def vanilla_export(self):
-        return self
+        return self</div>
 </pre></div>

 </article>
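The `HouseHolder` activation diffed above applies a parameterized Householder reflection to pairs of channels. As a minimal sketch of why such an activation is 1-Lipschitz (not torchlip's actual implementation — the function name, the 2-D setup, and the fixed `theta` are illustrative), a Householder reflection is an orthogonal map and therefore preserves norms exactly:

```python
import math

def householder_reflect(z, v):
    # Reflect z across the hyperplane orthogonal to the unit vector v:
    # H z = z - 2 (v . z) v, where H = I - 2 v v^T is orthogonal.
    dot = sum(vi * zi for vi, zi in zip(v, z))
    return [zi - 2.0 * dot * vi for vi, zi in zip(v, z)]

def norm(z):
    return math.sqrt(sum(zi * zi for zi in z))

# Unit vector parameterized by an angle theta; the learnable parameter
# in a Householder activation plays a similar role.
theta = 0.7
v = [math.cos(theta), math.sin(theta)]

z = [3.0, -4.0]
out = householder_reflect(z, v)

# An orthogonal map preserves norms, hence is exactly 1-Lipschitz.
assert abs(norm(out) - norm(z)) < 1e-12
```

Because the reflection is norm-preserving for every choice of `theta`, the Lipschitz constant of the layer does not depend on the learned parameter — which is the property Lipschitz networks need from their activations.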
17 changes: 4 additions & 13 deletions _modules/deel/torchlip/modules/conv.html
@@ -236,7 +236,7 @@ <h1>Source code for deel.torchlip.modules.conv</h1><div class="highlight"><pre>
 from .module import LipschitzModule


-class SpectralConv1d(PadConv1d, LipschitzModule):
+<div class="viewcode-block" id="SpectralConv1d"><a class="viewcode-back" href="../../../../deel.torchlip.html#deel.torchlip.SpectralConv1d">[docs]</a>class SpectralConv1d(PadConv1d, LipschitzModule):
     def __init__(
         self,
         in_channels: int,
@@ -283,10 +283,6 @@

         This documentation reuse the body of the original torch.nn.Conv1D doc.
         """
-        # if not ((dilation == (1, 1)) or (dilation == [1, 1]) or (dilation == 1)):
-        #     raise RuntimeError("NormalizedConv does not support dilation rate")
-        # if padding_mode != "same":
-        #     raise RuntimeError("NormalizedConv only support padding='same'")

         PadConv1d.__init__(
             self,
@@ -316,7 +312,7 @@
         self.apply_lipschitz_factor()

     def vanilla_export(self):
-        return PadConv1d.vanilla_export(self)
+        return PadConv1d.vanilla_export(self)</div>


 <div class="viewcode-block" id="SpectralConv2d"><a class="viewcode-back" href="../../../../deel.torchlip.html#deel.torchlip.SpectralConv2d">[docs]</a>class SpectralConv2d(PadConv2d, LipschitzModule):
@@ -354,7 +350,8 @@
             padding (int or tuple, optional): Zero-padding added to both sides of
                 the input.
             padding_mode (string, optional): ``'zeros'``, ``'reflect'``,
-                ``'replicate'`` or ``'circular'``. Default: ``'zeros'``
+                ``'replicate'``, ``'symmetric'`` or ``'circular'``.
+                Default: ``'zeros'``
             dilation (int or tuple, optional): Spacing between kernel elements.
                 Has to be one
             groups (int, optional): Number of blocked connections from input
@@ -367,10 +364,6 @@

         This documentation reuse the body of the original torch.nn.Conv2D doc.
         """
-        # if not ((dilation == (1, 1)) or (dilation == [1, 1]) or (dilation == 1)):
-        #     raise RuntimeError("NormalizedConv does not support dilation rate")
-        # if padding_mode != "same":
-        #     raise RuntimeError("NormalizedConv only support padding='same'")

         PadConv2d.__init__(
             self,
@@ -423,8 +416,6 @@
     ):
         if np.prod([stride]) != 1:
             raise RuntimeError("FrobeniusConv2d does not support strides")
-        # if padding_mode != "same":
-        #     raise RuntimeError("NormalizedConv only support padding='same'")

         PadConv2d.__init__(
             self,
7 changes: 7 additions & 0 deletions _modules/deel/torchlip/modules/downsampling.html
@@ -228,6 +228,13 @@ <h1>Source code for deel.torchlip.modules.downsampling</h1><div class="highlight


 <div class="viewcode-block" id="InvertibleDownSampling"><a class="viewcode-back" href="../../../../deel.torchlip.html#deel.torchlip.InvertibleDownSampling">[docs]</a>class InvertibleDownSampling(torch.nn.PixelUnshuffle, LipschitzModule):
+    """
+    A combination of torch.nn.PixelUnshuffle and LipschitzModule.
+    This module is used to downsample the input tensor by a factor of kernel_size.
+    The resulting output tensor has kernel_size^2 times more channels
+    than the input tensor.
+    """
+
     def __init__(self, kernel_size: int, k_coef_lip: float = 1.0):
         torch.nn.PixelUnshuffle.__init__(self, downscale_factor=kernel_size)
         LipschitzModule.__init__(self, k_coef_lip)
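The docstring added above states that the output has kernel_size^2 times more channels than the input. A plain-Python sketch of the pixel-unshuffle rearrangement behind that claim (illustrative only — `torch.nn.PixelUnshuffle`'s actual channel ordering may differ from the one chosen here):

```python
def pixel_unshuffle(x, r):
    # x: nested list [C][H][W]; returns [C*r*r][H//r][W//r].
    # Each r x r spatial block becomes r*r extra channels, so the
    # channel count grows by a factor of r**2 while H and W shrink by r.
    C, H, W = len(x), len(x[0]), len(x[0][0])
    assert H % r == 0 and W % r == 0
    out = []
    for c in range(C):
        for i in range(r):
            for j in range(r):
                out.append([[x[c][h * r + i][w * r + j]
                             for w in range(W // r)]
                            for h in range(H // r)])
    return out

# One 4x4 channel -> four 2x2 channels (a factor of r**2 = 4 more channels).
x = [[[1, 2, 3, 4],
      [5, 6, 7, 8],
      [9, 10, 11, 12],
      [13, 14, 15, 16]]]
y = pixel_unshuffle(x, 2)
assert len(y) == 4 and len(y[0]) == 2 and len(y[0][0]) == 2
```

Since the operation only permutes tensor entries, it preserves the Euclidean norm of the input, which is why it can be used as an exactly 1-Lipschitz (and invertible) downsampling.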
9 changes: 6 additions & 3 deletions _modules/deel/torchlip/modules/loss.html
@@ -645,7 +645,8 @@ <h1>Source code for deel.torchlip.modules.loss</h1><div class="highlight"><pre>
     ) -> None:
         """
         The loss add a temperature (tau) factor to the CrossEntropyLoss
-        CrossEntropyLoss(tau * input, target)
+        CrossEntropyLoss(tau * input, target).
+
         See `CrossEntropyLoss` for more details on arguments.

         Args:
@@ -681,7 +682,8 @@
     ) -> None:
         """
         The loss add a temperature (tau) factor to the BCEWithLogitsLoss
-        BCEWithLogitsLoss(tau * input, target)
+        BCEWithLogitsLoss(tau * input, target).
+
         See `BCEWithLogitsLoss` for more details on arguments.

         Args:
@@ -712,7 +714,8 @@

         `target` and `input` must be of shape (batch_size, # classes).
         Note that `target` should be one-hot encoded, +/-1 values.
-        ReLU(min_margin - (input[target>0] - max(input[target<=0])))
+        :math:`\\text{ReLU}(\\text{min\\_margin} - (\\text{input}[\\text{target}>0]
+        - \\text{max}(\\text{input}[\\text{target}<=0])))`
         is computed element-wise and averaged over the batch.

         Args:
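The temperature factor documented in the diff above simply rescales the logits before the cross-entropy is taken. A stdlib-only sketch of the idea (the function names are ours, not the torchlip API): with `tau = 1` the plain loss is recovered, and a larger `tau` sharpens the softmax, pushing the loss toward a hard 0/1 classification error.

```python
import math

def cross_entropy(logits, target):
    # Cross-entropy of a single sample: -log softmax(logits)[target],
    # computed via a numerically stable log-sum-exp.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target]

def tau_cross_entropy(logits, target, tau):
    # Temperature-scaled variant: CrossEntropyLoss(tau * input, target).
    return cross_entropy([tau * l for l in logits], target)

logits = [2.0, -1.0, 0.5]

# tau = 1 recovers the plain loss.
assert abs(tau_cross_entropy(logits, 0, 1.0) - cross_entropy(logits, 0)) < 1e-12

# The correct class has the largest logit, so raising the temperature
# sharpens the softmax and shrinks the loss toward zero.
assert tau_cross_entropy(logits, 0, 5.0) < tau_cross_entropy(logits, 0, 1.0)
```

The same rescaling applies verbatim to the BCE-with-logits variant; `tau` is useful in Lipschitz networks because the constrained logits have a bounded range that the temperature can compensate for.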
2 changes: 1 addition & 1 deletion _modules/deel/torchlip/modules/pooling.html
@@ -306,7 +306,7 @@ <h1>Source code for deel.torchlip.modules.pooling</h1><div class="highlight"><pr
         k_coef_lip: float = 1.0,
     ):
         """
-        Applies a 2D adaptive max pooling over an input signal composed of several
+        Applies a 2D adaptive average pooling over an input signal composed of several
         input planes.

         The output is of size H x W, for any input size.
7 changes: 7 additions & 0 deletions _modules/deel/torchlip/modules/upsampling.html
@@ -229,6 +229,13 @@ <h1>Source code for deel.torchlip.modules.upsampling</h1><div class="highlight">


 <div class="viewcode-block" id="InvertibleUpSampling"><a class="viewcode-back" href="../../../../deel.torchlip.html#deel.torchlip.InvertibleUpSampling">[docs]</a>class InvertibleUpSampling(torch.nn.PixelShuffle, LipschitzModule):
+    """
+    A combination of torch.nn.PixelShuffle and LipschitzModule.
+    This module is used to upsample the input tensor by a factor of kernel_size.
+    The resulting output tensor has kernel_size^2 times less channels
+    than the input tensor.
+    """
+
     def __init__(self, kernel_size: int, k_coef_lip: float = 1.0):
         torch.nn.PixelShuffle.__init__(self, upscale_factor=kernel_size)
         LipschitzModule.__init__(self, k_coef_lip)
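The upsampling docstring added above is the mirror image of the downsampling one: kernel_size^2 times fewer channels, with height and width growing by kernel_size. A plain-Python sketch of the pixel-shuffle rearrangement (illustrative only — `torch.nn.PixelShuffle`'s actual channel ordering may differ):

```python
def pixel_shuffle(x, r):
    # x: nested list [C*r*r][H][W]; returns [C][H*r][W*r].
    # Groups of r*r channels are rearranged into r x r spatial blocks,
    # so the channel count shrinks by r**2 while H and W grow by r.
    C2, H, W = len(x), len(x[0]), len(x[0][0])
    assert C2 % (r * r) == 0
    out = []
    for c in range(C2 // (r * r)):
        plane = [[0] * (W * r) for _ in range(H * r)]
        for i in range(r):
            for j in range(r):
                src = x[c * r * r + i * r + j]
                for h in range(H):
                    for w in range(W):
                        plane[h * r + i][w * r + j] = src[h][w]
        out.append(plane)
    return out

# Four 2x2 channels -> one 4x4 channel (a factor of r**2 = 4 fewer channels).
x = [[[1, 3], [9, 11]], [[2, 4], [10, 12]],
     [[5, 7], [13, 15]], [[6, 8], [14, 16]]]
y = pixel_shuffle(x, 2)
assert len(y) == 1 and len(y[0]) == 4 and len(y[0][0]) == 4
assert y[0][0] == [1, 2, 3, 4]
```

As a pure permutation of entries it is norm-preserving and invertible, which is exactly what an "invertible upsampling" in a 1-Lipschitz network requires.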
1 change: 0 additions & 1 deletion _modules/index.html
@@ -195,7 +195,6 @@

 <h1>All modules for which code is available</h1>
 <ul><li><a href="deel/torchlip/functional.html">deel.torchlip.functional</a></li>
-<li><a href="deel/torchlip/init.html">deel.torchlip.init</a></li>
 <li><a href="deel/torchlip/modules/activation.html">deel.torchlip.modules.activation</a></li>
 <li><a href="deel/torchlip/modules/conv.html">deel.torchlip.modules.conv</a></li>
 <li><a href="deel/torchlip/modules/downsampling.html">deel.torchlip.modules.downsampling</a></li>
11 changes: 8 additions & 3 deletions _sources/basic_example.rst.txt
@@ -24,8 +24,8 @@ For instance, the :class:`SpectralLinear` module is simply a :class:`torch.nn.Li
     torch.nn.init.orthogonal_(m.weight)
     m.bias.data.fill_(0.0)
-    torch.nn.utils.spectral_norm(m, "weight", 3)
-    torchlip.utils.bjorck_norm(m, "weight", 15)
+    torch.nn.utils.spectral_norm(m, "weight", eps=1e-3)
+    torchlip.utils.bjorck_norm(m, "weight", eps=1e-3)

 The following table indicates which module are safe to use in a Lipschitz network, and which are not.
@@ -48,6 +48,10 @@
      - no
      - :class:`.SpectralConv2d` \ :raw-html-m2r:`<br>`\ :class:`.FrobeniusConv2d`
      - :class:`.SpectralConv2d` also implements Björck normalization.
+   * - :class:`torch.nn.Conv1d`
+     - no
+     - :class:`.SpectralConv1d`
+     - :class:`.SpectralConv1d` also implements Björck normalization.
    * - :class:`MaxPooling`\ :raw-html-m2r:`<br>`\ :class:`GlobalMaxPooling`
      - yes
      - n/a
@@ -111,11 +115,12 @@ Here is a simple example showing how to build a 1-Lipschitz network:
     # binary classification with -1 and +1 labels to the target
     # must be fixed from the dataset.
     optimizer = torch.optim.Adam(lr=0.01, params=model.parameters())
+    hkr_loss = HKRLoss(alpha=10, min_margin=1)

     for data, target in mnist_08:
         data, target = data.to(device), target.to(device)
         optimizer.zero_grad()
         output = model(data)
-        loss = torchlip.functional.hkr_loss(output, target, alpha=10, min_margin=1)
+        loss = hkr_loss(output, target)
         loss.backward()
         optimizer.step()
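The snippet updated in this diff chains spectral normalization with Björck orthonormalization. Spectral normalization divides the weight by an estimate of its largest singular value, obtained by power iteration. A stdlib-only sketch of that step (an illustration of the principle, not torchlip's or PyTorch's implementation):

```python
import math

def spectral_norm_estimate(w, n_iter=50):
    # Estimate the largest singular value of a matrix w (list of rows)
    # by power iteration on w^T w, as spectral normalization does.
    n = len(w[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(n_iter):
        u = [sum(w[i][j] * v[j] for j in range(n)) for i in range(len(w))]
        v = [sum(w[i][j] * u[i] for i in range(len(w))) for j in range(n)]
        nv = math.sqrt(sum(x * x for x in v))
        v = [x / nv for x in v]
    u = [sum(w[i][j] * v[j] for j in range(n)) for i in range(len(w))]
    return math.sqrt(sum(x * x for x in u))

w = [[3.0, 0.0], [4.0, 5.0]]
sigma = spectral_norm_estimate(w)

# Dividing the weight by sigma makes the linear layer at most 1-Lipschitz.
w_normalized = [[x / sigma for x in row] for row in w]
assert abs(spectral_norm_estimate(w_normalized) - 1.0) < 1e-6
```

Spectral normalization alone only bounds the largest singular value; the subsequent Björck step pushes *all* singular values toward 1, which is why the two are combined for gradient-norm-preserving layers.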
10 changes: 9 additions & 1 deletion _sources/deel.torchlip.functional.rst.txt
@@ -25,7 +25,10 @@ Non-linear activation functions
 ~~~~~~~~~~~~~~~~

 .. autofunction:: lipschitz_prelu

+Padding functions
+-------------------------------
+.. autoclass:: SymmetricPad

 Loss functions
 --------------
@@ -43,3 +46,8 @@ Loss functions

 .. autofunction:: hinge_multiclass_loss
 .. autofunction:: hkr_multiclass_loss
+
+:hidden:`others`
+~~~~~~~~~~~~~~~~
+
+.. autofunction:: process_labels_for_multi_gpu
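The new `SymmetricPad` entry documents a padding mode that mirrors the signal *including* the edge sample, unlike `'reflect'` padding, which excludes it. A one-dimensional stdlib sketch (the helper name is ours, and it assumes `pad <= len(x)`):

```python
def symmetric_pad_1d(x, pad):
    # Symmetric padding mirrors the signal including the edge sample:
    # [1, 2, 3] padded by 2 becomes [2, 1, | 1, 2, 3, | 3, 2].
    # ('reflect' padding would instead give [3, 2, | 1, 2, 3, | 2, 1].)
    left = x[:pad][::-1]
    right = x[-pad:][::-1]
    return left + x + right

assert symmetric_pad_1d([1, 2, 3], 2) == [2, 1, 1, 2, 3, 3, 2]
```

Symmetric padding matters for Lipschitz convolutions because, like zero padding, it does not amplify the input norm at the boundary.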
20 changes: 0 additions & 20 deletions _sources/deel.torchlip.init.rst.txt

This file was deleted.

29 changes: 17 additions & 12 deletions _sources/deel.torchlip.rst.txt
@@ -10,13 +10,7 @@ deel.torchlip

 .. currentmodule:: deel.torchlip

-.. toctree::
-   :maxdepth: 4
-
-   deel.torchlip.utils
-   deel.torchlip.functional
-   deel.torchlip.init
-   deel.torchlip.normalizers

 Containers
 ----------
@@ -26,19 +20,26 @@ Containers
 .. autoclass:: Sequential


+Linear Layers
+-------------
+
+.. autoclass:: SpectralLinear
+.. autoclass:: FrobeniusLinear
+
 Convolution Layers
 ------------------

+.. autoclass:: SpectralConv1d
 .. autoclass:: SpectralConv2d
 .. autoclass:: FrobeniusConv2d
 .. autoclass:: SpectralConvTranspose2d

 Pooling Layers
 --------------

+.. autoclass:: ScaledAdaptiveAvgPool2d
 .. autoclass:: ScaledAvgPool2d
 .. autoclass:: ScaledL2NormPool2d
-.. autoclass:: ScaledAdaptiveAvgPool2d
 .. autoclass:: ScaledAdaptativeL2NormPool2d
 .. autoclass:: InvertibleDownSampling
 .. autoclass:: InvertibleUpSampling
@@ -51,12 +52,8 @@ Non-linear Activations
 .. autoclass:: GroupSort2
 .. autoclass:: FullSort
 .. autoclass:: LPReLU
+.. autoclass:: HouseHolder

-Linear Layers
--------------
-
-.. autoclass:: SpectralLinear
-.. autoclass:: FrobeniusLinear

 Loss Functions
 --------------
@@ -70,3 +67,11 @@ Loss Functions

 .. autoclass:: TauCrossEntropyLoss
 .. autoclass:: TauBCEWithLogitsLoss
 .. autoclass:: CategoricalHingeLoss
+
+
+.. toctree::
+   :maxdepth: 4
+
+   deel.torchlip.utils
+   deel.torchlip.functional
+   deel.torchlip.normalizers