<span class="sig-prename descclassname"><span class="pre">torchjd.autojac.</span></span><span class="sig-name descname"><span class="pre">jac</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">outputs</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">inputs</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">retain_graph</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">False</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">parallel_chunk_size</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em><span class="sig-paren">)</span><a class="reference external" href="https://github.com/TorchJD/torchjd/blob/main/src/torchjd/autojac/_jac.py#L17-L133"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torchjd.autojac.jac" title="Link to this definition">¶</a></dt>
<dd><p>Computes the Jacobian of all values in <code class="docutils literal notranslate"><span class="pre">outputs</span></code> with respect to all <code class="docutils literal notranslate"><span class="pre">inputs</span></code>. Returns the
result as a tuple, with one Jacobian per input tensor. The returned Jacobian with respect to
input <code class="docutils literal notranslate"><span class="pre">t</span></code> has shape <code class="docutils literal notranslate"><span class="pre">[m]</span> <span class="pre">+</span> <span class="pre">t.shape</span></code>.</p>
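The shape contract described above can be sketched with plain <code class="docutils literal notranslate"><span class="pre">torch.autograd</span></code>. The function <code class="docutils literal notranslate"><span class="pre">naive_jac</span></code> below is a hypothetical, illustrative reimplementation (not torchjd's actual implementation, which can batch rows via <code class="docutils literal notranslate"><span class="pre">parallel_chunk_size</span></code>): it stacks one gradient row per scalar output value, producing one Jacobian of shape <code class="docutils literal notranslate"><span class="pre">[m] + t.shape</span></code> per input <code class="docutils literal notranslate"><span class="pre">t</span></code>.

```python
import torch

def naive_jac(outputs, inputs):
    """Illustrative sketch of the documented contract: returns a tuple with one
    Jacobian per input tensor, of shape [m] + input.shape, where m is the total
    number of scalar values across all outputs."""
    # Flatten all outputs into a single vector of m scalar values.
    flat = torch.cat([out.reshape(-1) for out in outputs])
    rows_per_input = [[] for _ in inputs]
    for k in range(flat.numel()):
        # One backward pass per output value; retain the graph for the next row.
        grads = torch.autograd.grad(flat[k], inputs, retain_graph=True)
        for i, g in enumerate(grads):
            rows_per_input[i].append(g)
    # Stacking the m gradient rows gives shape [m] + inputs[i].shape.
    return tuple(torch.stack(rows) for rows in rows_per_input)

x = torch.randn(3, requires_grad=True)
y1 = 2.0 * x              # output with 3 values
y2 = x.sum().reshape(1)   # output with 1 value, so m = 3 + 1 = 4
jacobians = naive_jac([y1, y2], [x])
assert jacobians[0].shape == torch.Size([4, 3])  # [m] + x.shape
```

The first three rows are the Jacobian of <code class="docutils literal notranslate"><span class="pre">2.0 * x</span></code> (twice the identity) and the last row is the gradient of <code class="docutils literal notranslate"><span class="pre">x.sum()</span></code> (all ones), matching the "one row for each value of each output" description in the parameter list below.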
<dd class="field-odd"><ul class="simple">
<li><p><strong>outputs</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Sequence</span></code></a>[<a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a>] | <a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a></span>) – The tensor or tensors to differentiate. Should be non-empty. The Jacobians will
have one row for each value of each of these tensors.</p></li>
<li><p><strong>inputs</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Iterable</span></code></a>[<a class="reference external" href="https://docs.pytorch.org/docs/stable/tensors.html#torch.Tensor" title="(in PyTorch v2.10)"><code class="xref py py-class docutils literal notranslate"><span class="pre">Tensor</span></code></a>] | <a class="reference external" href="https://docs.python.org/3/library/constants.html#None" title="(in Python v3.14)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">None</span></code></a></span>) – The tensors with respect to which the Jacobian must be computed. These must have
their <code class="docutils literal notranslate"><span class="pre">requires_grad</span></code> flag set to <code class="docutils literal notranslate"><span class="pre">True</span></code>. If not provided, defaults to the leaf tensors
that were used to compute the <code class="docutils literal notranslate"><span class="pre">outputs</span></code> parameter.</p></li>
<li><p><strong>retain_graph</strong> (<span class="sphinx_autodoc_typehints-type"><a class="reference external" href="https://docs.python.org/3/library/functions.html#bool" title="(in Python v3.14)"><code class="xref py py-class docutils literal notranslate"><span class="pre">bool</span></code></a></span>) – If <code class="docutils literal notranslate"><span class="pre">False</span></code>, the graph used to compute the grad will be freed. Defaults to