Torch-differentiable ERT forward wrapper for `pygimli.physics.ert.ERTModelling`.
Sync the environment:

```shell
uv sync
```

Run a full synthetic inversion with a Torch optimizer and save a result image:

```shell
uv run python examples/invert_torch_ert.py
```

The default output is `examples/outputs/torch_ert_inversion.png`.
The figure includes:

- data fit (`Observed`, `Torch`, `ERTManager`)
- optimization history (Torch loss terms)
- model comparison curve (`True` vs `Torch` vs `ERTManager`)
- model maps (`True`, `Torch`, `ERTManager`)
You can tune runtime/noise/regularization:

```shell
uv run python examples/invert_torch_ert.py --iters 120 --noise-rel 0.01 --lambda-smooth 0.08
```

You can also tune the ERTManager baseline:

```shell
uv run python examples/invert_torch_ert.py --mgr-max-iter 20 --mgr-lam 20
```

To use a finer parameter mesh (smaller cells), reduce `--para-dx`:
```shell
uv run python examples/invert_torch_ert.py --para-dx 0.15
```

Minimal usage:

```python
import torch
import pygimli as pg
import pygimli.physics.ert as ert

from tb_pygimli import ERTForward, ert_forward

# Dipole-dipole survey with 11 electrodes and a parameter mesh.
sensor_pos = pg.utils.grange(0, 10, n=11)
scheme = ert.createData(elecs=sensor_pos, schemeName="dd")
mesh = pg.meshtools.createParaMesh(scheme, paraDepth=5, quality=34.3)

# The forward operator needs data and mesh before any forward call.
fop = ert.ERTModelling(sr=False, verbose=False)
fop.setData(scheme)
fop.setMesh(mesh)

# Homogeneous 100 Ohm*m starting model; gradients flow back through the wrapper.
model = torch.full((int(fop.parameterCount),), 100.0, dtype=torch.float64, requires_grad=True)
response = ert_forward(model, fop)
loss = response.square().mean()
loss.backward()

# Equivalent module-style interface.
layer = ERTForward(fop)
response2 = layer(model.detach().clone().requires_grad_(True))
```

Complex-valued modelling works the same way after enabling complex mode on the operator:

```python
import torch
import pygimli as pg
import pygimli.physics.ert as ert

from tb_pygimli import ert_forward

sensor_pos = pg.utils.grange(0, 10, n=11)
scheme = ert.createData(elecs=sensor_pos, schemeName="dd")
mesh = pg.meshtools.createParaMesh(scheme, paraDepth=5, quality=34.3)

fop = ert.ERTModelling(sr=False, verbose=False)
fop.setData(scheme)
fop.setMesh(mesh)
fop.setComplex(True)

# In complex mode parameterCount covers real and imaginary parts,
# so the complex model holds half as many entries.
model = torch.full((int(fop.parameterCount) // 2,), 100.0 + 5.0j, dtype=torch.complex128, requires_grad=True)
response = ert_forward(model, fop)
loss = (response.abs() ** 2).mean()
loss.backward()
```

Limitations:

- Only first-order gradients are supported.
- Double backward is not supported.
- Batch mode assumes all samples share the same `fop` (same survey/mesh).
- The caller must initialize `fop` with `setData(...)` and `setMesh(...)` before calling.
- A shared `fop` is stateful and should not be used concurrently across threads.
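The first two limitations are characteristic of wrappers built on a custom `torch.autograd.Function` whose `backward` applies the operator's Jacobian once without building a differentiable graph. As an illustrative sketch only (a toy linear forward stands in for the ERT solver; this is not the actual `tb_pygimli` implementation):

```python
import torch

# Toy "simulator": a fixed linear operator J @ m, so the gradient is easy
# to check by hand. The real ERT forward is nonlinear.
J = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=torch.float64)

def external_forward(m_numpy):
    # Stands in for a black-box solver that only consumes/returns plain arrays.
    return J.numpy() @ m_numpy

class ToyForward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, model):
        # Leave torch, call the external code, and re-enter torch.
        return torch.from_numpy(external_forward(model.detach().numpy()))

    @staticmethod
    def backward(ctx, grad_output):
        # First-order vector-Jacobian product; no graph is built here,
        # which is why double backward is unsupported.
        return J.t() @ grad_output

model = torch.tensor([1.0, 1.0], dtype=torch.float64, requires_grad=True)
resp = ToyForward.apply(model)
loss = resp.sum()
loss.backward()
# model.grad is J^T @ ones, i.e. the column sums of J: [9., 12.]
print(model.grad)
```

Because `backward` multiplies by a precomputed Jacobian rather than tracing differentiable ops, only first-order gradients come out, and any state the external operator mutates per call makes it unsafe to share across threads.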