Vcholerae1/tb-pygimli


tb-pygimli

A Torch-differentiable forward wrapper for pygimli.physics.ert.ERTModelling: ERT responses are computed by pygimli, and gradients flow back to Torch model tensors.
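The wrapper's internals aren't shown in this README, but a Torch-differentiable bridge to an external solver is typically built on torch.autograd.Function: the forward pass calls the external code, and the backward pass applies the solver's Jacobian as a vector-Jacobian product. A minimal sketch with a toy linear operator standing in for pygimli (all names below are illustrative, not the tb_pygimli API):

```python
import numpy as np
import torch

# Toy stand-in for an external, non-Torch solver that can return both a
# response and a Jacobian (as pygimli's ERTModelling can).
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

def external_forward(m):
    return A @ m

def external_jacobian(m):
    return A  # linear operator, so the Jacobian is constant

class WrappedForward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, model):
        m = model.detach().cpu().numpy()
        ctx.save_for_backward(torch.from_numpy(external_jacobian(m)))
        return torch.from_numpy(external_forward(m))

    @staticmethod
    def backward(ctx, grad_output):
        (jac,) = ctx.saved_tensors
        # Vector-Jacobian product: dL/dmodel = J^T @ dL/dresponse
        return jac.T @ grad_output

model = torch.tensor([1.0, 2.0], dtype=torch.float64, requires_grad=True)
response = WrappedForward.apply(model)
response.sum().backward()  # model.grad now holds J^T @ ones
```

Because backward applies a precomputed Jacobian rather than building a differentiable graph, a wrapper of this shape supports only first-order gradients, which matches the constraints listed at the end of this README.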

Install

uv sync

Torch Inversion Example With Visualization

Run a full synthetic inversion with a Torch optimizer and save a result image:

uv run python examples/invert_torch_ert.py

Default output:

  • examples/outputs/torch_ert_inversion.png

The figure includes:

  • data fit (Observed, Torch, ERTManager)
  • optimization history (Torch loss terms)
  • model comparison curve (True vs Torch vs ERTManager)
  • model maps (True, Torch, ERTManager)

You can tune the iteration count, noise level, and smoothness regularization:

uv run python examples/invert_torch_ert.py --iters 120 --noise-rel 0.01 --lambda-smooth 0.08
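The --lambda-smooth flag presumably weights a roughness penalty against the data misfit; the exact form used by the example script may differ. A generic sketch of such a composite loss:

```python
import torch

def smoothness_penalty(model):
    # First-difference roughness of the model vector
    return (model[1:] - model[:-1]).square().mean()

def total_loss(d_pred, d_obs, model, lam=0.08):
    # Relative data misfit plus a weighted smoothness term
    data_misfit = ((d_pred - d_obs) / d_obs).square().mean()
    return data_misfit + lam * smoothness_penalty(model)
```

Raising lam favors smoother models at the cost of a looser data fit, which is the trade-off the flag exposes.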

You can also tune the ERTManager baseline:

uv run python examples/invert_torch_ert.py --mgr-max-iter 20 --mgr-lam 20

To use a finer parameter mesh (smaller cells), reduce --para-dx:

uv run python examples/invert_torch_ert.py --para-dx 0.15

Real-valued Example

import torch
import pygimli as pg
import pygimli.physics.ert as ert
from tb_pygimli import ERTForward, ert_forward

sensor_pos = pg.utils.grange(0, 10, n=11)                   # 11 electrodes, 0-10 m
scheme = ert.createData(elecs=sensor_pos, schemeName="dd")  # dipole-dipole survey
mesh = pg.meshtools.createParaMesh(scheme, paraDepth=5, quality=34.3)
fop = ert.ERTModelling(sr=False, verbose=False)
fop.setData(scheme)  # both setData and setMesh are required before calling
fop.setMesh(mesh)

# Homogeneous starting model: one resistivity value per parameter cell
model = torch.full((int(fop.parameterCount),), 100.0, dtype=torch.float64, requires_grad=True)
response = ert_forward(model, fop)
loss = response.square().mean()
loss.backward()

# Module-style interface to the same forward operator
layer = ERTForward(fop)
response2 = layer(model.detach().clone().requires_grad_(True))
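A typical use of the layer is an inversion loop driven by a standard Torch optimizer. The sketch below substitutes a toy linear operator for ERTForward(fop) so it runs without a mesh; with the real layer you would optimize log-resistivities in the same way:

```python
import torch

# Toy stand-in for layer = ERTForward(fop)
A = torch.tensor([[1.0, 0.5], [0.2, 2.0]], dtype=torch.float64)
forward_fn = lambda m: A @ m

m_true = torch.tensor([100.0, 80.0], dtype=torch.float64)
d_obs = forward_fn(m_true)  # synthetic "observed" data

# Optimize in log-space so resistivities stay positive (a common choice)
log_m = torch.full((2,), 50.0, dtype=torch.float64).log().requires_grad_(True)
opt = torch.optim.Adam([log_m], lr=0.05)
for _ in range(400):
    opt.zero_grad()
    loss = ((forward_fn(log_m.exp()) - d_obs) / d_obs).square().mean()
    loss.backward()
    opt.step()
```

With the real layer you would replace forward_fn with ERTForward(fop) and add a regularization term such as the smoothness penalty weighted by --lambda-smooth.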

Complex-valued Example

import torch
import pygimli as pg
import pygimli.physics.ert as ert
from tb_pygimli import ert_forward

sensor_pos = pg.utils.grange(0, 10, n=11)
scheme = ert.createData(elecs=sensor_pos, schemeName="dd")
mesh = pg.meshtools.createParaMesh(scheme, paraDepth=5, quality=34.3)
fop = ert.ERTModelling(sr=False, verbose=False)
fop.setData(scheme)
fop.setMesh(mesh)
fop.setComplex(True)

# With setComplex(True), fop.parameterCount covers real and imaginary
# parts, so the complex model holds half that many entries
model = torch.full((int(fop.parameterCount) // 2,), 100.0 + 5.0j, dtype=torch.complex128, requires_grad=True)
response = ert_forward(model, fop)
loss = (response.abs() ** 2).mean()
loss.backward()
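For a real-valued loss of a complex model, PyTorch fills model.grad following its complex-autograd convention: the gradients with respect to the real and imaginary parts are packed as one complex number, so plain gradient descent on the complex tensor matches descent on its components. A self-contained check of that convention:

```python
import torch

z = torch.tensor([3.0 + 4.0j], dtype=torch.complex128, requires_grad=True)
loss = (z.abs() ** 2).mean()  # |z|^2 = x^2 + y^2, a real-valued loss
loss.backward()
# z.grad packs (dL/dx, dL/dy) as a complex number: 2x + 2yj = 2z
```

This is why a step like model = model - lr * model.grad decreases the loss for complex models just as it does for real ones.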

Constraints and Notes

  • Only first-order gradients are supported.
  • Double backward is not supported.
  • Batch mode assumes all samples share the same fop (same survey/mesh).
  • Caller must initialize fop with setData(...) and setMesh(...) before calling.
  • Shared fop is stateful and should not be used concurrently across threads.
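Given the last constraint, the simplest way to share one fop across threads is to serialize calls with a lock. A minimal sketch (the lock and helper are illustrative, not part of tb_pygimli):

```python
import threading

_fop_lock = threading.Lock()

def safe_forward(model, fop_call):
    # fop_call stands in for a closure like: lambda m: ert_forward(m, fop)
    with _fop_lock:
        return fop_call(model)
```

Serializing calls gives correctness but no speedup; process-based parallelism with one fop per worker avoids the contention entirely.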
