# TIDE: Torch-based Inversion & Development Engine
TIDE is a PyTorch-based library for high-frequency electromagnetic wave propagation and inversion, built on Maxwell's equations. It provides CPU and CUDA implementations for forward modeling, gradient computation, and full-waveform inversion workflows.
## Features

- Maxwell Equation Solvers:
  - 2D TM mode propagation
  - 3D Maxwell propagation
- Automatic Differentiation: Gradient support through PyTorch's autograd hooks
- High Performance: Optimized C/CUDA kernels for critical operations
- Flexible Storage: Device/CPU/disk snapshot modes for gradient computation
- Staggered Grid: Industry-standard FDTD staggered grid implementation
- PML Boundaries: Perfectly Matched Layer absorbing boundaries
- Snapshot Compression: Optional BF16 snapshot compression on the default path
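To give a feel for the staggered-grid leapfrog idea behind FDTD, here is a minimal 1D sketch in pure Python. This is an illustration of the general technique only, not TIDE's actual C/CUDA kernels: E lives on integer grid points, H on half-integer points, and the two fields update in alternating half steps.

```python
import math

# Minimal 1D FDTD leapfrog on a staggered grid (normalized units).
# Illustration of the staggered-grid scheme; TIDE's 2D/3D kernels differ.
nx, nt = 200, 300
courant = 0.5            # Courant number S = c*dt/dx, must be <= 1 in 1D
ez = [0.0] * nx          # electric field on integer nodes
hy = [0.0] * (nx - 1)    # magnetic field staggered between E nodes

for n in range(nt):
    # update H from the spatial difference of E
    for i in range(nx - 1):
        hy[i] += courant * (ez[i + 1] - ez[i])
    # update E from the spatial difference of H
    for i in range(1, nx - 1):
        ez[i] += courant * (hy[i] - hy[i - 1])
    # soft Gaussian source injected at the grid centre
    ez[nx // 2] += math.exp(-((n - 30) / 10.0) ** 2)

print(max(abs(v) for v in ez))  # bounded: the scheme is stable for S <= 1
```

Without absorbing boundaries the pulse reflects at the grid edges, which is exactly the problem the PML boundaries above address.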
| Capability | Entry Point | Status | Notes |
|---|---|---|---|
| 2D TM forward modeling | `tide.maxwelltm` | Stable | Primary onboarding path |
| 2D TM inversion / autograd | `tide.maxwelltm`, `MaxwellTM` | Stable | Uses PyTorch autograd |
| 3D forward modeling | `tide.maxwell3d` | Stable | Supports component selection |
| 3D inversion / gradients | `tide.maxwell3d`, `Maxwell3D` | Stable with constraints | Check the limitations guide before scaling up |
| Snapshot storage modes | `storage_mode=*` | Stable | Device, CPU, disk, none, and auto |
| Callbacks | `forward_callback`, `backward_callback` | Stable | Keep callback work lightweight |
| Debye dispersion | `DebyeDispersion` | Advanced | Requires explicit time-step validation |
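The inversion rows above rely on PyTorch autograd: model parameters carry `requires_grad=True` through the forward solver, and a data-misfit loss backpropagates into model gradients. A minimal sketch of that pattern, using a toy linear operator as a stand-in for the real solver (with `tide.maxwelltm` or `MaxwellTM`, the forward call takes the arguments shown in the quick-start example instead):

```python
import torch

# Toy stand-in for a differentiable forward solver: a fixed linear
# "physics" operator mapping a model vector to synthetic receiver data.
torch.manual_seed(0)
forward_op = torch.randn(32, 16)

def forward(model: torch.Tensor) -> torch.Tensor:
    return forward_op @ model

true_model = torch.full((16,), 4.0)
observed = forward(true_model)

# Start from a homogeneous guess and invert via autograd -- the same
# loop shape applies with a TIDE forward call in place of `forward`.
model = torch.full((16,), 2.0, requires_grad=True)
opt = torch.optim.Adam([model], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(forward(model), observed)
    loss.backward()   # gradients w.r.t. the model via autograd
    opt.step()

print(loss.item())  # data misfit shrinks as the model is recovered
```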
## Installation

Ensure you have a working PyTorch installation with the CUDA bindings appropriate for your system.

For CUDA environments, you may need to install a CUDA-enabled PyTorch build first:

```bash
uv pip install torch --index-url https://download.pytorch.org/whl/cu128
```

The `cu128` tag targets CUDA 12.8; replace it to match your CUDA version.

Then install TIDE via uv or pip:

```bash
uv pip install tide-GPR
```

or

```bash
pip install tide-GPR
```

### Building from source

We recommend using uv for building:

```bash
git clone https://github.com/vcholerae1/tide.git
cd tide
uv build
```

To rebuild only the native backend during development:

```bash
bash scripts/build_csrc.sh
```

Requirements:

- Python >= 3.12
- PyTorch >= 2.9.1
- CUDA Toolkit (optional, for GPU support)
- CMake >= 3.28 (optional, for building from source)
## Quick Start

```python
import torch
import tide

# Create a simple model
nx, ny = 200, 100
epsilon = torch.ones(ny, nx) * 4.0   # Relative permittivity
sigma = torch.zeros_like(epsilon)    # Conductivity (S/m)
mu = torch.ones_like(epsilon)        # Relative permeability
epsilon[50:, :] = 9.0                # Add a layer

# Set up source
source_amplitude = tide.ricker(
    freq=4e8,  # 400 MHz
    length=1000,
    dt=1e-11,
    peak_time=5e-10,
).reshape(1, 1, -1)
source_location = torch.tensor([[[10, 100]]], dtype=torch.long)
receiver_location = torch.tensor([[[10, 150]]], dtype=torch.long)

# Run forward simulation
*_, receiver_data = tide.maxwelltm(
    epsilon=epsilon,
    sigma=sigma,
    mu=mu,
    grid_spacing=0.01,
    dt=1e-11,
    source_amplitude=source_amplitude,
    source_location=source_location,
    receiver_location=receiver_location,
    pml_width=10,
)

print(f"Recorded data shape: {receiver_data.shape}")
```

## API Overview

- `tide.maxwelltm`: 2D TM solver
- `tide.maxwell3d`: 3D solver
- `tide.wavelets`: Source wavelet generation
- `tide.callbacks`: Callback state and factories
- `tide.storage`: Snapshot storage and compression controls
- `tide.resampling`: CFL resampling helpers
- `tide.cfl`: CFL condition helper
- `tide.padding`: Padding and interior masking helpers
- `tide.validation`: Input validation helpers
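The quick-start example builds its source with `tide.ricker`. For intuition, here is a pure-Python sketch of the standard Ricker ("Mexican hat") wavelet formula, r(t) = (1 − 2a)·exp(−a) with a = (πf(t − t₀))²; TIDE's own implementation may differ in normalization or signature details:

```python
import math

def ricker(freq: float, length: int, dt: float, peak_time: float) -> list[float]:
    """Standard Ricker wavelet: (1 - 2a) * exp(-a), a = (pi*f*(t - t0))^2."""
    out = []
    for n in range(length):
        a = (math.pi * freq * (n * dt - peak_time)) ** 2
        out.append((1.0 - 2.0 * a) * math.exp(-a))
    return out

# Same parameters as the quick-start call above
w = ricker(freq=4e8, length=1000, dt=1e-11, peak_time=5e-10)
print(w.index(max(w)))  # peak of 1.0 at t = peak_time, i.e. sample 50
```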
Storage and precision controls:

```python
out = tide.maxwelltm(
    epsilon,
    sigma,
    mu,
    grid_spacing=0.02,
    dt=4e-11,
    source_amplitude=src,
    source_location=src_loc,
    receiver_location=rec_loc,
    storage_mode="auto",
    storage_compression="bf16",
)
```

Notes:

- `storage_mode` accepts `device`, `cpu`, `disk`, `none`, and `auto`.
- `storage_compression` accepts `none` or `bf16` for TM2D snapshot storage.
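`bf16` compression halves snapshot memory by keeping only 16 of float32's 32 bits: the sign, the full 8-bit exponent, and the top 7 significand bits. A stand-alone sketch of that truncation using only the standard library (TIDE's actual codec lives in `tide.storage` and may round rather than truncate):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    # bfloat16 is the top 16 bits of the IEEE-754 float32 encoding:
    # same sign and 8-bit exponent, significand cut from 23 to 7 bits.
    return struct.unpack(">I", struct.pack(">f", x))[0] >> 16

def bf16_bits_to_f32(b: int) -> float:
    # Decompression: restore the dropped low bits as zeros.
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

x = 3.14159
rt = bf16_bits_to_f32(f32_to_bf16_bits(x))
print(rt, abs(rt - x) / x)  # relative error bounded by roughly 2**-7
```

Because bfloat16 keeps float32's exponent range, compression trades precision (about 2-3 significant decimal digits) rather than dynamic range, which is why it is a reasonable default for field snapshots.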
Recommended reading path:
- `docs/getting-started.md` for installation and the first 2D forward run
- `docs/guides/api-orientation.md` for choosing between `tide.maxwelltm`, `tide.maxwell3d`, `MaxwellTM`, and `Maxwell3D`
- `docs/guides/modeling.md` and `docs/guides/inversion.md` for forward modeling and inversion workflows
- `docs/guides/configuration.md` for storage, callbacks, backend, and CFL-related controls
- `docs/guides/limitations.md` and `docs/guides/verification.md` before enabling advanced features broadly
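The CFL-related controls exist because an FDTD time step must satisfy the Courant condition; in 2D, dt ≤ 1 / (c_max · sqrt(1/dx² + 1/dy²)), where c_max is the fastest wave speed in the model. A back-of-envelope check for the quick-start model (minimum ε_r = 4, dx = dy = 0.01 m), independent of `tide.cfl`'s exact API:

```python
import math

c0 = 299_792_458.0   # speed of light in vacuum (m/s)
eps_r_min = 4.0      # smallest permittivity => fastest wave speed
dx = dy = 0.01       # grid spacing (m)

# 2D Courant limit: dt_max = 1 / (c_max * sqrt(1/dx^2 + 1/dy^2))
c_max = c0 / math.sqrt(eps_r_min)
dt_max = 1.0 / (c_max * math.sqrt(1.0 / dx**2 + 1.0 / dy**2))
print(dt_max)  # ~4.7e-11 s, so the quick-start dt=1e-11 is comfortably stable
```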
## Testing

Run the test suite:

```bash
pytest tests/
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
This project includes code derived from Deepwave by Alan Richardson. We gratefully acknowledge the foundational work that made TIDE possible.
If you use TIDE in your research, please cite:
```bibtex
@software{tide2025,
  author = {Vcholerae1},
  title  = {TIDE: Torch-based Inversion \& Development Engine},
  year   = {2025},
  url    = {https://github.com/vcholerae1/tide}
}
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.