Draft
Changes from all commits
Commits
38 commits
7c61bf5
Updated for latest release of metatrain
HowWeiBin May 13, 2025
4423ce0
Fixed gradient not taking into account excess target
HowWeiBin May 22, 2025
4ac0877
Fixed eval function
HowWeiBin May 31, 2025
ea968a7
Fixed sampler bug
HowWeiBin Jun 2, 2025
56ed4d8
num_subtarget > 1 for LLPR
SanggyuChong Jun 2, 2025
5aba703
remove debug prints, update to metatomic
SanggyuChong Jun 2, 2025
c2aac9b
Optimized gradient calculation and now reports mean loss instead of t…
HowWeiBin Jun 3, 2025
c98f669
Fixed Training loss not using mean losses but total losses
HowWeiBin Jun 3, 2025
f2279e4
Fixed device mismatch
HowWeiBin Jun 3, 2025
bed409a
Trying to fix issue with evaluation script
HowWeiBin Jun 4, 2025
7eedbbc
Fixed validation criteria
HowWeiBin Jun 4, 2025
d6d523d
Fixed restart issue
HowWeiBin Jun 4, 2025
5a76632
Modified trainer to save validation set
HowWeiBin Jun 7, 2025
5978c42
Fixed validation set savepath
HowWeiBin Jun 7, 2025
039e9b1
Removed debug message
HowWeiBin Jun 7, 2025
ddab09f
Detach predictions for memory purposes
HowWeiBin Jun 7, 2025
b4b8256
Removed saving of validation predictions
HowWeiBin Jun 8, 2025
26557d4
account for model device, batch shift calculation in LLPR
SanggyuChong Jun 8, 2025
3d33024
Merge branch 'pet-dos' of github.com:metatensor/metatrain into pet-dos
SanggyuChong Jun 8, 2025
d3002d1
Allowed the option to not have a permanent subset
HowWeiBin Jun 8, 2025
498118f
Removed reading target during eval.py since evaluation is disabled an…
HowWeiBin Jun 9, 2025
edabf67
fix shift bug in LLPR for DOS
SanggyuChong Jun 24, 2025
330d7ba
Merge branch 'pet-dos' of github.com:metatensor/metatrain into pet-dos
SanggyuChong Jun 24, 2025
6bf6899
fix bug with device and dtype
SanggyuChong Jun 24, 2025
46416e4
bugfix for multigpu
HowWeiBin Jun 28, 2025
d1a300d
Fix code to not make model output everything
HowWeiBin Jul 4, 2025
d74f190
Further fixes to make the model output only necessary stuff
HowWeiBin Jul 4, 2025
e71d368
very rough prototyping to make LLPR ensemble work for DOS
SanggyuChong Jul 23, 2025
b520388
revamp ensemble to be part of model
SanggyuChong Aug 12, 2025
d1be1c9
bug fixes with dimensions
SanggyuChong Aug 12, 2025
d52f4ab
bug fixes to make llpr ensemble --> model actually work
SanggyuChong Aug 12, 2025
2a7c8dd
implement custom loss for prototyping
SanggyuChong Aug 13, 2025
473d605
create separate trainer file for prototyping (hopefully shouldn't aff…
SanggyuChong Aug 13, 2025
93731a4
conditional training routine that invokes recalibration with appropri…
SanggyuChong Aug 13, 2025
a076603
expose model features
SanggyuChong Sep 8, 2025
b96cb22
checkpoint compatible
SanggyuChong Sep 8, 2025
5f5d73f
Update to metatomic
HowWeiBin Oct 6, 2025
7051be6
Hotfix for finetuning from different model
HowWeiBin Oct 6, 2025
2 changes: 1 addition & 1 deletion docs/src/advanced-concepts/auxiliary-outputs.rst
Original file line number Diff line number Diff line change
@@ -86,4 +86,4 @@ features

See the
`feature output <https://docs.metatensor.org/latest/atomistic/outputs/features.html>`_
in ``metatensor.torch.atomistic``.
in ``metatomic.torch``.
2 changes: 1 addition & 1 deletion docs/src/advanced-concepts/output-naming.rst
@@ -2,7 +2,7 @@ Output naming
=============

The name and format of the outputs in ``metatrain`` are based on
those of the `<metatensor.torch.atomistic
those of the `<metatomic.torch
https://docs.metatensor.org/latest/atomistic/outputs/index.html>`_
package. An immediate example is given by the ``energy`` output.

8 changes: 4 additions & 4 deletions docs/src/dev-docs/new-architecture.rst
@@ -104,7 +104,7 @@ method.

.. code-block:: python

from metatensor.torch.atomistic import MetatensorAtomisticModel, ModelMetadata
from metatomic.torch import AtomisticModel, ModelMetadata

class ModelInterface:

@@ -146,7 +146,7 @@ method.

def export(
self, metadata: Optional[ModelMetadata] = None
) -> MetatensorAtomisticModel:
) -> AtomisticModel:
pass

Note that the ``ModelInterface`` does not necessarily inherit from
@@ -165,8 +165,8 @@ the ``architecture`` key should contain references about the general architectur
The ``export()`` method is required to transform a trained model into a standalone file
to be used in combination with molecular dynamic engines to run simulations. We provide
a helper function :py:func:`metatrain.utils.export.export` to export a torch
model to an :py:class:`MetatensorAtomisticModel
<metatensor.torch.atomistic.MetatensorAtomisticModel>`.
model to an :py:class:`AtomisticModel
<metatomic.torch.AtomisticModel>`.

Trainer class (``trainer.py``)
------------------------------
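The ``export()`` interface documented above can be illustrated with a self-contained sketch. The ``AtomisticModel`` and ``ModelMetadata`` classes below are stand-in stubs written for this example only — the real classes live in ``metatomic.torch`` and carry more state — but the shape of the interface follows the docs in this diff:

```python
# Illustrative sketch only: stub stand-ins for metatomic.torch's
# AtomisticModel and ModelMetadata, showing the export() interface shape.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelMetadata:  # stub for metatomic.torch.ModelMetadata
    name: str = "untitled"


class AtomisticModel:  # stub for metatomic.torch.AtomisticModel
    def __init__(self, module, metadata, capabilities):
        self.module = module
        self.metadata = metadata
        self.capabilities = capabilities


class ModelInterface:
    """Minimal architecture model following the documented interface."""

    def __init__(self, capabilities):
        self.capabilities = capabilities

    def eval(self):
        # real models switch off training-only behavior here
        return self

    def export(self, metadata: Optional[ModelMetadata] = None) -> AtomisticModel:
        if metadata is None:
            metadata = ModelMetadata()
        # wrap the evaluated model with its metadata and capabilities
        return AtomisticModel(self.eval(), metadata, self.capabilities)


exported = ModelInterface(capabilities={"energy": "eV"}).export()
print(type(exported).__name__)  # AtomisticModel
```

In metatrain itself, :py:func:`metatrain.utils.export.export` performs this wrapping for you.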
2 changes: 1 addition & 1 deletion docs/src/dev-docs/utils/data/systems_to_ase.rst
@@ -2,7 +2,7 @@ Converting Systems to ASE
#########################

Some machine learning models might train on ``ase.Atoms`` objects.
This module provides a function to convert a ``metatensor.torch.atomistic.System``
This module provides a function to convert a ``metatomic.torch.System``
object to an ``ase.Atoms`` object.

.. automodule:: metatrain.utils.data.system_to_ase
2 changes: 1 addition & 1 deletion docs/src/dev-docs/utils/neighbor_lists.rst
@@ -1,7 +1,7 @@
Neighbor lists
==============

Utilities to attach neighbor lists to a ``metatensor.torch.atomistic.System`` object.
Utilities to attach neighbor lists to a ``metatomic.torch.System`` object.

.. automodule:: metatrain.utils.neighbor_lists
:members:
2 changes: 1 addition & 1 deletion docs/src/getting-started/checkpoints.rst
@@ -67,7 +67,7 @@ The ``metadata.yaml`` file should have the following structure:

You can also add additional keywords like additional references to the metadata file.
The fields are the same for :class:`ModelMetadata
<metatensor.torch.atomistic.ModelMetadata>` class from metatensor.
<metatomic.torch.ModelMetadata>` class from metatomic.

Exporting remote models
-----------------------
2 changes: 1 addition & 1 deletion docs/src/getting-started/custom_dataset_conf.rst
@@ -91,7 +91,7 @@ Allows defining multiple target sections, each with a unique name.
and ``stress`` are enabled by default.
- Other target sections can also be defined, as long as they are prefixed by ``mtt::``.
For example, ``mtt::free_energy``. In general, all targets that are not standard
outputs of ``metatensor.torch.atomistic`` (see
outputs of ``metatomic.torch`` (see
https://docs.metatensor.org/latest/atomistic/outputs.html) should be prefixed by
``mtt::``.

2 changes: 1 addition & 1 deletion examples/ase/run_ase.py
@@ -36,7 +36,7 @@
import matplotlib.pyplot as plt
import numpy as np
from ase.geometry.analysis import Analysis
from metatensor.torch.atomistic.ase_calculator import MetatensorCalculator
from metatomic.torch.ase_calculator import MetatensorCalculator


# %%
2 changes: 1 addition & 1 deletion examples/programmatic/disk_dataset/disk_dataset.py
@@ -14,7 +14,7 @@
import ase.io
import torch
from metatensor.torch import Labels, TensorBlock, TensorMap
from metatensor.torch.atomistic import NeighborListOptions, systems_to_torch
from metatomic.torch import NeighborListOptions, systems_to_torch

from metatrain.utils.data import DiskDatasetWriter
from metatrain.utils.neighbor_lists import get_system_with_neighbor_lists
8 changes: 4 additions & 4 deletions examples/programmatic/llpr/llpr.py
@@ -111,8 +111,8 @@
# to compute prediction rigidity metrics, which are useful for uncertainty
# quantification and model introspection.

from metatensor.torch.atomistic import ( # noqa: E402
MetatensorAtomisticModel,
from metatomic.torch import ( # noqa: E402
AtomisticModel,
ModelMetadata,
)

@@ -127,7 +127,7 @@
# calibration/validation dataset should be used.
llpr_model.calibrate(dataloader)

exported_model = MetatensorAtomisticModel(
exported_model = AtomisticModel(
llpr_model.eval(),
ModelMetadata(),
llpr_model.capabilities,
@@ -140,7 +140,7 @@
# specific outputs from the model. In this case, we request the uncertainty in the
# atomic energy predictions.

from metatensor.torch.atomistic import ModelEvaluationOptions, ModelOutput # noqa: E402
from metatomic.torch import ModelEvaluationOptions, ModelOutput # noqa: E402


evaluation_options = ModelEvaluationOptions(
6 changes: 3 additions & 3 deletions examples/programmatic/llpr_forces/force_llpr.py
@@ -1,8 +1,8 @@
import matplotlib.pyplot as plt
import numpy as np
import torch
from metatensor.torch.atomistic import (
MetatensorAtomisticModel,
from metatomic.torch import (
AtomisticModel,
ModelEvaluationOptions,
ModelMetadata,
ModelOutput,
@@ -163,7 +163,7 @@
llpr_model.compute_inverse_covariance()
llpr_model.calibrate(valid_dataloader)

exported_model = MetatensorAtomisticModel(
exported_model = AtomisticModel(
llpr_model.eval(),
ModelMetadata(),
llpr_model.capabilities,
@@ -14,7 +14,7 @@
#

import torch
from metatensor.torch.atomistic import ModelOutput
from metatomic.torch import ModelOutput

from metatrain.experimental.nanopet import NanoPET
from metatrain.utils.architectures import get_default_hypers
2 changes: 1 addition & 1 deletion examples/zbl/dimers.py
@@ -36,7 +36,7 @@
import matplotlib.pyplot as plt
import numpy as np
import torch
from metatensor.torch.atomistic.ase_calculator import MetatensorCalculator
from metatomic.torch.ase_calculator import MetatensorCalculator


# %%
10 changes: 6 additions & 4 deletions pyproject.toml
@@ -11,11 +11,13 @@ authors = [{name = "metatrain developers"}]
# Strict version pinning to avoid regression test failing on new versions
dependencies = [
"ase",
"metatensor-learn==0.3.2",
"metatensor-operations==0.3.3",
"metatensor-torch==0.7.6",
"huggingface_hub",
"metatensor-learn >=0.3.2,<0.4",
"metatensor-operations >=0.3.3,<0.4",
"metatensor-torch >=0.7.6,<0.8",
"metatomic-torch >=0.1.2,<0.2",
"jsonschema",
"omegaconf",
"omegaconf >= 2.3.0",
"python-hostlist",
"vesin",
]
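The ``pyproject.toml`` hunk above replaces exact pins (``==0.3.2``) with bounded ranges (``>=0.3.2,<0.4``), which admit bugfix and minor releases while excluding the next breaking series. A minimal sketch of what such a range accepts — a hypothetical helper, not part of metatrain or pip's actual resolver:

```python
# Hypothetical helper illustrating ">=low,<high" style version ranges,
# as used in the updated pyproject.toml pins. Real tools use the
# `packaging` library's SpecifierSet instead.
def parse(version: str) -> tuple:
    """Turn a dot-separated numeric version into a comparable tuple."""
    return tuple(int(p) for p in version.split("."))


def in_range(version: str, low: str, high: str) -> bool:
    """True if low <= version < high."""
    return parse(low) <= parse(version) < parse(high)


# ">=0.3.2,<0.4" accepts bugfix updates, rejects the next breaking series
print(in_range("0.3.5", "0.3.2", "0.4"))  # True
print(in_range("0.4.0", "0.3.2", "0.4"))  # False
```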
50 changes: 26 additions & 24 deletions src/metatrain/cli/eval.py
@@ -9,7 +9,7 @@
import numpy as np
import torch
from metatensor.torch import Labels, TensorBlock, TensorMap
from metatensor.torch.atomistic import MetatensorAtomisticModel
from metatomic.torch import AtomisticModel
from omegaconf import DictConfig, OmegaConf

from ..utils.data import (
@@ -167,7 +167,7 @@ def _concatenate_tensormaps(


def _eval_targets(
model: Union[MetatensorAtomisticModel, torch.jit._script.RecursiveScriptModule],
model: Union[AtomisticModel, torch.jit._script.RecursiveScriptModule],
dataset: Union[Dataset, torch.utils.data.Subset],
options: Dict[str, TargetInfo],
return_predictions: bool,
@@ -220,10 +220,10 @@ def _eval_targets(
collate_fn=collate_fn,
shuffle=False,
)

# Initialize RMSE accumulator:
rmse_accumulator = RMSEAccumulator()
mae_accumulator = MAEAccumulator()
# Not initializing the accumulator
# Initialize RMSE accumulator:
# rmse_accumulator = RMSEAccumulator()
# mae_accumulator = MAEAccumulator()

# If we're returning the predictions, we need to store them:
if return_predictions:
@@ -265,7 +265,7 @@
model,
systems,
options,
is_training=False,
is_training=False,
check_consistency=check_consistency,
)

@@ -279,27 +279,29 @@
batch_targets_per_atom = average_by_num_atoms(
batch_targets, systems, per_structure_keys=[]
)
rmse_accumulator.update(batch_predictions_per_atom, batch_targets_per_atom)
mae_accumulator.update(batch_predictions_per_atom, batch_targets_per_atom)
# CHANGE: Do not calculate the loss because it currently does not support arbitrary loss functions
# rmse_accumulator.update(batch_predictions_per_atom, batch_targets_per_atom)
# mae_accumulator.update(batch_predictions_per_atom, batch_targets_per_atom)
if return_predictions:
all_predictions.append(batch_predictions)

time_taken = end_time - start_time
total_time += time_taken
timings_per_atom.append(time_taken / sum(len(system) for system in systems))

# CHANGE: Do not calculate the loss because it currently does not support arbitrary loss functions
# Finalize the metrics
rmse_values = rmse_accumulator.finalize(not_per_atom=["positions_gradients"])
mae_values = mae_accumulator.finalize(not_per_atom=["positions_gradients"])
metrics = {**rmse_values, **mae_values}

# print the RMSEs with MetricLogger
metric_logger = MetricLogger(
log_obj=logger,
dataset_info=model.capabilities(),
initial_metrics=metrics,
)
metric_logger.log(metrics)
# rmse_values = rmse_accumulator.finalize(not_per_atom=["positions_gradients"])
# mae_values = mae_accumulator.finalize(not_per_atom=["positions_gradients"])
# metrics = {**rmse_values, **mae_values}

# # print the RMSEs with MetricLogger
# metric_logger = MetricLogger(
# log_obj=logger,
# dataset_info=model.capabilities(),
# initial_metrics=metrics,
# )
# metric_logger.log(metrics)

# Log timings
timings_per_atom = np.array(timings_per_atom)
@@ -320,7 +322,7 @@


def eval_model(
model: Union[MetatensorAtomisticModel, torch.jit._script.RecursiveScriptModule],
model: Union[AtomisticModel, torch.jit._script.RecursiveScriptModule],
options: DictConfig,
output: Union[Path, str] = "output.xyz",
batch_size: int = 1,
@@ -362,9 +364,9 @@
# and we calculate RMSEs
eval_targets, eval_info_dict = read_targets(options["targets"])
else:
# in this case, we have no targets: we evaluate everything
# (but we don't/can't calculate RMSEs)
# TODO: allow the user to specify which outputs to evaluate
# in this case, we have no targets: we evaluate everything
# (but we don't/can't calculate RMSEs)
# TODO: allow the user to specify which outputs to evaluate
eval_targets = {}
eval_info_dict = {}
do_strain_grad = all(
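The ``eval.py`` hunks above comment out metatrain's ``RMSEAccumulator``/``MAEAccumulator`` because the DOS target needs a custom loss. For context, a simplified sketch of the streaming-accumulation pattern those classes implement — names and structure are assumed here, and this is much simpler than metatrain's actual implementation:

```python
import math


class RMSEAccumulatorSketch:
    """Simplified sketch of a streaming RMSE accumulator: sums squared
    errors per target across batches, then finalizes to one RMSE per
    target. Not metatrain's actual implementation."""

    def __init__(self):
        self.sq_error = {}
        self.count = {}

    def update(self, predictions: dict, targets: dict) -> None:
        # accumulate squared errors for each named target in this batch
        for name, pred in predictions.items():
            errs = [(p - t) ** 2 for p, t in zip(pred, targets[name])]
            self.sq_error[name] = self.sq_error.get(name, 0.0) + sum(errs)
            self.count[name] = self.count.get(name, 0) + len(errs)

    def finalize(self) -> dict:
        # RMSE = sqrt(mean of accumulated squared errors)
        return {
            name: math.sqrt(self.sq_error[name] / self.count[name])
            for name in self.sq_error
        }


acc = RMSEAccumulatorSketch()
acc.update({"energy": [1.0, 2.0]}, {"energy": [1.5, 2.5]})
acc.update({"energy": [3.0]}, {"energy": [3.5]})
print(acc.finalize())  # {'energy': 0.5}
```

Accumulating across batches rather than averaging per-batch RMSEs matters: it weights every sample equally even when the last batch is smaller.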
2 changes: 1 addition & 1 deletion src/metatrain/cli/export.py
@@ -5,7 +5,7 @@
from typing import Any, Optional, Union

import torch
from metatensor.torch.atomistic import ModelMetadata, is_atomistic_model
from metatomic.torch import ModelMetadata, is_atomistic_model
from omegaconf import OmegaConf

from ..utils.io import check_file_extension, load_model
10 changes: 8 additions & 2 deletions src/metatrain/cli/train.py
@@ -43,7 +43,6 @@
from .export import _has_extensions
from .formatter import CustomHelpFormatter


def _add_train_model_parser(subparser: argparse._SubParsersAction) -> None:
"""Add `train_model` parameters to an argparse (sub)-parser."""

@@ -509,9 +508,16 @@ def train_model(
mts_atomistic_model.buffers(),
)
).device
# CHANGE: metatensor does not yet support saving noncontiguous tensors (TEST)
# try:
# mts_atomistic_model.module.additive_models[0].weights['mtt::dos'] = mt.make_contiguous(mts_atomistic_model.module.additive_models[0].weights['mtt::dos'])
# except:
# print ("Failed to make DOS additive model contiguous, the target probably does not exist")


mts_atomistic_model.save(str(output_checked), collect_extensions=extensions_path)
# the model is first saved and then reloaded 1) for good practice and 2) because
# MetatensorAtomisticModel only torchscripts (makes faster) during save()
# AtomisticModel only torchscripts (makes faster) during save()

# Copy the exported model and the checkpoint also to the checkpoint directory
checkpoint_path = Path(checkpoint_dir)
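The commented-out workaround in the ``train.py`` hunk above concerns saving noncontiguous tensors. The underlying issue can be illustrated with NumPy (the real workaround would use metatensor's ``make_contiguous`` on torch tensors; this sketch only shows the general phenomenon):

```python
import numpy as np

# NumPy illustration of the contiguity issue: transposing or slicing
# produces a view over the same buffer with a noncontiguous layout,
# which some serializers reject until the data is copied into
# contiguous form.
weights = np.arange(12.0).reshape(3, 4)
transposed = weights.T  # a view: same data, noncontiguous C-order layout

print(transposed.flags["C_CONTIGUOUS"])  # False

fixed = np.ascontiguousarray(transposed)  # analogous role to make_contiguous
print(fixed.flags["C_CONTIGUOUS"])  # True
print(np.array_equal(fixed, transposed))  # True: same values, new layout
```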
8 changes: 4 additions & 4 deletions src/metatrain/deprecated/pet/model.py
@@ -3,8 +3,8 @@
import metatensor.torch
import torch
from metatensor.torch import Labels, TensorBlock, TensorMap
from metatensor.torch.atomistic import (
MetatensorAtomisticModel,
from metatomic.torch import (
AtomisticModel,
ModelCapabilities,
ModelMetadata,
ModelOutput,
@@ -274,7 +274,7 @@ def load_checkpoint(

def export(
self, metadata: Optional[ModelMetadata] = None
) -> MetatensorAtomisticModel:
) -> AtomisticModel:
dtype = next(self.parameters()).dtype
if dtype not in self.__supported_dtypes__:
raise ValueError(f"Unsupported dtype {self.dtype} for PET")
@@ -313,4 +313,4 @@ def export(

append_metadata_references(metadata, self.__default_metadata__)

return MetatensorAtomisticModel(self.eval(), metadata, capabilities)
return AtomisticModel(self.eval(), metadata, capabilities)
2 changes: 1 addition & 1 deletion src/metatrain/deprecated/pet/tests/test_exported.py
@@ -1,6 +1,6 @@
import pytest
import torch
from metatensor.torch.atomistic import (
from metatomic.torch import (
ModelCapabilities,
ModelEvaluationOptions,
ModelMetadata,
10 changes: 5 additions & 5 deletions src/metatrain/deprecated/pet/tests/test_functionality.py
@@ -5,8 +5,8 @@
import torch
from jsonschema.exceptions import ValidationError
from metatensor.torch import Labels
from metatensor.torch.atomistic import (
MetatensorAtomisticModel,
from metatomic.torch import (
AtomisticModel,
ModelCapabilities,
ModelEvaluationOptions,
ModelMetadata,
@@ -105,7 +105,7 @@ def test_prediction():
supported_devices=["cpu", "cuda"],
)

model = MetatensorAtomisticModel(model.eval(), ModelMetadata(), capabilities)
model = AtomisticModel(model.eval(), ModelMetadata(), capabilities)
model(
[system],
evaluation_options,
@@ -157,7 +157,7 @@ def test_per_atom_predictions_functionality():
supported_devices=["cpu", "cuda"],
)

model = MetatensorAtomisticModel(model.eval(), ModelMetadata(), capabilities)
model = AtomisticModel(model.eval(), ModelMetadata(), capabilities)
model(
[system],
evaluation_options,
@@ -219,7 +219,7 @@ def test_selected_atoms_functionality():
selected_atoms=selected_atoms,
)

model = MetatensorAtomisticModel(model.eval(), ModelMetadata(), capabilities)
model = AtomisticModel(model.eval(), ModelMetadata(), capabilities)
model(
[system],
evaluation_options,