
Commit 27a49a6

Render usage.sh with sphinx-gallery

1 parent fd6d518

File tree

5 files changed: +111 −103 lines

.gitignore (+1)

@@ -174,3 +174,4 @@ extensions/
 # sphinx gallery
 docs/src/examples
 *execution_times*
+qm9_reduced_100.zip

docs/generate_examples/conf.py (+5 −1)

@@ -11,22 +11,26 @@
 ROOT = os.path.realpath(os.path.join(HERE, "..", ".."))

 sphinx_gallery_conf = {
-    "filename_pattern": "/*",
+    "filename_pattern": r"/*\.py",
     "copyfile_regex": r".*\.(pt|sh|xyz|yaml)",
+    "ignore_pattern": r"train\.sh",
+    "example_extensions": {".py", ".sh"},
     "default_thumb_file": os.path.join(ROOT, "docs/src/logo/metatrain-512.png"),
     "examples_dirs": [
         os.path.join(ROOT, "examples", "ase"),
         os.path.join(ROOT, "examples", "programmatic", "llpr"),
         os.path.join(ROOT, "examples", "zbl"),
         os.path.join(ROOT, "examples", "programmatic", "use_architectures_outside"),
         os.path.join(ROOT, "examples", "programmatic", "disk_dataset"),
+        os.path.join(ROOT, "examples", "basic_usage"),
     ],
     "gallery_dirs": [
         os.path.join(ROOT, "docs", "src", "examples", "ase"),
         os.path.join(ROOT, "docs", "src", "examples", "programmatic", "llpr"),
         os.path.join(ROOT, "docs", "src", "examples", "zbl"),
         os.path.join(ROOT, "docs", "src", "examples", "programmatic", "use_architectures_outside"),
         os.path.join(ROOT, "docs", "src", "examples", "programmatic", "disk_dataset"),
+        os.path.join(ROOT, "docs", "src", "examples", "basic_usage"),
     ],
     "min_reported_time": 5,
     "matplotlib_animations": True,
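The new ``example_extensions`` entry lets sphinx-gallery pick up ``.sh`` files as gallery examples. As a sketch of the format such a script uses (a comment header rendered as text, plus ``# %%`` cell markers, as in usage.sh below) — the ``demo.sh`` name and contents here are made up for illustration:

```shell
# Sketch only: write a minimal sphinx-gallery style shell example.
# The file name demo.sh and its contents are hypothetical.
cat > demo.sh <<'EOF'
# Demo
# ====
#
# Comment blocks are rendered as text cells.

echo "code cells are executed as shell"

# %%
# A "# %%" marker starts the next cell.

echo "second cell"
EOF

# count the cell separators in the generated file
grep -c '^# %%' demo.sh
```

The `grep` call prints `1`, confirming the single cell separator in the generated file.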

docs/src/getting-started/index.rst (+1 −1)

@@ -7,7 +7,7 @@ This sections describes how to install the package, and its most basic commands.
    :maxdepth: 1

    installation
-   usage
+   ../examples/basic_usage/usage
    custom_dataset_conf
    advanced_base_config
    override

docs/src/getting-started/usage.rst (−94)

This file was deleted.

examples/basic_usage/usage.sh (+104 −7)

@@ -1,27 +1,124 @@
-#!\bin\bash
+# .. _label_basic_usage:
+#
+# Basic Usage
+# ===========
+#
+# ``metatrain`` is designed for direct usage from the command line (CLI). The
+# program is registered under the abbreviation ``mtt``. The general help of
+# ``metatrain`` can be accessed using
+#
+
+mtt --help
+
+# %%
+#
+# We now demonstrate how to ``train`` and ``evaluate`` a model from the command
+# line. For this example we use the :ref:`architecture-soap-bpnn` architecture and
+# a subset of the `QM9 dataset <https://paperswithcode.com/dataset/qm9>`_. You can
+# obtain the reduced dataset from our
+# :download:`website <../../../static/qm9/qm9_reduced_100.xyz>`.
+#
+# Training
+# --------
+#
+# To train models, ``metatrain`` uses a dynamic override strategy for your
+# training options: the default architecture options can be composed with and
+# overridden by your custom ``options.yaml`` as well as a command line override
+# grammar. For reference and reproducibility, ``metatrain`` always writes the
+# fully expanded options, including any overrides, to ``options_restart.yaml``.
+# The restart options file is written into a subfolder named with the current
+# *date* and *time* inside the ``output`` directory of your current training run.
+#
+# The sub-command to start a model training is
+#
+# .. code-block:: bash
+#
+#     mtt train
+#
+# To train a model you have to define your options. This includes the specific
+# architecture you want to use and the data, i.e. the training systems and target
+# values.
+#
+# The default model and training hyperparameters for each model are listed on its
+# corresponding documentation page. We will use these minimal options to run an
+# example training with the default hyperparameters of a SOAP-BPNN model:
+#
+# .. literalinclude:: ../../../static/qm9/options.yaml
+#    :language: yaml
+#
+# For each training run a new output directory in the format
+# ``outputs/YYYY-MM-DD/HH-MM-SS``, based on the current *date* and *time*, is
+# created. We use this output directory to store checkpoints, the restart
+# ``options_restart.yaml`` file, and the log files. To start the training, create
+# an ``options.yaml`` file in the current directory and type
+

 mtt train options.yaml

+# %%
+#
 # The function saves the final model `model.pt` to the current output folder for later
-# evaluation. An `extensions/` folder, which contains the compiled extensions for the model,
-# might also be saved depending on the architecture.
-# All command line flags of the train sub-command can be listed via
+# evaluation. An `extensions/` folder, which contains the compiled extensions for the
+# model, might also be saved depending on the architecture. All command line flags of
+# the train sub-command can be listed via
+#

 mtt train --help

+# %%
+#
+# After the training has finished, the ``mtt train`` command generates the
+# ``model.ckpt`` (final checkpoint) and ``model.pt`` (exported model) files in the
+# current directory, as well as in the ``output/YYYY-MM-DD/HH-MM-SS`` directory.
+#
+# Evaluation
+# ----------
+#
+# The sub-command to evaluate an already trained model is
+#
+# .. code-block:: bash
+#
+#     mtt eval
+#
+# Besides the trained ``model``, you will also have to provide a file containing
+# the systems and, possibly, target values for evaluation. The systems section of
+# this ``eval.yaml`` is exactly the same as for a dataset in the ``options.yaml``
+# file.
+#
+# .. literalinclude:: ../../../static/qm9/eval.yaml
+#    :language: yaml
+#
+# Note that the ``targets`` section is optional. If the ``targets`` section is
+# present, the function will calculate and report RMSE values of the predictions
+# with respect to the real values as loaded from the ``targets`` section. You can
+# run an evaluation by typing
+#
 # We now evaluate the model on the training dataset, where the first argument
-# trained model and the second an option file containing the path of the dataset for evaulation.
-# The extensions of the model, if any, can be specified via the `-e` flag.
+# specifies the trained model and the second an options file containing the path
+# of the dataset for evaluation. The extensions of the model, if any, can be
+# specified via the ``-e`` flag.

 mtt eval model.pt eval.yaml -e extensions/

+# %%
+#
 # The evaluation command predicts those properties the model was trained against; here
-# "U0". The predictions together with the systems have been written in a file named
+# ``"U0"``. The predictions together with the systems have been written in a file named
 # ``output.xyz`` in the current directory. The written file starts with the following
 # lines

 head -n 20 output.xyz

+# %%
+#
 # All command line flags of the eval sub-command can be listed via

 mtt eval --help
+
+# %%
+#
+# An important parameter of ``mtt eval`` is the ``-b`` (or ``--batch-size``)
+# option, which allows you to specify the batch size for the evaluation.
+#
+# Molecular simulations
+# ---------------------
+#
+# The trained model can also be used to run molecular simulations.
+# You can find out how in the :ref:`tutorials` section.
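The training step in the rendered page can be tried locally. A sketch of a minimal ``options.yaml`` follows — the key names (``architecture``, ``training_set``, ``systems``, ``targets``) are assumptions for illustration and may differ from the file shipped under ``static/qm9``:

```shell
# Hypothetical minimal training options; key names are assumptions, not
# copied from the repository's static/qm9/options.yaml.
cat > options.yaml <<'EOF'
architecture:
  name: soap_bpnn
training_set:
  systems:
    read_from: qm9_reduced_100.xyz
  targets:
    energy:
      key: U0
EOF

# with metatrain installed, training would then start with:
#   mtt train options.yaml
grep -c 'soap_bpnn' options.yaml
```

The final `grep` prints `1`, verifying that the architecture name landed in the file.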

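Similarly, a sketch of the ``eval.yaml`` described above: its ``systems`` section mirrors the dataset section of ``options.yaml``, and ``targets`` is optional (it enables RMSE reporting). The exact keys are again assumptions:

```shell
# Hypothetical evaluation options; key names are assumptions. The optional
# targets section enables RMSE reporting against the reference values.
cat > eval.yaml <<'EOF'
systems:
  read_from: qm9_reduced_100.xyz
targets:
  energy:
    key: U0
EOF

# with a trained model, evaluation would then run as:
#   mtt eval model.pt eval.yaml -e extensions/
grep -q 'targets' eval.yaml && echo "eval.yaml written"
```

With a real checkpoint in place, the commented ``mtt eval`` line is the invocation shown in the diff above.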