This project is used for training PVNet and running PVNet on live data.
PVNet is a multi-modal late-fusion model for predicting renewable energy generation from weather data. The NWP (Numerical Weather Prediction) and satellite data are sent through a neural network which encodes them down to 1D intermediate representations. These are concatenated together with recent generation, the calculated solar coordinates (azimuth and elevation) and the location ID which has been put through an embedding layer. This 1D concatenated feature vector is put through an output network which outputs predictions of the future energy yield.
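As a toy illustration of this late-fusion pattern (this is not the actual PVNet architecture; all layer sizes, names, and input shapes below are made up for the sketch):

```python
import torch
import torch.nn as nn

class LateFusionSketch(nn.Module):
    """Toy late-fusion model: encode each modality down to a 1D vector,
    concatenate with auxiliary features, and regress future yield."""

    def __init__(self, n_locations=318, hidden=64, forecast_len=16):
        super().__init__()
        # Per-modality encoders collapse (channels, time, H, W) inputs to 1D
        self.nwp_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(hidden), nn.ReLU())
        self.sat_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(hidden), nn.ReLU())
        self.loc_embed = nn.Embedding(n_locations, 16)  # location ID embedding
        # Output network maps the concatenated feature vector to the forecast
        self.head = nn.Sequential(nn.LazyLinear(hidden), nn.ReLU(), nn.Linear(hidden, forecast_len))

    def forward(self, nwp, sat, recent_gen, sun_coords, loc_id):
        feats = torch.cat([
            self.nwp_encoder(nwp),
            self.sat_encoder(sat),
            recent_gen,              # recent generation history (already 1D)
            sun_coords,              # solar azimuth & elevation per time step
            self.loc_embed(loc_id),  # learned location embedding
        ], dim=1)
        return self.head(feats)

model = LateFusionSketch()
out = model(
    torch.randn(4, 2, 4, 8, 8),    # NWP: (batch, channels, time, H, W)
    torch.randn(4, 11, 4, 8, 8),   # satellite
    torch.randn(4, 12),            # recent generation
    torch.randn(4, 32),            # azimuth/elevation features
    torch.randint(0, 318, (4,)),   # location IDs
)
print(out.shape)  # torch.Size([4, 16])
```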
Our paper based on this repo was accepted into the Tackling Climate Change with Machine Learning workshop at ICLR 2024 and can be viewed here.
Some more structured notes on experiments we have performed with PVNet are here.
```bash
git clone git@github.com:openclimatefix/PVNet.git
cd PVNet
pip install .
```
The commit history is extensive. To save download time, use a depth of 1:

```bash
git clone --depth 1 git@github.com:openclimatefix/PVNet.git
```

This means only the latest commit and its associated files will be downloaded.
Next, in the PVNet repo, install PVNet as an editable package:

```bash
pip install -e .
```

Or, to install with development dependencies:

```bash
pip install ".[dev]"
```
Before running any code in PVNet, copy the example configuration to a `configs` directory:

```bash
cp -r configs.example configs
```

You will be making local amendments to these configs. See the README in `configs.example` for more info.
At a minimum, to create data samples and run PVNet, you will need to supply paths to NWP and GSP data. PV data can also be used. Suggested locations for downloading such datasets are listed below:
GSP (Grid Supply Point) - Regional PV generation data
The University of Sheffield provides API access to download this data:
https://www.solar.sheffield.ac.uk/api/
Documentation for querying generation data aggregated by GSP region can be found here: https://docs.google.com/document/d/e/2PACX-1vSDFb-6dJ2kIFZnsl-pBQvcH4inNQCA4lYL9cwo80bEHQeTK8fONLOgDf6Wm4ze_fxonqK3EVBVoAIz/pub#h.9d97iox3wzmd
NWP (Numerical weather predictions)
OCF maintains a Zarr formatted version of the German Weather Service's (DWD) ICON-EU NWP model, which includes the UK, here: https://huggingface.co/datasets/openclimatefix/dwd-icon-eu
PV
OCF maintains a dataset of PV generation from 1311 private PV installations here: https://huggingface.co/datasets/openclimatefix/uk_pv
Outside the PVNet repo, clone the ocf-data-sampler repo (https://github.com/openclimatefix/ocf-data-sampler) and, after exiting the conda environment created for PVNet, create a new one:

```bash
git clone git@github.com:openclimatefix/ocf-data-sampler.git
conda create -n ocf-data-sampler python=3.11
```

Then, inside the ocf-data-sampler repo, install its dependencies:

```bash
pip install .
```

Then exit this environment, re-enter the pvnet conda environment, and install ocf-data-sampler in editable mode (`-e`). This links the package directly to the source code in the ocf-data-sampler repo:

```bash
pip install -e <PATH-TO-ocf-data-sampler-REPO>
```
If your local version of ocf-data-sampler is more recent than the version specified in PVNet, it is not guaranteed to work properly with this library.
PVNet now trains and validates directly from `streamed_samples` (i.e. no pre-saving to disk).
Make sure you have copied the example configs (as described above):

```bash
cp -r configs.example configs
```
We will use the following example config file to describe your data sources: `PVNet/configs/datamodule/configuration/example_configuration.yaml`. Ensure that the file paths in `example_configuration.yaml` are set to the correct locations: search for `PLACEHOLDER` to find where to input the locations of the files. Delete or comment out the parts for data you are not using.
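As a quick sanity check before running, you can scan a configuration file for any remaining `PLACEHOLDER` entries. This is a minimal sketch; the demo below writes a small stand-in file, but in practice you would point it at your copy of `example_configuration.yaml`:

```python
from pathlib import Path

def find_placeholders(config_path):
    """Return the line numbers of any PLACEHOLDER entries left in a config file."""
    lines = Path(config_path).read_text().splitlines()
    return [i + 1 for i, line in enumerate(lines) if "PLACEHOLDER" in line]

# Demo with a stand-in file (in practice, use
# configs/datamodule/configuration/example_configuration.yaml)
demo = Path("demo_configuration.yaml")
demo.write_text("nwp:\n  zarr_path: PLACEHOLDER\ngsp:\n  zarr_path: /data/gsp.zarr\n")
print(find_placeholders(demo))  # [2]
demo.unlink()
```

An empty list means every placeholder has been filled in.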
At run time, the datamodule config `PVNet/configs/datamodule/streamed_samples.yaml` points to your chosen configuration file:

```yaml
configuration: "/FULL-PATH-TO-REPO/PVNet/configs/datamodule/configuration/example_configuration.yaml"
```

You can also update the train/val/test time ranges here to match the period you have access to.
If downloading private data from a GCP bucket, make sure to authenticate gcloud (the public satellite data does not need authentication):

```bash
gcloud auth login
```
You can provide multiple storage locations as a list. For example:

```yaml
satellite:
  zarr_path:
    - "gs://public-datasets-eumetsat-solar-forecasting/satellite/EUMETSAT/SEVIRI_RSS/v4/2020_nonhrv.zarr"
    - "gs://public-datasets-eumetsat-solar-forecasting/satellite/EUMETSAT/SEVIRI_RSS/v4/2021_nonhrv.zarr"
```
`ocf-data-sampler` is currently set up to use 11 channels from the satellite data (the 12th, HRV, is not used).
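For reference, the SEVIRI instrument has 12 channels, of which HRV is the one excluded. The names below are the standard SEVIRI channel identifiers; whether this exact ordering matches what `ocf-data-sampler` uses internally is an assumption:

```python
# The 12 SEVIRI channels; dropping HRV leaves the 11 non-HRV channels used here
SEVIRI_CHANNELS = [
    "HRV",
    "IR_016", "IR_039", "IR_087", "IR_097", "IR_108", "IR_120", "IR_134",
    "VIS006", "VIS008", "WV_062", "WV_073",
]

non_hrv = [c for c in SEVIRI_CHANNELS if c != "HRV"]
print(len(non_hrv))  # 11
```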
How PVNet is run is determined by the configuration files. The example configs in `PVNet/configs.example` work with streamed samples via `datamodule/streamed_samples.yaml`.
Update the following before training:

- In `configs/model/late_fusion.yaml`:
  - Update the list of encoders to match the data sources you are using. For different NWP sources, keep the same structure but ensure:
    - `in_channels`: the number of variables your NWP source supplies
    - `image_size_pixels`: spatial crop matching your NWP resolution and the settings in your datamodule configuration (unless you coarsened, e.g. for ECMWF)
- In `configs/trainer/default.yaml`:
  - Set `accelerator: 0` if running on a system without a supported GPU
- In `configs/datamodule/streamed_samples.yaml`:
  - Point `configuration:` to your local `example_configuration.yaml` (or your custom one)
  - Adjust the train/val/test time ranges to your available data
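For illustration, an NWP encoder entry might look like the fragment below. Only `in_channels` and `image_size_pixels` are the fields referenced above; the surrounding key names are hypothetical, so check the real `configs/model/late_fusion.yaml` for the actual structure:

```yaml
# Hypothetical fragment -- illustrative key names only
nwp_encoder:
  in_channels: 2         # number of NWP variables your source supplies
  image_size_pixels: 24  # spatial crop, matching your datamodule config
```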
If you create custom config files, update the `defaults` list in the main `./configs/config.yaml`:

```yaml
defaults:
  - trainer: default.yaml
  - model: late_fusion.yaml
  - datamodule: streamed_samples.yaml
  - callbacks: null
  - experiment: null
  - hparams_search: null
  - hydra: default.yaml
```
Now train PVNet:

```bash
python run.py
```

You can override any setting with Hydra, e.g.:

```bash
python run.py datamodule=streamed_samples datamodule.configuration="/FULL-PATH/PVNet/configs/datamodule/configuration/example_configuration.yaml"
```
If you have successfully trained a PVNet model and have a saved checkpoint, you can use it to create a backtest, i.e. forecasts on historical data to evaluate forecast accuracy/skill. This can be done by running one of the scripts in this repo, such as the UK GSP backtest script or the PV site backtest script; further info on how to run these is in each backtest file.
Run the tests with:

```bash
python -m pytest tests
```
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!