
Commit c0a1840
Merged in mikael/io (pull request #4)
Mikael/io
2 parents 9eb410f + 0bcf17b

19 files changed: +510 −298 lines
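
The thrust of this merge is a package rename: ``mpi4py_fft.utilities`` becomes ``mpi4py_fft.io``. As the doc diffs below show, the visible effect for downstream code is an import-path change only::

    # before:  from mpi4py_fft.utilities import generate_xdmf
    from mpi4py_fft.io import generate_xdmf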

docs/source/howtocite.rst

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Please cite mpi4py-fft using
    year = {{2019}},
    title = {{Fast parallel multidimensional FFT using advanced MPI}},
    journal = {{Journal of Parallel and Distributed Computing}},
-   volume = {{in press}}
+   doi = {10.1016/j.jpdc.2019.02.006}
    }
 @electronic{mpi4py-fft,
    author = {{Lisandro Dalcin and Mikael Mortensen}},

docs/source/io.rst

Lines changed: 72 additions & 46 deletions
@@ -4,7 +4,7 @@ Storing datafiles
 mpi4py-fft works with regular Numpy arrays. However, since arrays in parallel
 can become very large, and the arrays live on multiple processors, we require
 parallel IO capabilities that goes beyond Numpys regular methods.
-In the :mod:`mpi4py_fft.utilities` module there are two helper classes for dumping
+In the :mod:`mpi4py_fft.io` module there are two helper classes for dumping
 dataarrays to either `HDF5 <https://www.hdf5.org>`_ or
 `NetCDF <https://www.unidata.ucar.edu/software/netcdf/>`_ format:

@@ -17,56 +17,74 @@ reads data in parallel. A simple example of usage is::
     from mpi4py import MPI
     import numpy as np
     from mpi4py_fft import PFFT, HDF5File, NCFile, newDistArray
-
     N = (128, 256, 512)
     T = PFFT(MPI.COMM_WORLD, N)
     u = newDistArray(T, forward_output=False)
     v = newDistArray(T, forward_output=False, val=2)
-    u[:] = np.random.random(N)
-
+    u[:] = np.random.random(u.shape)
+    # Store by first creating output files
     fields = {'u': [u], 'v': [v]}
-    f0 = HDF5File('h5test.h5', T)
-    f1 = NCFile('nctest.nc', T)
+    f0 = HDF5File('h5test.h5', mode='w')
+    f1 = NCFile('nctest.nc', mode='w')
     f0.write(0, fields)
     f1.write(0, fields)
     v[:] = 3
     f0.write(1, fields)
     f1.write(1, fields)

-Note that we are creating two datafiles ``h5test.h5`` and ``nctest.nc``,
+Note that we are here creating two datafiles ``h5test.h5`` and ``nctest.nc``,
 for storing in HDF5 or NetCDF4 formats respectively. Normally, one would be
 satisfied using only one format, so this is only for illustration. We store
-the fields ``u`` and ``v`` using method ``write`` on two different occasions,
-so the datafiles will contain two snapshots of each field ``u`` and ``v``.
+the fields ``u`` and ``v`` on three different occasions,
+so the datafiles will contain three snapshots of each field ``u`` and ``v``.
+
+Also note that an alternative and perhaps simpler approach is to just use
+the ``write`` method of each distributed array::
+
+    u.write('h5test.h5', 'u', step=2)
+    v.write('h5test.h5', 'v', step=2)
+    u.write('nctest.nc', 'u', step=2)
+    v.write('nctest.nc', 'v', step=2)

-The stored dataarrays can be retrieved later on::
+The two different approaches can be used on the same output files.
+
+The stored dataarrays can also be retrieved later on::

-    f0 = HDF5File('h5test.h5', T, mode='r')
-    f1 = NCFile('nctest.nc', T, mode='r')
     u0 = newDistArray(T, forward_output=False)
     u1 = newDistArray(T, forward_output=False)
-    f0.read(u0, 'u', 0)
-    f0.read(u1, 'u', 1)
-    f1.read(u0, 'u', 0)
-    f1.read(u1, 'u', 1)
+    u0.read('h5test.h5', 'u', 0)
+    u1.read('h5test.h5', 'u', 1)
+    # or alternatively for netcdf
+    #u0.read('nctest.nc', 'u', 0)
+    #u1.read('nctest.nc', 'u', 1)

 Note that one does not have to use the same number of processors when
 retrieving the data as when they were stored.

 It is also possible to store only parts of the, potentially large, arrays.
-Any chosen slice may be stored, using a *global* view of the arrays::
+Any chosen slice may be stored, using a *global* view of the arrays. It is
+possible to store both complete fields and slices in one single call by
+using the following approach::

-    f2 = HDF5File('variousfields.h5', T, mode='w')
+    f2 = HDF5File('variousfields.h5', mode='w')
     fields = {'u': [u,
                     (u, [slice(None), slice(None), 4]),
                     (u, [5, 5, slice(None)])],
               'v': [v,
                     (v, [slice(None), 6, slice(None)])]}
     f2.write(0, fields)
     f2.write(1, fields)
-    f2.write(2, fields)

-This will lead to an hdf5-file with groups::
+Alternatively, one can use the write method of each field with the ``global_slice``
+keyword argument::
+
+    u.write('variousfields.h5', 'u', 2)
+    u.write('variousfields.h5', 'u', 2, global_slice=[slice(None), slice(None), 4])
+    u.write('variousfields.h5', 'u', 2, global_slice=[5, 5, slice(None)])
+    v.write('variousfields.h5', 'v', 2)
+    v.write('variousfields.h5', 'v', 2, global_slice=[slice(None), 6, slice(None)])
+
+In the end this will lead to an hdf5-file with groups::

 variousfields.h5/
 ├─ u/
@@ -80,41 +98,49 @@ This will lead to an hdf5-file with groups::
 |  |  ├─ 0
 |  |  ├─ 1
 |  |  └─ 2
-|  └─ 3D/
-|     ├─ 0
-|     ├─ 1
-|     └─ 2
-├─ v/
-|  ├─ 2D/
-|  |  └─ slice_6_slice/
-|  |     ├─ 0
-|  |     ├─ 1
-|  |     └─ 2
-|  └─ 3D/
-|     ├─ 0
-|     ├─ 1
-|     └─ 2
-└─ mesh/
-   ├─ x0
-   ├─ x1
-   └─ x2
-
-Note that a mesh is stored along with all the data. This mesh can be given in
-two different ways when creating the datafiles:
+|  ├─ 3D/
+|  |  ├─ 0
+|  |  ├─ 1
+|  |  └─ 2
+|  └─ mesh/
+|     ├─ x0
+|     ├─ x1
+|     └─ x2
+└─ v/
+   ├─ 2D/
+   |  └─ slice_6_slice/
+   |     ├─ 0
+   |     ├─ 1
+   |     └─ 2
+   ├─ 3D/
+   |  ├─ 0
+   |  ├─ 1
+   |  └─ 2
+   └─ mesh/
+      ├─ x0
+      ├─ x1
+      └─ x2
+
+Note that a mesh is stored along with each group of data. This mesh can be
+given in two different ways when creating the datafiles:

 1) A sequence of 2-tuples, where each 2-tuple contains the (origin, length)
    of the domain along its dimension. For example, a uniform mesh that
    originates from the origin, with lengths :math:`\pi, 2\pi, 3\pi`, can be
-   given as::
+   given when creating the output file as::
+
+       f0 = HDF5File('filename.h5', domain=((0, pi), (0, 2*np.pi), (0, 3*np.pi)))
+
+   or, using the write method of the distributed array:

-       f0 = HDF5File('filename.h5', T, domain=((0, pi), (0, 2*np.pi), (0, 3*np.pi)))
+       u.write('filename.h5', 'u', 0, domain=((0, pi), (0, 2*np.pi), (0, 3*np.pi)))

 2) A sequence of arrays giving the coordinates for each dimension. For example::

        d = (np.arange(N[0], dtype=np.float)*1*np.pi/N[0],
             np.arange(N[1], dtype=np.float)*2*np.pi/N[1],
             np.arange(N[2], dtype=np.float)*2*np.pi/N[2])
-       f0 = HDF5File('filename.h5', T, domain=d)
+       f0 = HDF5File('filename.h5', domain=d)

 With NetCDF4 the layout is somewhat different. For ``variousfields`` above,
 if we were using :class:`.NCFile` instead of :class:`.HDF5File`,
@@ -147,9 +173,9 @@ opened with `Visit <https://www.visitusers.org>`_.

 To view the HDF5-files we first need to generate some light-weight *xdmf*-files that can
 be understood by both Paraview and Visit. To generate such files, simply throw the
-module :mod:`.utilities.generate_xdmf` on the HDF5-files::
+module :mod:`.io.generate_xdmf` on the HDF5-files::

-    from mpi4py_fft.utilities import generate_xdmf
+    from mpi4py_fft.io import generate_xdmf
     generate_xdmf('variousfields.h5')

 This will create a number of xdmf-files, one for each group that contains 2D
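
A minimal end-to-end sketch of the workflow documented above — write a distributed array, read it back, generate xdmf for visualization — assuming the ``newDistArray``, ``write``/``read`` and ``generate_xdmf`` APIs exactly as they appear in this diff, with a hypothetical file name ``roundtrip.h5``::

    from mpi4py import MPI
    import numpy as np
    from mpi4py_fft import PFFT, newDistArray
    from mpi4py_fft.io import generate_xdmf

    N = (128, 256, 512)
    T = PFFT(MPI.COMM_WORLD, N)
    u = newDistArray(T, forward_output=False)
    u[:] = np.random.random(u.shape)
    u.write('roundtrip.h5', 'u', step=0)    # parallel write of snapshot 0
    u0 = newDistArray(T, forward_output=False)
    u0.read('roundtrip.h5', 'u', 0)         # read back; rank count may differ
    assert np.allclose(u0, u)
    if MPI.COMM_WORLD.Get_rank() == 0:
        generate_xdmf('roundtrip.h5')       # light-weight xdmf for Paraview/Visit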
Lines changed: 10 additions & 10 deletions
@@ -1,45 +1,45 @@
-mpi4py_fft.utilities package
+mpi4py_fft.io package
 =============================

 Submodules
 ----------

-mpi4py_fft.utilities.generate_xdmf module
+mpi4py_fft.io.generate_xdmf module
 -----------------------------------------

-.. automodule:: mpi4py_fft.utilities.generate_xdmf
+.. automodule:: mpi4py_fft.io.generate_xdmf
     :members:
     :undoc-members:
     :show-inheritance:

-mpi4py_fft.utilities.h5py_file module
+mpi4py_fft.io.h5py_file module
 -------------------------------------

-.. automodule:: mpi4py_fft.utilities.h5py_file
+.. automodule:: mpi4py_fft.io.h5py_file
     :members:
     :undoc-members:
     :show-inheritance:

-mpi4py_fft.utilities.nc_file module
+mpi4py_fft.io.nc_file module
 -----------------------------------

-.. automodule:: mpi4py_fft.utilities.nc_file
+.. automodule:: mpi4py_fft.io.nc_file
     :members:
     :undoc-members:
     :show-inheritance:

-mpi4py_fft.utilities.file_base module
+mpi4py_fft.io.file_base module
 -------------------------------------

-.. automodule:: mpi4py_fft.utilities.file_base
+.. automodule:: mpi4py_fft.io.file_base
     :members:
     :undoc-members:
     :show-inheritance:

 Module contents
 ---------------

-.. automodule:: mpi4py_fft.utilities
+.. automodule:: mpi4py_fft.io
     :members:
     :undoc-members:
     :show-inheritance:

docs/source/mpi4py_fft.rst

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ Subpackages
 .. toctree::

    mpi4py_fft.fftw
-   mpi4py_fft.utilities
+   mpi4py_fft.io


 Submodules

examples/darray.py

Lines changed: 4 additions & 4 deletions
@@ -50,7 +50,7 @@
 z[:] = MPI.COMM_WORLD.Get_rank()
 g0 = z.get((0, slice(None), 0))
 z2 = z.redistribute(2)
-z = z2.redistribute(darray=z)
+z = z2.redistribute(out=z)
 g1 = z.get((0, slice(None), 0))
 assert np.all(g0 == g1)
 s0 = MPI.COMM_WORLD.reduce(np.linalg.norm(z)**2)
@@ -69,14 +69,14 @@
 if MPI.COMM_WORLD.Get_rank() == 0:
     assert abs(s0-s1) < 1e-12

-z1 = z0.redistribute(darray=z1)
-z0 = z1.redistribute(darray=z0)
+z1 = z0.redistribute(out=z1)
+z0 = z1.redistribute(out=z0)

 N = (6, 6, 6, 6, 6)
 m0 = DistArray(N, dtype=float, alignment=2)
 m0[:] = MPI.COMM_WORLD.Get_rank()
 m1 = m0.redistribute(4)
-m0 = m1.redistribute(darray=m0)
+m0 = m1.redistribute(out=m0)
 s0 = MPI.COMM_WORLD.reduce(np.linalg.norm(m0)**2)
 s1 = MPI.COMM_WORLD.reduce(np.linalg.norm(m1)**2)
 if MPI.COMM_WORLD.Get_rank() == 0:
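
The updates above track a keyword rename in ``redistribute``, from ``darray=`` to ``out=``, for passing a pre-allocated destination array. A short sketch of the renamed call, assuming the ``DistArray`` API as used in this example::

    import numpy as np
    from mpi4py import MPI
    from mpi4py_fft import DistArray

    N = (8, 8, 8)
    a = DistArray(N, dtype=float, alignment=0)  # aligned in axis 0
    a[:] = MPI.COMM_WORLD.Get_rank()
    b = a.redistribute(2)        # realign in axis 2; allocates a new array
    a = b.redistribute(out=a)    # realign back, reusing the existing array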

examples/transforms.py

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
 import functools
 import numpy as np
 from mpi4py import MPI
-from mpi4py_fft import PFFT, DistArray
+from mpi4py_fft import PFFT, newDistArray
 from mpi4py_fft.fftw import dctn, idctn

 # Set global size of the computational box
@@ -17,16 +17,16 @@

 assert fft.axes == pfft.axes

-u = DistArray(pfft=fft, forward_output=False)
+u = newDistArray(fft, forward_output=False)
 u[:] = np.random.random(u.shape).astype(u.dtype)

-u_hat = DistArray(pfft=fft, forward_output=True)
+u_hat = newDistArray(fft, forward_output=True)
 u_hat = fft.forward(u, u_hat)
 uj = np.zeros_like(u)
 uj = fft.backward(u_hat, uj)
 assert np.allclose(uj, u)

-u_padded = DistArray(pfft=pfft, forward_output=False)
+u_padded = newDistArray(pfft, forward_output=False)
 uc = u_hat.copy()
 u_padded = pfft.backward(u_hat, u_padded)
 u_hat = pfft.forward(u_padded, u_hat)
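
Here the example trades direct ``DistArray(pfft=...)`` construction for the ``newDistArray`` helper, which builds an array with the shape, dtype and alignment required by a given ``PFFT`` plan. A minimal sketch, assuming ``newDistArray`` behaves as used above::

    from mpi4py import MPI
    import numpy as np
    from mpi4py_fft import PFFT, newDistArray

    fft = PFFT(MPI.COMM_WORLD, (32, 32, 32))
    u = newDistArray(fft, forward_output=False)     # real-space layout
    u[:] = np.random.random(u.shape).astype(u.dtype)
    u_hat = newDistArray(fft, forward_output=True)  # spectral-space layout
    u_hat = fft.forward(u, u_hat)
    v = fft.backward(u_hat, np.zeros_like(u))       # round trip recovers u
    assert np.allclose(v, u)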

mpi4py_fft/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@
 For more information, see `documentation <https://mpi4py-fft.readthedocs.io>`_.

 """
-__version__ = '2.0.0'
+__version__ = '2.0.1'
 __author__ = 'Lisandro Dalcin and Mikael Mortensen'

 from .distarray import DistArray, newDistArray, Function
