`generated quantities` is embarrassingly parallel over sampling iterations, but the current implementation of parallel compute in the cmdstanr method `generate_quantities()` is limited to the same `chains` and `threads_per_chain` arguments that are useful in a `reduce_sum` calculation.

Here's how I've been changing its use to employ as many cores as are available:
1. Repeat `posterior::split_chains()` on the fitted object's draws until the number of chains equals the number of cores you want to use.
2. In `generate_quantities()`, set `parallel_chains = nchains(x)` with `threads_per_chain = 1` (if threading is only used for `reduce_sum`). For example:
```r
library(cmdstanr)
library(posterior)

# pull draws and split chains until there is one per core;
# each call to split_chains() doubles the number of chains
x <- split_chains(f$draws())
x <- split_chains(x)
# ... keep splitting until you have enough for each core

# uncomment the generated quantities block and recompile;
# cpp_options aren't always needed, but shown here for when threading is used
m <- cmdstan_model('fit.stan', cpp_options = list(stan_threads = TRUE))

# use the new draws object
q <- m$generate_quantities(
  fitted_params = x,
  data = dat,
  parallel_chains = nchains(x),
  threads_per_chain = 1
)
```
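The repeated `split_chains()` calls above could be wrapped in a small helper loop. This is only a sketch: `split_to_cores` is a hypothetical name, and it assumes each `split_chains()` call halves the iterations per chain and doubles the chain count, so the result may overshoot `cores` when the target is not a power-of-two multiple of the original chain count.

```r
library(posterior)

# hypothetical helper: keep splitting until there are at least `cores` chains
split_to_cores <- function(draws, cores) {
  while (nchains(draws) < cores) {
    draws <- split_chains(draws)  # doubles the number of chains
  }
  draws
}

# e.g., x <- split_to_cores(f$draws(), parallel::detectCores())
```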
Since cmdstanr already requires posterior, I thought it might be possible to bake something like the above into the package, to make better use of parallel compute natively without manual splitting.

Otherwise, maybe this will serve as a tip. :)
This is probably a better issue to raise against CmdStan itself, since changes to parallelism will be much more efficient at the C++ level than in the R wrapper (and will be more widely beneficial).
Note that this is morally true for a lot of use cases, but care must be taken when you want reproducibility.
The chain-level parallelism provided by current CmdStan is such that a given seed will always produce the same results for a given program, regardless of the threading environment.
I believe for the generate_quantities method specifically, it is even currently the case that standalone GQ will produce the same results as if you had originally run the full model with this GQ block, if you use the same seed.
Alternatively, if this doesn't make it into CmdStan itself, another option could be to demonstrate this technique in a Stan case study or CmdStanR vignette. I can imagine people finding it useful.