Compositional sampling diffusion #572
base: dev
Conversation
Hi Jonas, this is great! One general design question that I would like to discuss is whether to add the new capabilities to the existing classes, or to inherit from the existing classes and add the new methods there, e.g., as in …
Where can I see examples of its use, and how would it alternatively look if the structure were different?
So at the moment, the compositional part is only relevant during inference. You train a diffusion model, and then you can do the following:
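(A minimal sketch of what this could look like; the approximator setup, the condition key, and the `num_samples` value are illustrative assumptions, not verbatim PR code.)

```python
import numpy as np

# Assume `approximator` is a ContinuousApproximator whose inference network
# is a DiffusionModel, already trained as usual on single-condition data.

# Compositional conditions carry one extra axis for the individual conditions;
# this PR uses the shape convention (n_datasets, n_conditions, ...).
conditions = {"summaries": np.random.normal(size=(8, 5, 10))}

samples = approximator.compositional_sample(
    num_samples=500,
    conditions=conditions,
)
# samples: arrays with shape (n_datasets, num_samples, ...) per inference variable
```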
This implementation is based on the compositional approach described here. Defining …
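If this is the usual compositional score decomposition (an assumption about the linked reference), then for $n$ conditions $y_1, \dots, y_n$ and parameters $\theta$,

$$
\nabla_\theta \log p(\theta \mid y_{1:n}) \;=\; \sum_{i=1}^{n} \nabla_\theta \log p(\theta \mid y_i) \;-\; (n-1)\,\nabla_\theta \log p(\theta),
$$

which is why the sampler needs both the individual posterior scores and the score of the prior.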
Thank you, that makes sense. Is there any practical use case where we would want to use the same diffusion model for both standard and compositional sampling?
I think the usual case is that you know from the beginning that you want a compositional model. Only in rare cases, where you get new data after you have trained a network, might you consider switching from standard diffusion to compositional diffusion. However, at the moment a …
So you would suggest a new diffusion model class, but not a new approximator class. I would personally be fine with that. That said, is there anything else beyond the …
@arrjon Can you post a minimal interface example for the latest version (model definition and sampling) to discuss with the others?
This pull request introduces compositional sampling support to the BayesFlow framework, enabling diffusion models to handle multiple compositional conditions efficiently. The main changes span the continuous approximator, diffusion model, and inference network modules, adding new methods and refactoring existing ones to support compositional structures in sampling, inference, and diffusion processes.
Larger changes include:

- Added a `compositional_sample` method to `ContinuousApproximator`, which generates samples with compositional structure and handles flattening, reshaping, and prior score computation for multiple compositional conditions. A supporting internal method `_compositional_sample` was also introduced.
- In `DiffusionModel`, implemented compositional diffusion support, including:
  - `compositional_bridge` and `compositional_velocity` methods for compositional score calculation;
  - a `_compute_individual_scores` helper for handling multiple compositional conditions;
  - an `_inverse_compositional` method for inverse compositional diffusion sampling.

The idea is that the workflow now has the method `compositional_sample`, which expects conditions in the form `(n_datasets, n_conditions, ...)`. Then we can perform compositional sampling with diffusion models.
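To make the shape convention concrete, here is a purely illustrative sketch of the flattening and reshaping mentioned above (not the PR's actual code):

```python
import numpy as np

n_datasets, n_conditions, cond_dim = 8, 5, 10
conditions = np.random.normal(size=(n_datasets, n_conditions, cond_dim))

# Flatten the compositional axes into one batch axis for the network pass ...
flat = conditions.reshape(n_datasets * n_conditions, cond_dim)

# ... evaluate the score network on `flat` here ...

# ... and restore the compositional structure afterwards.
per_condition = flat.reshape(n_datasets, n_conditions, cond_dim)
```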
`compositional_sample` allows setting a `mini_batch_size` for memory-efficient computation of the compositional score. This does not work with the `jax` backend, however, since `jax` does not allow stochasticity in its integrators that cannot be precomputed. We could support only fixed step sizes here?

To compute the compositional score we need access to the score of the prior. Here we need to handle the adapter carefully so that we compute the correct score. In the current draft, I am not sure I computed the prior score correctly. Some ideas would be great: currently it fails for `jax` because the adapter converts data to `numpy` and back, but for `torch` it works.
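For discussion, here is a sketch of the mini-batched compositional score in plain numpy. The helper names are hypothetical; it assumes the decomposition sketched above and ignores any time-dependent correction a bridge like `compositional_bridge` might apply. The random subset drawn per call is exactly the kind of integrator-side stochasticity that `jax` cannot trace:

```python
import numpy as np

def compositional_score(individual_score_fn, prior_score_fn, theta, conditions, t,
                        mini_batch_size=None, rng=None):
    """Sketch: s(theta, t) = sum_i s_i(theta | y_i, t) - (n - 1) * s_prior(theta, t),
    optionally estimating the sum over conditions from a random subset."""
    n_conditions = conditions.shape[1]
    if mini_batch_size is not None and mini_batch_size < n_conditions:
        rng = rng or np.random.default_rng()
        # Stochastic subsampling at every integration step -- the part jax dislikes.
        idx = rng.choice(n_conditions, size=mini_batch_size, replace=False)
        # Rescale so the subsampled sum is an unbiased estimate of the full sum.
        data_score = (n_conditions / mini_batch_size) * individual_score_fn(
            theta, conditions[:, idx], t
        ).sum(axis=1)
    else:
        data_score = individual_score_fn(theta, conditions, t).sum(axis=1)
    return data_score - (n_conditions - 1) * prior_score_fn(theta, t)
```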