Uneven batch sizes for MultiDatasetDataLoader #1445
I've been trying to use the MultiDatasetDataLoader to load two datasets with a different batch size for each dataset. This produces the following error:
The error arises after trying to do the following in the Avalanche repository:
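Roughly, the setup looks like this (a minimal sketch, not the original snippet; it assumes `MultiDatasetDataLoader` from `avalanche.benchmarks.utils.data_loader` accepts a list of datasets plus a matching `batch_sizes` list, and the dataset names are placeholders):

from torchvision.datasets import CIFAR10
from torchvision.transforms import ToTensor
from avalanche.benchmarks.utils.data_loader import MultiDatasetDataLoader

# Two plain torchvision datasets standing in for the real train/buffer data.
train_data = CIFAR10("data", download=True, train=True, transform=ToTensor())
buffer_data = CIFAR10("data", download=True, train=False, transform=ToTensor())

# One batch size per dataset, so each mini-batch would mix 32 samples from
# the first dataset with 8 samples from the second.
loader = MultiDatasetDataLoader(
    [train_data, buffer_data],
    batch_sizes=[32, 8],
)

for mbatch in loader:
    pass  # training step would go here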
I managed to work around this by commenting out the following in torch/utils/data/_utils/collate.py:
Hi @H-aze, can you share a snippet of code we can use to reproduce the issue?
I think I solved it. Change `buffer_data` to this:

from torchvision.datasets import CIFAR10
from torchvision.transforms import ToTensor
from avalanche.benchmarks.utils.classification_dataset import make_classification_dataset

buffer_data = make_classification_dataset(
    CIFAR10(
        "data",
        download=True,
        train=True,
        transform=ToTensor(),
    )
)

Also, I noticed that in `after_training_exp` you are calling `self.storage_policy.update(strategy, **kwargs)`, but the `update` method is not implemented in `FixedBuffer`. Maybe you wanted to use `update_from_dataset`.

Let me know if this solves the issue!
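In case it helps, a rough sketch of how `update` could delegate to `update_from_dataset`, assuming `FixedBuffer` subclasses Avalanche's `ExemplarsBuffer`; the class body below is a reconstruction for illustration, not the actual code from this thread:

from avalanche.training.storage_policy import ExemplarsBuffer

class FixedBuffer(ExemplarsBuffer):
    # Illustrative reconstruction only; the real FixedBuffer is not shown
    # in this thread.

    def update(self, strategy, **kwargs):
        # This is the hook the plugin calls from after_training_exp, so it
        # needs to exist; forward the current experience's data on to
        # update_from_dataset.
        self.update_from_dataset(strategy.experience.dataset)

    def update_from_dataset(self, new_data):
        # Fixed-size policy (assumed): concatenate the old buffer with the
        # new data, then keep at most max_size samples.
        cat = new_data.concat(self.buffer)
        self.buffer = cat.subset(list(range(min(len(cat), self.max_size))))

    def resize(self, strategy, new_size):
        # Required by ExemplarsBuffer; shrink the stored data if needed.
        self.max_size = new_size
        self.buffer = self.buffer.subset(
            list(range(min(len(self.buffer), self.max_size)))
        )

With `update` defined like this, the existing `self.storage_policy.update(strategy, **kwargs)` call in `after_training_exp` would work unchanged.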