-
Hi @hw-ju , Yes, if you want to … Thanks.
-
Hi!
In brats_training_ddp.py, BratsCacheDataset is implemented to "enhance the DecathlonDataset to support distributed data parallel" and it will "partition dataset based on current rank number, every rank trains with its own data. it can avoid duplicated caching content in each rank, but will not do global shuffle before every epoch".
Can the script still work if we replace the BratsCacheDataset object with a CacheDataset object plus a DistributedSampler object, so that we get a global shuffle before every epoch? If it can work, will the whole intermediate transformed dataset then be cached in each GPU's memory?