[Feature Request] Switch Parallel Training in PyTorch to distributed #179
Labels: new feature (Something that should be added)
DistributedDataParallel is reported to be faster than DataParallel for most networks (up to 30%).
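
For reference, a minimal sketch of what the switch could look like, assuming a single node launched with `torchrun --nproc_per_node=<N> train.py` and the NCCL backend; the model, dataset, and hyperparameters below are placeholders, not code from this repository:

```python
# Minimal DistributedDataParallel sketch (replaces a DataParallel wrapper).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; swap in the project's own.
    model = torch.nn.Linear(10, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle differently each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across processes
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The main differences from DataParallel are one process per GPU instead of one process scattering batches, a `DistributedSampler` so each process sees a distinct shard of the data, and gradient all-reduce during `backward()` rather than gathering outputs on a single device.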