This repository has been archived by the owner on Jun 26, 2021. It is now read-only.

[FeatureRequest] Switch Parallel Training in PyTorch to distributed #179

Open
justusschock opened this issue Aug 3, 2019 · 0 comments
Labels
new feature Something that should be added

Comments

@justusschock
Member

justusschock commented Aug 3, 2019

DistributedDataParallel is reported to be up to 30% faster than DataParallel for most networks.
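For reference, a minimal sketch of what the switch could look like, assuming single-node training with one process per device; the toy model, dataset, master address/port, and epoch count below are placeholders for illustration, not taken from this repository:

```python
# Sketch: replacing DataParallel with DistributedDataParallel (one process per device).
# Model, dataset, and hyperparameters are placeholders, not from this codebase.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def train(rank, world_size):
    # Each spawned process joins the same process group.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend, rank=rank, world_size=world_size)

    device = torch.device(f"cuda:{rank}" if torch.cuda.is_available() else "cpu")
    model = torch.nn.Linear(10, 1).to(device)  # placeholder model
    # Replaces: model = torch.nn.DataParallel(model)
    ddp_model = DDP(model, device_ids=[rank] if device.type == "cuda" else None)

    dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    # DistributedSampler shards the data so each process sees a distinct subset.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # shuffle differently each epoch
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(ddp_model(x), y).backward()  # gradients all-reduced across ranks
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count() or 2
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)
```

Unlike DataParallel, which scatters inputs and gathers outputs inside a single process, this runs one process per device and overlaps gradient all-reduce with the backward pass, which is where the reported speedup comes from.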

@justusschock justusschock added the new feature Something that should be added label Aug 3, 2019