Commit 5a5074f

Fix typo in pytorch-ddp-accelerate-transformers.md (#1436)
1 parent 0ce2a37 commit 5a5074f

File tree

1 file changed: +1 -1 lines changed


pytorch-ddp-accelerate-transformers.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -173,7 +173,7 @@ The optimizer needs to be declared based on the model *on the specific device* (
 Lastly, to run the script PyTorch has a convenient `torchrun` command line module that can help. Just pass in the number of nodes it should use as well as the script to run and you are set:
 
 ```bash
-torchrun --nproc_per_nodes=2 --nnodes=1 example_script.py
+torchrun --nproc_per_node=2 --nnodes=1 example_script.py
 ```
 
 The above will run the training script on two GPUs that live on a single machine and this is the barebones for performing only distributed training with PyTorch.
````
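For context, the corrected flag matters because `--nproc_per_node` sets the number of processes launched per machine (here, one per GPU), while `--nnodes` sets the number of machines. Below is a minimal sketch of what an `example_script.py` launched this way might look like; the model and training loop are illustrative placeholders, not the blog post's actual script, and the DDP wiring follows the standard PyTorch pattern:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets LOCAL_RANK, RANK, and WORLD_SIZE in the environment
    # of each process it spawns.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # The default init_method ("env://") reads MASTER_ADDR, MASTER_PORT,
    # RANK, and WORLD_SIZE, all of which torchrun has already prepared.
    dist.init_process_group(backend="nccl")

    # Placeholder model; a real script would build its actual network here.
    model = torch.nn.Linear(10, 10).to(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ... training loop goes here ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

With `--nproc_per_node=2 --nnodes=1`, torchrun spawns two such processes on the single machine, and each pins itself to a different GPU via `LOCAL_RANK`.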
