Commit fb9f287

Update docs (#7081)
* fix mapping
* fix docs
* fix bugs
* update docs
1 parent 2e98b58 · commit fb9f287 · 12 files changed: +233 −23 lines

docs/guides/model_convert/convert_from_pytorch/api_difference/Tensor/torch.Tensor.where.md

Lines changed: 10 additions & 11 deletions

````diff
@@ -3,23 +3,24 @@
 ### [torch.Tensor.where](https://pytorch.org/docs/stable/generated/torch.Tensor.where.html#torch.Tensor.where)
 
 ```python
-torch.Tensor.where(condition, y)
+torch.Tensor.where(condition, other)
 ```
 
-### [paddle.Tensor.where](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/Tensor_cn.html#where-y-name-none)
+### [paddle.where](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/where_cn.html)
 
 ```python
-paddle.Tensor.where(x, y, name=None)
+paddle.where(condition, x=None, y=None, name=None)
 ```
 
-The two are functionally equivalent, but the parameter names and usage differ, as follows:
+The PyTorch API is a Tensor method, while the Paddle API is a plain function, as follows:
 
 ### Parameter mapping
 
 | PyTorch | PaddlePaddle | Notes |
 | ------------- | ------------ | ------------------------------------------------------ |
-| condition | - | The selection condition. Paddle has no such parameter; the call must be rewritten. |
-| - | x | Elements selected where condition is true. |
-| y | y | Elements selected where condition is false. |
+| condition | condition | The selection condition. |
+| self | x | Elements selected where condition is true; the self Tensor of the torch.Tensor method call is passed here. |
+| other | y | Elements selected where condition is false; only the parameter name differs. |
 
 ### Conversion example
 
@@ -30,7 +31,5 @@ b = torch.tensor([2, 3, 0])
 c = a.where(a > 0, b)
 
 # Paddle code
-a = paddle.to_tensor([0, 1, 2])
-b = paddle.to_tensor([2, 3, 0])
-c = (a > 0).where(a, b)
+paddle.where(a > 0, a, b)
 ```
````
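The selection rule shared by both APIs (take from `x` where the condition holds, otherwise from `y`) can be sketched in plain Python, with lists standing in for tensors; `where` below is an illustrative helper, not part of either library:

```python
def where(condition, x, y):
    """Elementwise select: x[i] where condition[i] is true, else y[i]."""
    return [xi if ci else yi for ci, xi, yi in zip(condition, x, y)]

a = [0, 1, 2]
b = [2, 3, 0]
print(where([ai > 0 for ai in a], a, b))  # → [2, 1, 2]
```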
Lines changed: 41 additions & 0 deletions

## [ Return value type differs ]torch.cuda.get_rng_state

### [torch.cuda.get_rng_state](https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state.html#torch-cuda-get-rng-state)

```python
torch.cuda.get_rng_state(device='cuda')
```

### [paddle.get_cuda_rng_state](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/get_cuda_rng_state_cn.html#get-cuda-rng-state)

```python
paddle.get_cuda_rng_state()
```

torch has more parameters, and the torch and paddle return types differ, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------------- | ------------ | ------------------------------------------------------ |
| device | - | The device whose RNG state is returned. Paddle has no such parameter; the call must be rewritten. |
| return value | return value | The return types differ: PyTorch returns a torch.ByteTensor, Paddle returns a list of GeneratorState objects. |

### Conversion example

#### Different return types

```python
# PyTorch code, returns a torch.ByteTensor
x = torch.cuda.get_rng_state(device='cuda:0')

# Paddle code, returns a GeneratorState object
x = paddle.get_cuda_rng_state()[0]
```

```python
# PyTorch code, returns a torch.ByteTensor
x = torch.cuda.get_rng_state()

# Paddle code, returns a GeneratorState object
x = paddle.get_cuda_rng_state()[paddle.framework._current_expected_place().get_device_id()]
```
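The device-to-index translation used in the examples above can be sketched without either framework; `state_for_device` is a hypothetical helper, and it assumes a bare `'cuda'` means GPU 0:

```python
def state_for_device(states, device='cuda'):
    """Pick the per-GPU RNG state matching a torch-style device string.

    paddle.get_cuda_rng_state() returns one state per visible GPU, so
    'cuda:1' selects index 1 and a bare 'cuda' selects index 0.
    """
    idx = int(device.split(':')[1]) if ':' in device else 0
    return states[idx]

fake_states = ['state-gpu0', 'state-gpu1']  # stand-ins for GeneratorState objects
print(state_for_device(fake_states, 'cuda:1'))  # → state-gpu1
```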
## [ Composite replacement ]torch.cuda.set_per_process_memory_fraction

### [torch.cuda.set_per_process_memory_fraction](https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html)

```python
torch.cuda.set_per_process_memory_fraction(fraction, device=None)
```

Caps the fraction of memory the current process may allocate on the given GPU. Paddle has no such API; a composite replacement is needed.

### Conversion example

```python
# PyTorch code
torch.cuda.set_per_process_memory_fraction(0.5)

# Paddle code
os.environ['FLAGS_fraction_of_gpu_memory_to_use'] = '0.5'
```
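A minimal shim sketch of this replacement, assuming the flag must be set before Paddle initializes its GPU allocator; the helper name is hypothetical, not a Paddle API:

```python
import os

def set_per_process_memory_fraction(fraction: float) -> None:
    """Hypothetical shim: mirror torch.cuda.set_per_process_memory_fraction
    by setting Paddle's GPU memory flag (run before any GPU allocation)."""
    if not 0.0 <= fraction <= 1.0:
        raise ValueError("fraction must be in [0, 1]")
    os.environ['FLAGS_fraction_of_gpu_memory_to_use'] = str(fraction)

set_per_process_memory_fraction(0.5)
print(os.environ['FLAGS_fraction_of_gpu_memory_to_use'])  # → 0.5
```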
## [No parameters]torch.distributed.is_available

### [torch.distributed.is_available](https://pytorch.org/docs/stable/distributed.html#torch.distributed.is_available)

```python
torch.distributed.is_available()
```

### [paddle.distributed.is_available](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/distributed/is_available_cn.html#cn-api-paddle-distributed-is-available)

```python
paddle.distributed.is_available()
```

The two are functionally equivalent and take no parameters.
## [No parameters]torch.distributed.is_nccl_available

### [torch.distributed.is_nccl_available](https://pytorch.org/docs/stable/distributed.html#torch.distributed.is_nccl_available)

```python
torch.distributed.is_nccl_available()
```

### [paddle.core.is_compiled_with_nccl](https://github.com/PaddlePaddle/Paddle/blob/61de6003525166856157b6220205fe53df638376/python/paddle/jit/sot/utils/paddle_api_config.py#L159)

```python
paddle.core.is_compiled_with_nccl()
```

The two are functionally equivalent and take no parameters.
## [ torch has more parameters ] torch.distributed.monitored_barrier

### [torch.distributed.monitored_barrier](https://pytorch.org/docs/stable/distributed.html#torch.distributed.monitored_barrier)

```python
torch.distributed.monitored_barrier(group=None, timeout=None, wait_all_ranks=False)
```

### [paddle.distributed.barrier](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/distributed/barrier_cn.html)

```python
paddle.distributed.barrier(group=None)
```

PyTorch supports more parameters than Paddle, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------------- | ------------ | ------------------------------------------------------|
| group | group | The process group. |
| timeout | - | Timeout. Paddle has no such parameter; it generally has little effect on training results and can simply be dropped. |
| wait_all_ranks | - | Whether to wait for all ranks to time out before raising an error. Paddle has no such parameter; it generally has little effect on training results and can simply be dropped. |
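Dropping the extra arguments can be sketched as a thin adapter. To keep the sketch runnable without a distributed setup, `paddle_barrier` is injected as a callable standing in for paddle.distributed.barrier; the adapter name is hypothetical:

```python
def monitored_barrier_to_paddle(group=None, timeout=None, wait_all_ranks=False,
                                paddle_barrier=None):
    """Sketch: map torch.distributed.monitored_barrier onto
    paddle.distributed.barrier by dropping timeout/wait_all_ranks."""
    if paddle_barrier is None:
        raise ValueError("inject paddle.distributed.barrier here")
    # timeout and wait_all_ranks have no Paddle equivalent and are ignored
    return paddle_barrier(group=group)

calls = []
monitored_barrier_to_paddle(timeout=30, wait_all_ranks=True,
                            paddle_barrier=lambda group=None: calls.append(group))
print(calls)  # → [None]
```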
## [ torch has more parameters ]torch.nn.modules.module.register_module_forward_hook

### [torch.nn.modules.module.register_module_forward_hook](https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html)

```python
torch.nn.modules.module.register_module_forward_hook(hook, *, prepend=False, with_kwargs=False, always_call=False)
```

### [paddle.nn.Layer.register_forward_post_hook](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/nn/Layer_cn.html#register-forward-post-hook-hook)

```python
paddle.nn.Layer.register_forward_post_hook(hook)
```

PyTorch supports more parameters than Paddle. Note also that the PyTorch API registers a global hook for all modules, while the Paddle API registers a hook on a single Layer. Details:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------------- | ------------ | ------------------------------------------------------ |
| hook | hook | The function to register as the forward (post-)hook. |
| prepend | - | Controls hook execution order. Paddle has no such parameter; no conversion is available yet. |
| with_kwargs | - | Whether keyword arguments are passed to the hook. Paddle has no such parameter; no conversion is available yet. |
| always_call | - | Whether the hook is always called. Paddle has no such parameter; no conversion is available yet. |
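The post-hook contract (the hook runs after forward and may replace the output) can be sketched without either framework; the `Layer` class below is a toy stand-in, not the Paddle implementation:

```python
class Layer:
    """Toy stand-in illustrating forward post-hook mechanics."""
    def __init__(self, fn):
        self.fn = fn
        self._post_hooks = []

    def register_forward_post_hook(self, hook):
        self._post_hooks.append(hook)

    def __call__(self, *inputs):
        output = self.fn(*inputs)
        for hook in self._post_hooks:
            result = hook(self, inputs, output)
            if result is not None:  # a hook may replace the output
                output = result
        return output

double = Layer(lambda x: 2 * x)
double.register_forward_post_hook(lambda layer, inp, out: out + 1)
print(double(3))  # → 7
```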
## [ Identical parameters ]torch.nn.modules.module.register_module_forward_pre_hook

### [torch.nn.modules.module.register_module_forward_pre_hook](https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html)

```python
torch.nn.modules.module.register_module_forward_pre_hook(hook)
```

### [paddle.nn.Layer.register_forward_pre_hook](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/nn/Layer_cn.html#register-forward-pre-hook-hook)

```python
paddle.nn.Layer.register_forward_pre_hook(hook)
```

Functionally equivalent, with identical parameters, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
|---------|--------------|-----------------------------------------------------------------------------------------------|
| hook | hook | The function to register as the forward pre-hook. |
## [ Parameter usage differs ]torch.optim.Optimizer.zero_grad

### [torch.optim.Optimizer.zero_grad](https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html)

```python
torch.optim.Optimizer.zero_grad(set_to_none=True)
```

### [paddle.optimizer.Optimizer.clear_gradients](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/optimizer/Optimizer_cn.html#clear-grad)

```python
paddle.optimizer.Optimizer.clear_gradients(set_to_zero=True)
```

The PyTorch `Optimizer.zero_grad` parameter and the Paddle `Optimizer.clear_gradients` parameter have exactly opposite meanings, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ----------- | ------------ | ------------------------------------------------ |
| set_to_none | set_to_zero | Controls how gradients are cleared. PyTorch defaults set_to_none to True and Paddle defaults set_to_zero to True; the two flags are exact opposites, so Paddle's flag must be set to False here. |

### Conversion example

```python
# PyTorch code
torch.optim.Optimizer.zero_grad(set_to_none=True)

# Paddle code
paddle.optimizer.Optimizer.clear_gradients(set_to_zero=False)
```
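The flag inversion can be captured in one line; `to_paddle_clear_kwargs` is a hypothetical helper for illustration, not part of either library:

```python
def to_paddle_clear_kwargs(set_to_none: bool = True) -> dict:
    """Map torch's zero_grad(set_to_none=...) onto paddle's
    clear_gradients(set_to_zero=...): the two flags are opposites."""
    return {"set_to_zero": not set_to_none}

print(to_paddle_clear_kwargs(set_to_none=True))   # → {'set_to_zero': False}
print(to_paddle_clear_kwargs(set_to_none=False))  # → {'set_to_zero': True}
```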
Lines changed: 15 additions & 0 deletions

## [No parameters]torch.get_default_device

### [torch.get_default_device](https://pytorch.org/docs/stable/generated/torch.get_default_device.html#torch-get-default-device)

```python
torch.get_default_device()
```

### [paddle.get_device](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/device/get_device_cn.html#get-device)

```python
paddle.get_device()
```

Functionally equivalent; no parameters.
Lines changed: 20 additions & 0 deletions

## [ Parameter type differs ]torch.set_default_device

### [torch.set_default_device](https://pytorch.org/docs/stable/generated/torch.set_default_device.html#torch-set-default-device)

```python
torch.set_default_device(device)
```

### [paddle.set_device](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/device/set_device_cn.html#set-device)

```python
paddle.set_device(device)
```

Functionally equivalent, but the parameter type differs, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------------- | ------------ |------------------------------------------------|
| device | device | PyTorch accepts a torch.device; PaddlePaddle accepts a str. |
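Converting the device argument can be sketched as below; `to_paddle_device` is a hypothetical helper, and the cuda→gpu renaming assumes Paddle's usual device-string scheme ('cpu', 'gpu:0'):

```python
def to_paddle_device(device) -> str:
    """Map a torch.device-like value onto a Paddle device string."""
    s = str(device)  # torch.device('cuda', 0) stringifies as 'cuda:0'
    return s.replace("cuda", "gpu")

print(to_paddle_device("cuda:0"))  # → gpu:0
print(to_paddle_device("cpu"))     # → cpu
```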

docs/guides/model_convert/convert_from_pytorch/pytorch_api_mapping_cn.md

Lines changed: 2 additions & 12 deletions

```diff
@@ -370,6 +370,7 @@
 | ALIAS-REFERENCE-ITEM(`torch.utils.data.sampler.WeightedRandomSampler`, `torch.utils.data.WeightedRandomSampler`) |
 | ALIAS-REFERENCE-ITEM(`torch.igamma`, `torch.special.gammainc`) |
 | ALIAS-REFERENCE-ITEM(`torch.igammac`, `torch.special.gammaincc`) |
+| ALIAS-REFERENCE-ITEM(`torch.distributions.multivariate_normal.MultivariateNormal`, `torch.distributions.MultivariateNormal`) |
 
 ## <span id="id25">List of APIs with missing functionality</span>
 
@@ -1034,22 +1035,15 @@
 | NOT-IMPLEMENTED-ITEM(`torch.cuda.memory_usage`, https://pytorch.org/docs/stable/generated/torch.cuda.memory_usage.html#torch-cuda-memory-usage, could be added; the framework already has the underlying design, low cost) |
 | NOT-IMPLEMENTED-ITEM(`torch.layout`, https://pytorch.org/docs/stable/tensor_attributes.html#torch.layout, could be added, but the framework has no underlying design, high cost) |
 | NOT-IMPLEMENTED-ITEM(`torch.cuda.is_current_stream_capturing`, https://pytorch.org/docs/stable/generated/torch.cuda.is_current_stream_capturing.html#torch-cuda-is-current-stream-capturing, could be added; the framework already has the underlying design, low cost) |
+| NOT-IMPLEMENTED-ITEM(`torch.cuda.device_of`, https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html, could be added; the framework already has the underlying design, low cost) |
 
 ## <span id="id26">List of APIs whose mapping is in development</span>
 
 | No. | Latest PyTorch release | Paddle develop | Mapping category | Notes |
 | ----- | ----------- | ----------------- | ----------- | ------- |
 | IN-DEVELOPMENT-PATTERN(`torch.nn.parameter.UninitializedParameter`, https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedParameter.html#torch.nn.parameter.UninitializedParameter) |
-| IN-DEVELOPMENT-PATTERN(`torch.nn.modules.module.register_module_forward_pre_hook`, https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html#torch-nn-modules-module-register-module-forward-pre-hook) |
-| IN-DEVELOPMENT-PATTERN(`torch.nn.modules.module.register_module_forward_hook`, https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html#torch-nn-modules-module-register-module-forward-hook) |
-| IN-DEVELOPMENT-PATTERN(`torch.cuda.device_of`, https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html#torch.cuda.device_of) |
-| IN-DEVELOPMENT-PATTERN(`torch.cuda.get_rng_state`, https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state.html#torch-cuda-get-rng-state) |
-| IN-DEVELOPMENT-PATTERN(`torch.cuda.set_per_process_memory_fraction`, https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html#torch-cuda-set-per-process-memory-fraction) |
 | IN-DEVELOPMENT-PATTERN(`torch.distributed.Backend`, https://pytorch.org/docs/stable/distributed.html#torch.distributed.Backend) |
-| IN-DEVELOPMENT-PATTERN(`torch.distributed.is_available`, https://pytorch.org/docs/stable/distributed.html#torch.distributed.is_available) |
-| IN-DEVELOPMENT-PATTERN(`torch.distributed.is_nccl_available`, https://pytorch.org/docs/stable/distributed.html#torch.distributed.is_nccl_available) |
 | IN-DEVELOPMENT-PATTERN(`torch.distributed.gather_object`, https://pytorch.org/docs/stable/distributed.html#torch.distributed.gather_object) |
-| IN-DEVELOPMENT-PATTERN(`torch.distributions.multivariate_normal.MultivariateNormal`, https://pytorch.org/docs/stable/distributions.html#torch.distributions.multivariate_normal.MultivariateNormal) |
 | IN-DEVELOPMENT-PATTERN(`torch.jit.script`, https://pytorch.org/docs/stable/generated/torch.jit.script.html#torch-jit-script) |
 | IN-DEVELOPMENT-PATTERN(`torch.jit.trace`, https://pytorch.org/docs/stable/generated/torch.jit.trace.html#torch-jit-trace) |
 | IN-DEVELOPMENT-PATTERN(`torch.jit.save`, https://pytorch.org/docs/stable/generated/torch.jit.save.html#torch-jit-save) |
@@ -1058,11 +1052,8 @@
 | IN-DEVELOPMENT-PATTERN(`torch.utils.checkpoint.checkpoint_sequential`, https://pytorch.org/docs/stable/checkpoint.html#torch.utils.checkpoint.checkpoint_sequential) |
 | IN-DEVELOPMENT-PATTERN(`torch.utils.tensorboard.writer.SummaryWriter`, https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter) |
 | IN-DEVELOPMENT-PATTERN(`torch.nn.parameter.UninitializedBuffer`, https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedBuffer.html#torch.nn.parameter.UninitializedBuffer) |
-| IN-DEVELOPMENT-PATTERN(`torch.optim.Optimizer.zero_grad`, https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html#torch-optim-optimizer-zero-grad) |
-| IN-DEVELOPMENT-PATTERN(`torch.distributed.monitored_barrier`, https://pytorch.org/docs/stable/distributed.html#torch.distributed.monitored_barrier) |
 | IN-DEVELOPMENT-PATTERN(`torch.autograd.Function.jvp`, https://pytorch.org/docs/stable/generated/torch.autograd.Function.jvp.html#torch-autograd-function-jvp) |
 | IN-DEVELOPMENT-PATTERN(`torch.memory_format`, https://pytorch.org/docs/stable/tensor_attributes.html#torch.memory_format) |
-| IN-DEVELOPMENT-PATTERN(`torch.set_default_device`, https://pytorch.org/docs/stable/generated/torch.set_default_device.html#torch-set-default-device) |
 | IN-DEVELOPMENT-PATTERN(`torch.concatenate`, https://pytorch.org/docs/stable/generated/torch.concatenate.html#torch-concatenate) |
 | IN-DEVELOPMENT-PATTERN(`torch._foreach_abs`, https://pytorch.org/docs/stable/generated/torch._foreach_abs.html#torch-foreach-abs) |
 | IN-DEVELOPMENT-PATTERN(`torch._foreach_abs_`, https://pytorch.org/docs/stable/generated/torch._foreach_abs_.html#torch-foreach-abs) |
@@ -1137,7 +1128,6 @@
 | IN-DEVELOPMENT-PATTERN(`torch.distributed.reduce_scatter_tensor`, https://pytorch.org/docs/stable/distributed.html#torch.distributed.reduce_scatter_tensor) |
 | IN-DEVELOPMENT-PATTERN(`torch.distributed.all_to_all_single`, https://pytorch.org/docs/stable/distributed.html#torch.distributed.all_to_all_single) |
 | IN-DEVELOPMENT-PATTERN(`torch.utils.set_module`, https://pytorch.org/docs/stable/generated/torch.utils.set_module.html#torch-utils-set-module) |
-| IN-DEVELOPMENT-PATTERN(`torch.get_default_device`, https://pytorch.org/docs/stable/generated/torch.get_default_device.html#torch-get-default-device) |
 | IN-DEVELOPMENT-PATTERN(`torch.nn.utils.fuse_conv_bn_eval`, https://pytorch.org/docs/stable/generated/torch.nn.utils.fuse_conv_bn_eval.html#torch-nn-utils-fuse-conv-bn-eval) |
 | IN-DEVELOPMENT-PATTERN(`torch.nn.utils.fuse_conv_bn_weights`, https://pytorch.org/docs/stable/generated/torch.nn.utils.fuse_conv_bn_weights.html#torch-nn-utils-fuse-conv-bn-weights) |
 | IN-DEVELOPMENT-PATTERN(`torch.nn.utils.fuse_linear_bn_eval`, https://pytorch.org/docs/stable/generated/torch.nn.utils.fuse_linear_bn_eval.html#torch-nn-utils-fuse-linear-bn-eval) |
```
