
Commit d60a9ce

API mapping docs No. 26 (PaddlePaddle#5799)
* add the api mapping NO 26 after fixed
* fix some error of mapping api NO 26
* fix some error of mapping api NO 26
* fix some error of mapping api NO 26
* update API mapping NO 26
* update API mapping NO 26 softmax
1 parent 49a8c11 commit d60a9ce

9 files changed: +201 −0 lines changed
## [ Only parameter names differ ] torch.distributions.kl.register_kl

### [torch.distributions.kl.register_kl](https://pytorch.org/docs/1.13/distributions.html?highlight=register_kl#torch.distributions.kl.register_kl)

```python
torch.distributions.kl.register_kl(type_p, type_q)
```

### [paddle.distribution.register_kl](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/distribution/register_kl_cn.html)

```python
paddle.distribution.register_kl(cls_p, cls_q)
```

The two are functionally identical and their parameters are used the same way; only the parameter names differ, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| type_p  | cls_p        | Distribution type of instance p, inheriting from the Distribution base class; only the parameter name differs. |
| type_q  | cls_q        | Distribution type of instance q, inheriting from the Distribution base class; only the parameter name differs. |
## [ Only parameter names differ ] torch.fft.fftshift

### [torch.fft.fftshift](https://pytorch.org/docs/1.13/generated/torch.fft.fftshift.html#torch.fft.fftshift)

```python
torch.fft.fftshift(input, dim=None)
```

### [paddle.fft.fftshift](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fft/fftshift_cn.html)

```python
paddle.fft.fftshift(x, axes=None, name=None)
```

The two are functionally identical and their parameters are used the same way; only the parameter names differ, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| input   | x            | Input Tensor; only the parameter name differs. |
| dim     | axes         | Axes over which to shift; only the parameter name differs. |
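Both functions follow NumPy's `fftshift` convention of moving the zero-frequency component to the center of the spectrum. A minimal sketch of that behavior, using NumPy purely for illustration (NumPy is an assumption here, not part of either API mapping):

```python
import numpy as np

# Frequency bins for a length-4 signal: the zero-frequency bin comes first.
freqs = np.fft.fftfreq(4)         # [0.0, 0.25, -0.5, -0.25]

# fftshift reorders the bins so the zero-frequency bin sits in the center.
shifted = np.fft.fftshift(freqs)  # [-0.5, -0.25, 0.0, 0.25]
```

`torch.fft.fftshift(input, dim=...)` and `paddle.fft.fftshift(x, axes=...)` apply the same reordering, differing only in the parameter names shown in the table above.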
## [ Only parameter names differ ] torch.fft.ifftshift

### [torch.fft.ifftshift](https://pytorch.org/docs/1.13/generated/torch.fft.ifftshift.html#torch.fft.ifftshift)

```python
torch.fft.ifftshift(input, dim=None)
```

### [paddle.fft.ifftshift](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/fft/ifftshift_cn.html)

```python
paddle.fft.ifftshift(x, axes=None, name=None)
```

The two are functionally identical and their parameters are used the same way; only the parameter names differ, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| input   | x            | Input Tensor; only the parameter name differs. |
| dim     | axes         | Axes over which to shift; only the parameter name differs. |
## [ Parameters fully identical ] torch.nn.init.calculate_gain

### [torch.nn.init.calculate_gain](https://pytorch.org/docs/1.13/nn.init.html?highlight=gain#torch.nn.init.calculate_gain)

```python
torch.nn.init.calculate_gain(nonlinearity, param=None)
```

### [paddle.nn.initializer.calculate_gain](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/initializer/calculate_gain_cn.html)

```python
paddle.nn.initializer.calculate_gain(nonlinearity, param=None)
```

The parameters and usage of the two are fully identical, as follows:

### Parameter mapping

| PyTorch      | PaddlePaddle | Notes |
| ------------ | ------------ | ----- |
| nonlinearity | nonlinearity | Name of the nonlinear activation function; parameters and usage fully identical. |
| param        | param        | Optional parameter for certain activation functions, defaults to None; parameters and usage fully identical. |
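The recommended gains are simple closed-form values. A hedged pure-Python sketch of the commonly documented cases (an illustrative reimplementation, not either framework's actual code):

```python
import math

def gain(nonlinearity, param=None):
    """Illustrative recommended-gain values for a subset of activations."""
    if nonlinearity in ('linear', 'conv1d', 'conv2d', 'conv3d', 'sigmoid'):
        return 1.0
    if nonlinearity == 'tanh':
        return 5.0 / 3
    if nonlinearity == 'relu':
        return math.sqrt(2.0)
    if nonlinearity == 'leaky_relu':
        # Both frameworks fall back to a 0.01 negative slope when param is None.
        negative_slope = 0.01 if param is None else param
        return math.sqrt(2.0 / (1 + negative_slope ** 2))
    raise ValueError(f"unsupported nonlinearity: {nonlinearity}")
```

For example, `gain('relu')` is √2 and `gain('leaky_relu', 0.0)` degenerates to the same value, which matches calling either framework's `calculate_gain` with those arguments.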
## [ Only parameter default values differ ] torch.linalg.svd

### [torch.linalg.svd](https://pytorch.org/docs/1.13/generated/torch.linalg.svd.html?highlight=svd#torch.linalg.svd)

```python
torch.linalg.svd(A, full_matrices=True)
```

### [paddle.linalg.svd](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/linalg/svd_cn.html)

```python
paddle.linalg.svd(x, full_matrics=False, name=None)
```

The two differ only in a parameter default value, as follows:

### Parameter mapping

| PyTorch       | PaddlePaddle | Notes |
| ------------- | ------------ | ----- |
| A             | x            | Input Tensor; only the parameter name differs. |
| full_matrices | full_matrics | Whether to compute the full U and V matrices; PyTorch defaults to True, Paddle to False. |
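The practical consequence of the differing defaults is the shape of the returned factors. A sketch using NumPy's SVD for illustration (NumPy is an assumption here; both frameworks follow the same full/reduced convention):

```python
import numpy as np

a = np.ones((5, 3))

# full_matrices=True (PyTorch's default): U is square, shape (5, 5).
U_full, S, Vh = np.linalg.svd(a, full_matrices=True)

# full_matrices=False (Paddle's default, spelled full_matrics): U is (5, 3).
U_thin, S, Vh = np.linalg.svd(a, full_matrices=False)
```

When porting, pass the flag explicitly on at least one side so both calls use the same mode.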
## [ Parameters fully identical ] torch.random.manual_seed

### [torch.random.manual_seed](https://pytorch.org/docs/1.13/random.html?highlight=torch+random+manual_seed#torch.random.manual_seed)

```python
torch.random.manual_seed(seed)
```

### [paddle.seed](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/seed_cn.html)

```python
paddle.seed(seed)
```

The parameters and usage of the two are fully identical, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| seed    | seed         | The random seed to set; parameters and usage fully identical. |
## [ Only parameter names differ ] torch.sparse.addmm

### [torch.sparse.addmm](https://pytorch.org/docs/1.13/generated/torch.sparse.addmm.html?highlight=addmm#torch.sparse.addmm)

```python
torch.sparse.addmm(mat, mat1, mat2, beta=1.0, alpha=1.0)
```

### [paddle.sparse.addmm](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/sparse/addmm_cn.html)

```python
paddle.sparse.addmm(input, x, y, beta=1.0, alpha=1.0, name=None)
```

The two are functionally identical; only the parameter names differ, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| mat     | input        | Input Tensor; only the parameter name differs. |
| mat1    | x            | Input Tensor; only the parameter name differs. |
| mat2    | y            | Input Tensor; only the parameter name differs. |
| beta    | beta         | Coefficient of input, defaults to 1.0; fully identical. |
| alpha   | alpha        | Coefficient of x * y, defaults to 1.0; fully identical. |
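The computation both APIs perform is `out = beta * input + alpha * (mat1 @ mat2)`, with the first operand sparse. A dense NumPy sketch of that formula for illustration (the `addmm` helper below is hypothetical, not either framework's function):

```python
import numpy as np

def addmm(inp, mat1, mat2, beta=1.0, alpha=1.0):
    # Dense illustration of the addmm formula shared by both APIs.
    return beta * inp + alpha * (mat1 @ mat2)

inp = np.ones((2, 2))
mat1 = np.array([[1.0, 2.0], [3.0, 4.0]])
mat2 = np.eye(2)
out = addmm(inp, mat1, mat2)  # [[2., 3.], [4., 5.]]
```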
## [ Only parameter names differ ] torch.sparse.mm

### [torch.sparse.mm](https://pytorch.org/docs/1.13/generated/torch.sparse.mm.html?highlight=torch+sparse+mm#torch.sparse.mm)

```python
torch.sparse.mm(input, mat2)
```

### [paddle.sparse.matmul](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/sparse/matmul_cn.html)

```python
paddle.sparse.matmul(x, y, name=None)
```

The two are functionally identical; only the parameter names differ, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| input   | x            | Input Tensor; only the parameter name differs. |
| mat2    | y            | The second input Tensor; only the parameter name differs. |
## [ torch has more parameters ] torch.sparse.softmax

### [torch.sparse.softmax](https://pytorch.org/docs/1.13/generated/torch.sparse.softmax.html#torch.sparse.softmax)

```python
torch.sparse.softmax(input, dim, dtype=None)
```

### [paddle.sparse.nn.functional.softmax](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/sparse/nn/functional/softmax_cn.html)

```python
paddle.sparse.nn.functional.softmax(x, axis=-1, name=None)
```

PyTorch supports more parameters than Paddle, as follows:

### Parameter mapping

| PyTorch | PaddlePaddle | Notes |
| ------- | ------------ | ----- |
| input   | x            | The input sparse Tensor; only the parameter name differs. |
| dim     | axis         | The axis along which to compute softmax on the input SparseTensor; Paddle's default is -1. Only the parameter name differs. |
| dtype   | -            | Desired data type, optional; PyTorch's default is None. Paddle has no such parameter, so the call needs transcription. |

### Transcription example

#### dtype: specifying the data type

```python
# PyTorch
y = torch.sparse.softmax(x, dim=-1, dtype=torch.float32)

# Paddle
y = paddle.sparse.cast(x, value_dtype='float32')
y = paddle.sparse.nn.functional.softmax(y)
```
