
Add Chinese documentation for GaussianNLLLoss; test=docs_preview #5623


Merged
merged 10 commits into PaddlePaddle:develop on Apr 13, 2023

Conversation

Contributor

@Atlantisming Atlantisming commented Feb 22, 2023

Code and English documentation: PaddlePaddle/Paddle#50843
RFC document: PaddlePaddle/community#446

@sunzhongkai588 sunzhongkai588 left a comment


There are quite a few errors overall. Please read the API documentation writing guidelines carefully before revising!

@@ -0,0 +1,40 @@
.. _cn_api_paddle_nn_GaussianNLLLoss:

SmoothL1Loss

SmoothL1Loss? Is this a typo?

Parameters
::::::::::

- **full** (bool) - Whether to include the constant term in the loss calculation. Defaults to False.

  • In the English docs these parameters are all marked as optional, i.e. (bool, optional); please keep the Chinese and English versions consistent. The same applies to the parameters below.
  • Also, please add what the default value False means, i.e. "Defaults to False, which means ..."

::::::::::

- **full** (bool) - Whether to include the constant term in the loss calculation. Defaults to False.
- **epsilon** (float) - Used to clamp variance so that division by zero cannot occur. Default value is 1e-6

Add a period at the end.


- **full** (bool) - Whether to include the constant term in the loss calculation. Defaults to False.
- **epsilon** (float) - Used to clamp variance so that division by zero cannot occur. Default value is 1e-6
- **reduction** (str, optional) - Specifies the reduction applied to the output; the options are ``none``, ``mean`` and ``sum``. Defaults to ``mean``, which computes the mean of the ``mini-batch`` loss. When set to `sum`, the sum of the `mini-batch` loss is computed. When set to ``none``, the unreduced loss Tensor is returned.

Use double backquotes around the final mini-batch on this line.

- **reduction** (str, optional) - Specifies the reduction applied to the output; the options are ``none``, ``mean`` and ``sum``. Defaults to ``mean``, which computes the mean of the ``mini-batch`` loss. When set to `sum`, the sum of the `mini-batch` loss is computed. When set to ``none``, the unreduced loss Tensor is returned.
- **name** (str, optional) - For details, please refer to :ref:`api_guide_Name`. Generally no setting is needed. The default value is None.
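For context, a minimal sketch of how the three reduction modes behave. It is not part of this PR's diff; it assumes the paddle.nn.GaussianNLLLoss signature reviewed later in this thread and a Paddle build that already contains the API from PaddlePaddle/Paddle#50843, and the shapes are illustrative.

import paddle

input = paddle.randn([4, 10])          # predicted means
label = paddle.randn([4, 10])          # observed samples, same shape as input
variance = paddle.rand([4, 10]) + 0.1  # predicted variances, kept positive

# 'none' keeps the per-element loss; 'mean' and 'sum' reduce it to a single value.
loss_none = paddle.nn.GaussianNLLLoss(reduction='none')(input, label, variance)
loss_mean = paddle.nn.GaussianNLLLoss(reduction='mean')(input, label, variance)
loss_sum = paddle.nn.GaussianNLLLoss(reduction='sum')(input, label, variance)

print(loss_none.shape)                 # same shape as input, i.e. [4, 10]
print(float(loss_mean), float(loss_sum))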

Input

Input? Shouldn't this be Shape?


- **input** (Tensor): the input :attr:`Tensor`, with shape :math:`[N, *]`, where :math:`*` means any number of additional dimensions.
- **label** (Tensor): the input :attr:`Tensor`, with the same shape and data type as :attr:`input`.
- **variance** (Tensor): the input :attr:`Tensor`, with the same shape as :attr:`input`.

This part of the description doesn't seem to match the English docs. Please check again and make sure the Chinese and English descriptions are consistent.

-------------------------------
.. py:function:: paddle.nn.functional.gaussian_nll_loss(input, label, variance, full=False, epsilon=1e-6, reduction='mean', name=None)

Returns the `gaussian negative log likelihood loss`. See :ref:`_cn_api_paddle_nn_GaussianNLLLoss` for details.

The reference :ref:`_cn_api_paddle_nn_GaussianNLLLoss` is written incorrectly; the leading underscore in _cn_api_paddle_nn_GaussianNLLLoss should be removed.

See: https://github.com/PaddlePaddle/docs/wiki/飞桨文档相互引用

@@ -0,0 +1,26 @@
.. _cn_api_nn_functional_nll_loss:

nll_loss

The API name is wrong here too; it should be gaussian_nll_loss.

@@ -0,0 +1,26 @@
.. _cn_api_nn_functional_nll_loss:

This is misleading; change it to .. _cn_api_nn_functional_gaussian_nll_loss:

Code Examples
:::::::::

COPY-FROM: paddle.nn.functional.nll_loss

The COPY-FROM is also wrong; it should copy the gaussian_nll_loss example code.


@sunzhongkai588 sunzhongkai588 left a comment


Besides the issues raised in the inline comments, there are two more things that need to be added.


Returns
:::::::::
`Tensor`, the loss value representing the `gaussian negative log likelihood loss`. Its data type is the same as:attr:`input`. When reduction is:attr:`none`, its shape is the same as:attr:`input`.

  • Add spaces around :attr:`input`; the same mistake appears elsewhere on this line.

  • Also note that spaces must be added between Chinese and English text, otherwise the code-style CI will fail; try running the pre-commit tool to check this automatically.

.. py:class:: paddle.nn.GaussianNLLLoss(full=False, epsilon=1e-6, reduction='mean', name=None)

Computes the GaussianNLL loss between the input :attr:`input` and the labels :attr:`label`, :attr:`variance`.
:attr:`label` is treated as a sample from a Gaussian distribution whose expectation and variance are predicted by the network. For a :attr:`label` tensor modelled as the expectation :attr:`input` of a tensor with a Gaussian distribution and the positive variance :attr:`var` of the tensor, the formula is:

This passage doesn't read very smoothly, especially the last sentence ("For a label tensor modelled as the expectation input of a tensor with a Gaussian distribution and the positive variance var of the tensor"). Please reconsider and polish the whole paragraph from the start.

\text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var},
\ \text{epsilon}\right)\right) + \frac{\left(\text{input} - \text{label}\right)^2}
{\text{max}\left(\text{var}, \ \text{epsilon}\right)}\right) + \text{const.}
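To follow the math, here is a small NumPy sketch of the element-wise formula above. It is an illustration only, not Paddle's implementation; the "const." term added when full=True is assumed to be 0.5*log(2*pi), the constant of the Gaussian log-density.

import numpy as np

def gaussian_nll(input, label, variance, full=False, epsilon=1e-6, reduction='mean'):
    # Clamp the variance, mirroring the max(var, epsilon) term in the formula.
    var = np.maximum(variance, epsilon)
    loss = 0.5 * (np.log(var) + (input - label) ** 2 / var)
    if full:
        loss = loss + 0.5 * np.log(2 * np.pi)  # assumed value of the "const." term
    if reduction == 'mean':
        return loss.mean()
    if reduction == 'sum':
        return loss.sum()
    return loss

rng = np.random.default_rng(0)
x, y = rng.standard_normal((4, 10)), rng.standard_normal((4, 10))
v = rng.uniform(0.1, 1.0, (4, 10))
print(gaussian_nll(x, y, v, full=True))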


The English documentation also contains the following description, which is missing from the Chinese doc. Since the Chinese and English docs are required to match exactly, please add the corresponding description, or explain why they differ.

::::::::::

- **input** (Tensor): the input :attr:`Tensor`, with shape :math:`(N, *)` or :math:`(*)`, where :math:`*` means any number of additional dimensions. The data type is float32 or float64.
- **label** (Tensor): the input :attr:`Tensor`, with shape :math:`(N, *)` or :math:`(*)`, the same shape as :attr:`input`, or with one dimension of size 1, in which case broadcasting is applied. The data type is float32 or float64.

Please make the description after "or" clearer, e.g. "or the same shape as input but ..."


- **input** (Tensor): the input :attr:`Tensor`, with shape :math:`(N, *)` or :math:`(*)`, where :math:`*` means any number of additional dimensions. The data type is float32 or float64.
- **label** (Tensor): the input :attr:`Tensor`, with shape :math:`(N, *)` or :math:`(*)`, the same shape as :attr:`input`, or with one dimension of size 1, in which case broadcasting is applied. The data type is float32 or float64.
- **variance** (Tensor): the input :attr:`Tensor`, with shape :math:`(N, *)` or :math:`(*)`, the same shape as :attr:`input`, or with one dimension of size 1, or with one fewer dimension, in which case broadcasting is applied. The data type is float32 or float64.

  • Likewise, please make the description after "or" clearer.
  • Reconsider the phrase "or with one fewer dimension"; it is easy to misread.
  • The English documentation also describes the shape of the output; please keep them consistent.

Parameters
:::::::::
- **input** (Tensor): the input :attr:`Tensor`, with shape :math:`(N, *)` or :math:`(*)`, where :math:`*` means any number of additional dimensions. It will be fitted as the Gaussian distribution. The data type is float32 or float64.
- **label** (Tensor): the input :attr:`Tensor`, with shape :math:`(N, *)` or :math:`(*)`, with the same shape and data type as :attr:`input`, or with one dimension of size 1, in which case broadcasting is applied. These are the samples drawn from the Gaussian distribution. The data type is float32 or float64.

As above, please make the description after "or" clearer.
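For illustration, a sketch of the broadcasting behaviour described in these parameter entries. It assumes the paddle.nn.GaussianNLLLoss API added in PaddlePaddle/Paddle#50843 and that broadcasting works as the excerpt states; the shapes are hypothetical.

import paddle

input = paddle.randn([4, 10])
label = paddle.randn([4, 10])
variance = paddle.rand([4, 1]) + 0.1   # one variance per row, broadcast across the last dimension

loss_fn = paddle.nn.GaussianNLLLoss(reduction='mean')
loss = loss_fn(input, label, variance)
print(float(loss))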


@sunzhongkai588 sunzhongkai588 left a comment


The content is well written; there are just a few writing typos left to fix ~

@@ -258,6 +258,7 @@ Loss layers
" :ref:`paddle.nn.MarginRankingLoss <cn_api_nn_loss_MarginRankingLoss>` ", "MarginRankingLoss layer"
" :ref:`paddle.nn.MSELoss <cn_api_paddle_nn_MSELoss>` ", "Mean squared error loss layer"
" :ref:`paddle.nn.NLLLoss <cn_api_nn_loss_NLLLoss>` ", "NLLLoss layer"
" :ref:`paddle.nn.GaussianNLLLoss <cn_api_nn_loss_GaussianNLLLoss>` ", "GaussianNLLLoss layer"

The label should match api/label and the label on the first line of the corresponding API doc.

  • Suggest changing it to cn_api_paddle_nn_GaussianNLLLoss

@@ -0,0 +1,44 @@
.. _cn_api_nn_GaussianNLLLoss:

The label should be consistent with api/label.

  • Suggest changing it to _cn_api_paddle_nn_GaussianNLLLoss

.. py:class:: paddle.nn.GaussianNLLLoss(full=False, epsilon=1e-6, reduction='mean', name=None)

This API creates a GaussianNLLLoss instance and computes the GaussianNLL loss between the input :attr:`input` and the labels :attr:`label`, :attr:`variance`.
:attr:`label` is treated as a sample from a Gaussian distribution whose expectation:attr:`input`and variance:attr:`variance`are predicted by the network.

Please follow the writing convention for attr; spaces are needed on both sides.

.. py:function:: paddle.nn.functional.gaussian_nll_loss(input, label, variance, full=False, epsilon=1e-6, reduction='mean', name=None)

Computes the GaussianNLL loss between the input :attr:`input`, :attr:`variance` and the label :attr:`label`.
:attr:`label` is treated as a sample from a Gaussian distribution whose expectation:attr:`input`and variance:attr:`variance`are predicted by the network.

Likewise, follow the writing convention for attr; spaces are needed on both sides.
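A minimal usage sketch for this functional form, based on the signature quoted above (it assumes a Paddle build that already contains the API from PaddlePaddle/Paddle#50843; the shapes are illustrative):

import paddle
import paddle.nn.functional as F

input = paddle.randn([4, 10])
label = paddle.randn([4, 10])
variance = paddle.rand([4, 10]) + 0.1  # must be positive

loss = F.gaussian_nll_loss(input, label, variance, full=False, epsilon=1e-6, reduction='mean')
print(float(loss))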


@sunzhongkai588 sunzhongkai588 left a comment


Please revise, and pay attention to the details!

.. py:class:: paddle.nn.GaussianNLLLoss(full=False, epsilon=1e-6, reduction='mean', name=None)

This API creates a GaussianNLLLoss instance and computes the GaussianNLL loss between the input :attr:`input` and the labels :attr:`label`, :attr:`variance`.
:attr:`label` is treated as a sample from a Gaussian distribution whose expectation :attr:`input` and variance:attr:`variance` are predicted by the network.

Add a space to the left of :attr:`variance`.
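To illustrate the sentence above about the expectation and variance being predicted by a network, here is a hypothetical sketch; the two-head model, its layer names, and the softplus used to keep the variance positive are assumptions for illustration, not part of the documented API.

import paddle
import paddle.nn.functional as F

class GaussianHead(paddle.nn.Layer):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mean_head = paddle.nn.Linear(in_dim, out_dim)
        self.var_head = paddle.nn.Linear(in_dim, out_dim)

    def forward(self, x):
        mean = self.mean_head(x)
        variance = F.softplus(self.var_head(x))  # keep the predicted variance positive
        return mean, variance

model = GaussianHead(8, 1)
criterion = paddle.nn.GaussianNLLLoss(reduction='mean')

x = paddle.randn([16, 8])
y = paddle.randn([16, 1])
mean, variance = model(x)
loss = criterion(mean, y, variance)
loss.backward()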

@@ -481,6 +482,7 @@ Embedding-related functions
" :ref:`paddle.nn.functional.margin_ranking_loss <cn_api_nn_cn_margin_ranking_loss>` ", "Computes the margin rank loss"
" :ref:`paddle.nn.functional.mse_loss <cn_paddle_nn_functional_mse_loss>` ", "Computes the mean squared error"
" :ref:`paddle.nn.functional.nll_loss <cn_api_nn_functional_nll_loss>` ", "Computes the nll loss"
" :ref:`paddle.nn.functional.gaussian_nll_loss <n_api_paddle_nn_functional_gaussian_nll_loss>` ", "Computes the gaussiannll loss"

n_api_paddle_nn_functional_gaussian_nll_loss is missing a leading c, isn't it?

@@ -258,6 +258,7 @@ Loss layers
" :ref:`paddle.nn.MarginRankingLoss <cn_api_nn_loss_MarginRankingLoss>` ", "MarginRankingLoss layer"
" :ref:`paddle.nn.MSELoss <cn_api_paddle_nn_MSELoss>` ", "Mean squared error loss layer"
" :ref:`paddle.nn.NLLLoss <cn_api_nn_loss_NLLLoss>` ", "NLLLoss layer"
" :ref:`paddle.nn.GaussianNLLLoss <cn_api_paddle_nn_loss_GaussianNLLLoss>` ", "GaussianNLLLoss layer"

The label on the first line of the corresponding rst doc is cn_api_paddle_nn_GaussianNLLLoss, with no "loss" in the middle. Please fix it.

@sunzhongkai588

@Atlantisming The docs have no other issues, but this branch has a conflict that needs to be resolved. At first glance it's because another developer added a new loss API earlier and inserted a line into the overview, which happens to conflict with yours. Pulling the latest code and resolving it will do ~

@luotao1 luotao1 merged commit a63f288 into PaddlePaddle:develop Apr 13, 2023