
Fix global gather and global scatter operators #36517


Merged · 2 commits · Oct 20, 2021
paddle/fluid/operators/collective/global_scatter_op.cu.cc (4 additions, 4 deletions)

@@ -47,8 +47,8 @@ class GlobalScatterOpCUDAKernel : public framework::OpKernel<T> {
     if (platform::is_cpu_place(local_count->place())) {
       cpu_local_count_data = local_count->data<int64_t>();
     } else {
-      framework::TensorCopy(*local_count, platform::CPUPlace(),
-                            &cpu_local_count);
+      framework::TensorCopySync(*local_count, platform::CPUPlace(),
+                                &cpu_local_count);
       cpu_local_count_data = cpu_local_count.data<int64_t>();
     }
     auto global_count_len = 0;
@@ -57,8 +57,8 @@ class GlobalScatterOpCUDAKernel : public framework::OpKernel<T> {
       cpu_global_count_data = global_count->data<int64_t>();
       global_count_len = global_count->numel();
     } else {
-      framework::TensorCopy(*global_count, platform::CPUPlace(),
-                            &cpu_global_count);
+      framework::TensorCopySync(*global_count, platform::CPUPlace(),
+                                &cpu_global_count);
       cpu_global_count_data = cpu_global_count.data<int64_t>();
       global_count_len = cpu_global_count.numel();
     }
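Why the change (inferred from the diff; the PR description is not shown on this page): framework::TensorCopy enqueues the device-to-host copy asynchronously on a stream, but the kernel immediately dereferences cpu_local_count_data and cpu_global_count_data on the host to size the subsequent collective send/recv calls, so it could read the buffers before the copy has landed. framework::TensorCopySync blocks until the copy completes, guaranteeing valid counts. Below is a minimal Python sketch of the same read-after-copy requirement using only the public Paddle API; it is illustrative, not the kernel's actual code path.

```python
import paddle

# Stand-in for the kernel's count tensor (values are illustrative).
local_count = paddle.to_tensor([2, 1, 1, 1], dtype="int64")  # may live on the GPU

# Tensor.numpy() performs a blocking device-to-host copy, so the values are
# fully materialized before the host touches them -- the same guarantee
# TensorCopySync provides inside the CUDA kernel.
host_counts = local_count.numpy()
send_sizes = [int(c) for c in host_counts]  # host-side reads are now safe
print(send_sizes)  # [2, 1, 1, 1]
```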
python/paddle/distributed/utils.py (7 additions, 13 deletions)

@@ -65,14 +65,11 @@ def global_scatter(x,
         to global_count.
 
     Args:
-        x (Tensor): Tensor. Every element in the list must be a Tensor whose data type
-            should be float16, float32, float64, int32 or int64.
+        x (Tensor): Tensor. The tensor data type should be float16, float32, float64, int32 or int64.
         local_count (Tensor): Tensor which have n_expert * world_size elements that indicates
-            how many data needed to be sent. Every element in the list must be a Tensor whose
-            data type should be int64.
+            how many data needed to be sent. The tensor data type should be int64.
         global_count (Tensor): Tensor which have n_expert * world_size elements that indicates
-            how many data needed to be received. Every element in the list must be a Tensor whose
-            data type should be int64.
+            how many data needed to be received. The tensor data type should be int64.
         group (Group, optional): The group instance return by new_group or None for global default group. Default: None.
         use_calc_stream (bool, optional): Wether to use calculation stream (True) or communication stream. Default: True.
 
@@ -161,19 +158,16 @@ def global_gather(x,
         to global_count.
 
     Args:
-        x (Tensor): Tensor. Every element in the list must be a Tensor whose data type
-            should be float16, float32, float64, int32 or int64.
+        x (Tensor): Tensor. Tensor whose data type should be float16, float32, float64, int32 or int64.
         local_count (Tensor): Tensor which have n_expert * world_size elements that indicates
-            how many data needed to be received. Every element in the list must be a Tensor whose
-            data type should be int64.
+            how many data needed to be received. Tensor data type should be int64.
         global_count (Tensor): Tensor which have n_expert * world_size elements that indicates
-            how many data needed to be sent. Every element in the list must be a Tensor whose
-            data type should be int64.
+            how many data needed to be sent. Tensor data type should be int64.
         group (Group, optional): The group instance return by new_group or None for global default group. Default: None.
         use_calc_stream (bool, optional): Wether to use calculation stream (True) or communication stream. Default: True.
 
     Returns:
-        None.
+        out (Tensor): The data received from all experts.
 
     Examples:
         .. code-block:: python
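For reference, a hedged usage sketch of the two APIs with the updated docstrings. It assumes two ranks and two experts; the count values, the (expert, rank) count layout, and the launch command are illustrative assumptions rather than details taken from this PR.

```python
# Launch with, e.g.: python -m paddle.distributed.launch --gpus=0,1 demo.py
import paddle
from paddle.distributed import init_parallel_env
from paddle.distributed.utils import global_scatter, global_gather

init_parallel_env()

# Two rows of features on every rank (values are illustrative).
x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]], dtype="float32")

# Counts assumed indexed as [expert * world_size + rank]: [1, 1, 0, 0] sends
# one row to expert 0 on each rank and nothing to expert 1. The symmetric
# choice keeps every rank's send and recv totals consistent.
local_count = paddle.to_tensor([1, 1, 0, 0], dtype="int64")
global_count = paddle.to_tensor([1, 1, 0, 0], dtype="int64")

scattered = global_scatter(x, local_count, global_count)  # rows routed to experts
# global_gather reverses the exchange; per this PR its docstring now documents
# the returned tensor instead of None.
gathered = global_gather(scattered, local_count, global_count)
print(gathered.shape)  # expected: [2, 2] on each rank
```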