[fluid_ops] collective_transpiler.py replace c_allreduce_sum #70774

Closed · wants to merge 31 commits

Changes from all commits (31 commits):
fb07465  Fix (co63oc, Jan 10, 2025)
1f80f60  Merge branch 'develop' into fix238 (co63oc, Jan 11, 2025)
5e3deee  Fix (co63oc, Jan 13, 2025)
b1fec64  Fix (co63oc, Jan 13, 2025)
e1e4797  Merge branch 'develop' into fix238 (co63oc, Mar 20, 2025)
d9eca5f  Merge branch 'develop' into fix238 (co63oc, Mar 26, 2025)
0edb43e  [XPU] test: add performance and load test for xpu async load (#71869) (tizhou86, Mar 25, 2025)
55ff10f  [XPU] support flashmask_attention (#71573) (runzhech, Mar 25, 2025)
651a59b  [Fix] use `py_compile` for pyi file syntax check (#71872) (megemini, Mar 25, 2025)
f763d3f  [fluid_ops] clean send_and_recv (#71860) (co63oc, Mar 26, 2025)
5985ccf  Revert "[CINN] Add SimplifyWithObviousGreaterThan (#71341)" (#71902) (zyfncg, Mar 26, 2025)
37151b9  use slice to implement r2s of split_axis (#71868) (liufengwei0103, Mar 26, 2025)
163991c  revert PR71762 (#71886) (feixi21, Mar 26, 2025)
ff5a8aa  fix clip_grad bug (#71895) (wanghuancoder, Mar 26, 2025)
3ba0095  [Dy2St][CINN] Explicit use phi backend in more CINN uts (#71896) (SigureMo, Mar 26, 2025)
164abc5  [XPU] support print runtime error log for bkcl (#71883) (cqulilujia, Mar 26, 2025)
5028d7c  maximum minimum support 0-size (#71829) (wanghuancoder, Mar 26, 2025)
d034f9c  [CINN] Enhance HorizontalFusion with shared input values (#71811) (huangjiyi, Mar 26, 2025)
05e7274  [Infra] Auto install `pybind11-stubgen` in cmake stage (#71911) (SigureMo, Mar 26, 2025)
c8f49c8  skip pd_op.nonozero for cache checking (#71870) (ooooo-create, Mar 26, 2025)
b345d08  Bind _record_stream for paddle.Tensor (#71917) (zhangbo9674, Mar 27, 2025)
c05014f  [CINN] add trick for max(xxx,0) (#71926) (ooooo-create, Mar 27, 2025)
6d5255e  arg_min_max support big tensor (#71916) (wanghuancoder, Mar 27, 2025)
51ad5f5  [SOT] Use same list when create dynamic axes meta to ensure `Symbolic… (SigureMo, Mar 27, 2025)
d190026  Fix bugs for intranode EP kernels (#71745) (Hongqing-work, Mar 27, 2025)
9ab279d  [XPU] enable ut for 2 cards (#71564) (houj04, Mar 27, 2025)
6e98bbb  [clean old comm] Remove dynamic_static_unified_comm (co63oc, Mar 27, 2025)
52b363f  add enable_nccl_dynamic_check/enable_bkcl_dynamic_check gflags (#71921) (dynamicheart, Mar 27, 2025)
9189898  [fluid_ops] Modify c_allreduce_sum in clip.py (#71918) (co63oc, Mar 27, 2025)
abfe639  [fluid_ops] clean pull_gpups_sparse (#71793) (co63oc, Mar 27, 2025)
0b45b2d  Merge branch 'develop' into fix238 (co63oc, Mar 27, 2025)
@@ -670,9 +670,11 @@ def is_data_parallel_scale_op(op):
 
 
 def is_data_parallel_reduce_op(op):
-    is_allreduce_op = op.type in [
-        "c_allreduce_sum",
-        "c_allreduce_avg",
+    is_allreduce_op = op.type == "all_reduce" and op.desc.attr(
+        "reduce_type"
+    ) in [
+        dist.ReduceOp.SUM,
+        dist.ReduceOp.AVG,
     ]
     is_reduce_op = op.type == "reduce" and op.desc.attr("reduce_type") in [
         dist.ReduceOp.SUM,
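In short, the check no longer looks for the legacy `c_allreduce_sum` / `c_allreduce_avg` op types but for the generic `all_reduce` op combined with its `reduce_type` attribute. Below is a minimal, self-contained sketch of that matching logic, assuming `paddle.distributed.ReduceOp` provides `SUM` and `AVG` (as used in the diff above); the helper name and the `SimpleNamespace` stand-in for an operator are illustrative only and not part of the PR.

```python
# Sketch of the new matching logic only; not the full
# is_data_parallel_reduce_op from the PR.
from types import SimpleNamespace

import paddle.distributed as dist


def matches_data_parallel_allreduce(op):
    """Return True if `op` is the generic all_reduce op doing SUM or AVG."""
    # Old code: op.type in ["c_allreduce_sum", "c_allreduce_avg"]
    # New code: one "all_reduce" op, disambiguated by its reduce_type attr.
    return op.type == "all_reduce" and op.desc.attr("reduce_type") in [
        dist.ReduceOp.SUM,
        dist.ReduceOp.AVG,
    ]


# Illustrative stand-in for a framework operator (hypothetical, just to
# exercise the predicate without building a real Program):
fake_desc = SimpleNamespace(attr=lambda name: dist.ReduceOp.SUM)
fake_op = SimpleNamespace(type="all_reduce", desc=fake_desc)
print(matches_data_parallel_allreduce(fake_op))  # True
```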