Commit 7f05316

Commit message: added comments
1 parent ce31bbd commit 7f05316

File tree

1 file changed: +3 −1 lines changed

  • py/torch_tensorrt/dynamo/conversion/impl/normalization


py/torch_tensorrt/dynamo/conversion/impl/normalization/ops.py

Lines changed: 3 additions & 1 deletion
@@ -48,7 +48,9 @@ def batch_norm(
 
     # Save the original output shape for later use
     output_shape = input.shape
-
+    # We perform constant folding for batch norm when the weight, bias, running_mean, and running_var are all tensors.
+    # Batch norm operation can be fused into a single layer, which is more efficient than the original implementation.
+    # In this way, the batch norm layer will be fused with the Convolution layer and get a performance boost.
     if all(
         [
             isinstance(weight, torch.Tensor),
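The new comments describe the constant folding being applied: when weight, bias, running_mean, and running_var are all plain tensors, an eval-mode batch norm collapses into a single affine transform y = scale * x + shift, which a builder such as TensorRT can then fuse with a preceding convolution. A minimal sketch of that folding (the helper name `fold_batch_norm` and the default epsilon are illustrative, not part of this commit):

```python
import torch

def fold_batch_norm(weight, bias, running_mean, running_var, eps=1e-5):
    # Illustrative helper, not from the commit: collapse eval-mode batch norm
    #   y = (x - running_mean) / sqrt(running_var + eps) * weight + bias
    # into constants scale and shift so that y = scale * x + shift.
    scale = weight / torch.sqrt(running_var + eps)
    shift = bias - running_mean * scale
    return scale, shift

# Applying the folded constants matches torch's batch norm in eval mode.
x = torch.randn(2, 3, 4, 4)
weight = torch.rand(3) + 0.5
bias = torch.randn(3)
running_mean = torch.randn(3)
running_var = torch.rand(3) + 0.5

scale, shift = fold_batch_norm(weight, bias, running_mean, running_var)
folded = x * scale.view(1, -1, 1, 1) + shift.view(1, -1, 1, 1)
reference = torch.nn.functional.batch_norm(
    x, running_mean, running_var, weight, bias, training=False, eps=1e-5
)
assert torch.allclose(folded, reference, atol=1e-5)
```

Because scale and shift are per-channel constants, they can be absorbed directly into the weights and bias of an adjacent convolution, which is the performance boost the comments refer to.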
