1 change: 1 addition & 0 deletions .spelling
@@ -876,6 +876,7 @@ microservice
 microservices
 middleboxes
 middleware
+milliseconds
 minikube
 MirageDebug
 misconfiguration
Binary file added: content/en/docs/ops/deployment/ambient-mc-perf/ambient-mc-dataplane-existing.png
Binary file added: content/en/docs/ops/deployment/ambient-mc-perf/ambient-mc-dataplane-reconnect.png
30 changes: 30 additions & 0 deletions content/en/docs/ops/deployment/ambient-mc-perf/index.md
@@ -0,0 +1,30 @@
---
title: Ambient Multicluster Performance
description: Ambient Multicluster performance and scalability summary.
weight: 30
keywords:
- performance
- scalability
- scale
- multicluster
owner: istio/wg-environments-maintainers
test: n/a
---

Multicluster deployments with ambient mode enable you to offer truly globally resilient applications at scale with minimal overhead. In addition to its normal functions, the Istio control plane creates watches on all remote clusters to keep an up-to-date listing of what global services each cluster offers. The Istio data plane can route traffic to these remote global services, either as a part of normal traffic distribution, or specifically when the local service is unavailable.
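
Services are only shared across the mesh when they are explicitly marked as global. A minimal sketch, assuming the `istio.io/global` service label used by the ambient multicluster alpha (check the multicluster setup guide for the current API):

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  namespace: sample
  labels:
    # Assumption: this label marks the service as global, so the control
    # planes in remote clusters discover it and can route traffic to it.
    istio.io/global: "true"
spec:
  selector:
    app: helloworld
  ports:
  - name: http
    port: 5000
{{< /text >}}
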
## Control plane performance

As documented [here](/docs/ops/deployment/performance-and-scalability), the Istio control plane generally scales as the product of deployment changes, configuration changes, and the number of connected proxies. Ambient multicluster adds two new dimensions to the control plane scalability story: the number of remote clusters, and the number of remote services. Because the control plane is not programming proxies for remote clusters (assuming a multi-primary deployment topology), adding 10 remote services to the mesh has substantially lower impact on control plane performance than adding 10 local services.
Our multicluster control plane load test created 300 services with 4000 endpoints in each of 10 clusters, and added these clusters to the mesh one at a time. The approximate control plane impact of adding a remote cluster at this scale was **1% of a CPU core, and 180 MB of memory**. At this scale, it should be safe to grow well beyond 10 clusters in a mesh with a properly scaled control plane. Note that horizontally scaling the control plane will not help with multicluster scalability, as each control plane instance maintains a complete cache of remote services. Instead, we recommend modifying the resource requests and limits of the control plane to scale vertically to meet the needs of your multicluster mesh.
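
As a rough worked example using the numbers above, an istiod instance watching 9 remote peers at this scale would need on the order of 9 × 180 MB ≈ 1.6 GB of additional memory headroom. A sketch of vertically scaling istiod with the IstioOperator API follows; the resource values are illustrative, not recommendations:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        resources:
          # Illustrative values: size these from your own measurements,
          # adding headroom for each remote cluster's service cache.
          requests:
            cpu: "2"
            memory: 4Gi
          limits:
            memory: 8Gi
{{< /text >}}
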
## Data plane performance

When traffic is routed to a remote cluster, the originating data plane establishes an encrypted tunnel to the destination cluster's east/west gateway. It then establishes a secondary encrypted tunnel inside the first, which is terminated at the destination data plane. This use of inner and outer tunnels allows the data plane to securely communicate with the remote cluster without knowing the details of which pod IPs represent which services.
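
To make the layering concrete, the nesting looks roughly like this (a schematic sketch, not exact component names):

{{< text plain >}}
source data plane
   │  outer encrypted tunnel (terminates at the east/west gateway)
   ▼
destination cluster east/west gateway
   │  inner encrypted tunnel, carried inside the outer one
   │  (terminates at the destination data plane)
   ▼
destination data plane ──▶ destination pod
{{< /text >}}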

This double encryption does carry some overhead, however. The data plane load test measures the response latency of traffic between pods in the same cluster, versus those in two different clusters, to understand the impact of cross-cluster routing on latency. New connections are disproportionately affected: the current implementation establishes a fresh inner tunnel, with its own handshake, for every new connection to the remote cluster, on top of the extra hop through the east/west gateway. As you can see below, our initial connections observed an average of 2.2 milliseconds (346%) of additional latency, while requests using existing connections observed an increase of 0.13 milliseconds (72%); this implies same-cluster baselines of roughly 0.64 and 0.18 milliseconds respectively. These tests ran locally in kind, so the network path between clusters was effectively free. In practice, most multicluster traffic will cross availability zones or regions, and the observed overhead will be minimal compared to the overall transit latency between data centers.
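
To reproduce a similar comparison, a hedged sketch using [Fortio](https://fortio.org) follows; the target service and flag values are illustrative:

{{< text bash >}}
# Measure latency over reused connections (connection pool stays warm).
fortio load -qps 100 -t 60s -c 8 \
  http://helloworld.sample.svc.cluster.local:5000/hello

# Disable HTTP keep-alive to force a new connection, and with the
# current implementation a new inner tunnel, for every request.
fortio load -qps 100 -t 60s -c 8 -keepalive=false \
  http://helloworld.sample.svc.cluster.local:5000/hello
{{< /text >}}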

{{< image link="./ambient-mc-dataplane-reconnect.png" caption="request latency with reconnect" width="90%" >}}

{{< image link="./ambient-mc-dataplane-existing.png" caption="request latency without reconnect" width="90%" >}}