From c3e05d3d7e287c3fac72e6795573508aab603b8a Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Tue, 19 Aug 2025 11:01:32 +0200
Subject: [PATCH 1/3] feat(ins): add slo

---
 pages/instances/reference-content/instances-datasheet.mdx | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/pages/instances/reference-content/instances-datasheet.mdx b/pages/instances/reference-content/instances-datasheet.mdx
index 02b59775f4..1eba413f22 100644
--- a/pages/instances/reference-content/instances-datasheet.mdx
+++ b/pages/instances/reference-content/instances-datasheet.mdx
@@ -28,7 +28,7 @@ This datasheet provides a concise overview of the performance, technical feature
 | Resources | Shared vCPUs |
 | Sizing | 1 vCPU, 1 GiB RAM |
 | vCPU:RAM ratio | 1:1 |
-| SLA | None |
+| SLO | None |
 
 ## Development and General Purpose Instances
 
@@ -45,6 +45,7 @@ See below the technical specifications of Development and General Purpose Instan
 | Resources | Shared vCPUs |
 | Sizing | From 2 to 4 vCPUs<br />From 2 to 12 GiB RAM |
 | vCPU:RAM ratio | Various<br />(1:1, 1:2, 1:3) |
+| SLO | None |
 
 ## PLAY2 and PRO2 Instances
 
@@ -61,6 +62,7 @@ See below the technical specifications of PLAY2 and PRO2 Instances:
 | Resources | Shared vCPUs |
 | Sizing | From 1 to 32 vCPUs<br />From 2 to 128 GiB RAM |
 | vCPU:RAM ratio | Various<br />(1:2, 1:4) |
+| SLO | None |
 
 ## COP-ARM Instances
 
@@ -77,6 +79,7 @@ The table below displays the technical specifications of COP-ARM Instances:
 | Resources | Shared vCPUs |
 | Sizing | From 2 to 128 vCPUs<br />From 8 to 128 GiB RAM |
 | vCPU:RAM ratio | 1:4 |
+| SLO | None |
 
 ## Enterprise Instances
 
@@ -94,6 +97,7 @@ See below the technical specifications of Enterprise Instances:
 | Security feature | Secure Encrypted Virtualization |
 | Sizing | From 2 to 96 vCPUs<br />From 8 GiB to 384 GiB RAM |
 | vCPU:RAM ratio | 1:4 |
+| SLO | None |
 
 \* Instances with dedicated vCPU do not share their compute resources with other Instances (1 vCPU = 1 CPU thread dedicated to that Instance). This type of Instance is particularly recommended for running production-grade compute-intensive applications.
 
@@ -112,6 +116,7 @@ See below the technical specifications of Production-Optimized Instances:
 | Resources | Dedicated vCPUs* |
 | Sizing | From 2 to 96 vCPUs<br />From 8 GiB to 384 GiB RAM |
 | vCPU:RAM ratio | 1:4 |
+| SLO | 99.5% availability |
 
 \* Instances with dedicated vCPU do not share their compute resources with other Instances (1 vCPU = 1 CPU thread dedicated to that Instance). This type of Instance is particularly recommended for running production-grade compute-intensive applications.
 
@@ -134,5 +139,6 @@ See below the technical specifications of Workload-Optimized Instances:
 | Security feature | Secure Encrypted Virtualization |
 | Sizing | From 2 to 64 dedicated vCPUs<br />From 4 GiB to 512 GiB RAM |
 | vCPU:RAM ratio | 1:8 (POP2-HM), 1:2 (POP2-HC and POP2-HN) |
+| SLO | 99.5% availability |
 
 \* Instances with dedicated vCPU do not share their compute resources with other Instances (1 vCPU = 1 CPU thread dedicated to that Instance). This type of Instance is particularly recommended for running production-grade compute-intensive applications.
\ No newline at end of file

From ae68dc3a977887e27232270849e5a9349b78a320 Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Tue, 19 Aug 2025 11:04:33 +0200
Subject: [PATCH 2/3] feat(gpu): add gpu slo

---
 pages/gpu/reference-content/choosing-gpu-instance-type.mdx | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
index f21e31b600..8fab3df67c 100644
--- a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
+++ b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
@@ -108,6 +108,10 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Use cases | GenAI (Image/Video) | GenAI (Image/Video) | 7B Text-to-image model fine-tuning / Inference | 70B text-to-image model fine-tuning / Inference |
 | What they are not made for | | | | |
 
+
+The service level objective (SLO) for all GPU Instance types is 99.5% availability.
+
+
 ### Scaleway AI Supercomputer
 | | **[Custom build clusters](https://www.scaleway.com/en/ai-supercomputers/)** (2DGX H100, 16 H100 GPUs) | **[Custom build clusters](https://www.scaleway.com/en/ai-supercomputers/)** (127 DGX H100, 1016 H100 GPUs) |
 |---------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|

From 47991f6b314c281238cb833061962d4906dc39ae Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Tue, 26 Aug 2025 15:18:07 +0200
Subject: [PATCH 3/3] Apply suggestions from code review

---
 pages/gpu/reference-content/choosing-gpu-instance-type.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
index 8fab3df67c..ccb7ee0ca8 100644
--- a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
+++ b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
@@ -109,7 +109,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | What they are not made for | | | | |
 
 
-The service level objective (SLO) for all GPU Instance types is 99.5% availability.
+The service level objective (SLO) for all GPU Instance types (except H100-SXM) is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/)
 
 
 ### Scaleway AI Supercomputer
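
For context on the 99.5% figure these patches introduce: an availability SLO maps directly to an allowed-downtime budget per billing period. A minimal sketch of that conversion, assuming a 30-day month (the function name is ours for illustration, not part of the Scaleway docs or API):

```python
def allowed_downtime_minutes(slo_percent: float, days: int = 30) -> float:
    """Allowed downtime (in minutes) implied by an availability SLO over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)

# A 99.5% SLO leaves roughly 216 minutes (~3.6 hours) of downtime per 30-day month.
budget = allowed_downtime_minutes(99.5)
print(f"{budget:.0f} minutes")  # → 216 minutes
```

This is why the datasheet distinguishes "None" (no availability objective) from "99.5% availability": only the latter commits to a bounded downtime budget.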