# Karpenter

[Karpenter](https://karpenter.sh) is an open-source node lifecycle management project built for Kubernetes.
Adding Karpenter to a Kubernetes cluster can dramatically improve the efficiency and cost of running workloads on that cluster.
On AWS, kOps supports managing an InstanceGroup with either Karpenter or an AWS Auto Scaling Group (ASG).

## Prerequisites

Managed Karpenter requires kOps 1.34+ and that [IAM Roles for Service Accounts (IRSA)](/cluster_spec#service-account-issuer-discovery-and-aws-iam-roles-for-service-accounts-irsa) be enabled for the cluster.

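For reference, IRSA is enabled through the cluster spec's `serviceAccountIssuerDiscovery` field. A minimal sketch, assuming an S3 discovery store (see the linked documentation for the full set of options):

```yaml
spec:
  serviceAccountIssuerDiscovery:
    # Publicly readable store for the OIDC discovery documents.
    discoveryStore: s3://my-discovery-store
    # Create an AWS OIDC identity provider for the cluster issuer.
    enableAWSOIDCProvider: true
```
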
## Installing

### New clusters

On new clusters, add the `--instance-manager=karpenter` flag when creating the cluster:

```sh
export KOPS_STATE_STORE="s3://my-state-store"
export KOPS_DISCOVERY_STORE="s3://my-discovery-store"
export NAME="my-cluster.example.com"
export ZONES="eu-central-1a"

kops create cluster --name ${NAME} \
  --cloud=aws \
  --instance-manager=karpenter \
  --discovery-store=${KOPS_DISCOVERY_STORE} \
  --zones=${ZONES} \
  --yes

kops validate cluster --name ${NAME} --wait=10m

kops export kubeconfig --name ${NAME} --admin
```

### Existing clusters

On existing clusters, the Karpenter addon must be enabled in the cluster spec:

```yaml
spec:
  karpenter:
    enabled: true
```

To create a Karpenter InstanceGroup, set the following in its InstanceGroup spec:

```yaml
spec:
  manager: Karpenter
```

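Put together, a Karpenter-managed InstanceGroup might look like the following. This is an illustrative manifest, not taken from the docs above; the cluster name, subnet, and machine type are placeholders:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: my-cluster.example.com
  name: nodes
spec:
  # Hand instance provisioning for this group to Karpenter instead of an ASG.
  manager: Karpenter
  role: Node
  machineType: t3.medium
  subnets:
    - eu-central-1a
```
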
### EC2NodeClass and NodePool

Karpenter needs at least one `EC2NodeClass` and one `NodePool` before it can provision Nodes. The following example reuses the kOps bootstrap script stored in the state store as the instance user data:

```sh
USER_DATA=$(aws s3 cp ${KOPS_STATE_STORE}/${NAME}/igconfig/node/nodes/nodeupscript.sh -)
# Indent continuation lines so the script nests under the "userData: |" block below.
USER_DATA=${USER_DATA//$'\n'/$'\n    '}

kubectl apply -f - <<YAML
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: Custom
  amiSelectorTerms:
    - ssmParameter: /aws/service/canonical/ubuntu/server/24.04/stable/current/amd64/hvm/ebs-gp3/ami-id
    - ssmParameter: /aws/service/canonical/ubuntu/server/24.04/stable/current/arm64/hvm/ebs-gp3/ami-id
  associatePublicIPAddress: true
  tags:
    KubernetesCluster: ${NAME}
    kops.k8s.io/instancegroup: nodes
    k8s.io/role/node: "1"
  subnetSelectorTerms:
    - tags:
        KubernetesCluster: ${NAME}
  securityGroupSelectorTerms:
    - tags:
        KubernetesCluster: ${NAME}
        Name: nodes.${NAME}
  instanceProfile: nodes.${NAME}
  userData: |
    ${USER_DATA}
YAML

kubectl apply -f - <<YAML
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
YAML
```

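The `USER_DATA` pattern substitution above is easy to misread. A standalone sketch of the same trick, using a dummy script instead of the real bootstrap script: bash replaces every newline with a newline plus four spaces, so a multi-line script lines up under a YAML block scalar (`userData: |`).

```shell
# Dummy multi-line script standing in for the real nodeup bootstrap script.
script=$'#!/bin/bash\necho one\necho two'

# Replace each embedded newline with a newline plus four spaces, then
# indent the first line by hand; every line now starts with four spaces.
indented="    ${script//$'\n'/$'\n    '}"

printf '%s\n' "$indented"
```

If the indentation is omitted, the second and later lines of the script break out of the YAML block scalar and the manifest fails to parse.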
## Karpenter-managed InstanceGroups

A Karpenter-managed InstanceGroup controls the bootstrap script for the Nodes that Karpenter provisions. kOps ensures that the correct AWS security groups, subnets, and permissions are in place.
`EC2NodeClass` and `NodePool` objects must be created by the cluster operator.

## Known limitations

* **Upgrade is not supported** from the previous version of managed Karpenter.
* Control plane nodes must be provisioned with an ASG, not Karpenter.
* All `EC2NodeClass` objects must have `spec.amiFamily` set to `Custom`.
* `spec.instanceStorePolicy` configuration is not supported in `EC2NodeClass`.
* `spec.kubelet`, `spec.taints`, and `spec.labels` configuration is not supported in `EC2NodeClass`, but they can be configured in the `Cluster` or `InstanceGroup` spec.
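
For example, taints and node labels that would otherwise go into an `EC2NodeClass` can be set on the InstanceGroup instead. A sketch using standard InstanceGroup fields; the label and taint values are placeholders:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: my-cluster.example.com
  name: nodes
spec:
  manager: Karpenter
  # Applied to Nodes via the kOps bootstrap script rather than the EC2NodeClass.
  nodeLabels:
    workload: batch
  taints:
    - dedicated=batch:NoSchedule
```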