KEP-5607: Allow hostNetwork pods to use user namespaces #5608
The PR also adds a PRR approval file:

```yaml
kep-number: 5607
alpha:
  approver: "@wojtek-t"
```
# KEP-5607: Allow HostNetwork Pods to Use User Namespaces
<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
  - [Goals](#goals)
  - [Non-Goals](#non-goals)
- [Proposal](#proposal)
  - [User Stories (Optional)](#user-stories-optional)
    - [Story 1](#story-1)
  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
  - [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
  - [Test Plan](#test-plan)
    - [Prerequisite testing updates](#prerequisite-testing-updates)
    - [Unit tests](#unit-tests)
    - [Integration tests](#integration-tests)
    - [e2e tests](#e2e-tests)
  - [Graduation Criteria](#graduation-criteria)
    - [Alpha](#alpha)
    - [Beta](#beta)
    - [GA](#ga)
  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
  - [Version Skew Strategy](#version-skew-strategy)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
  - [Monitoring Requirements](#monitoring-requirements)
  - [Dependencies](#dependencies)
  - [Scalability](#scalability)
  - [Troubleshooting](#troubleshooting)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
<!-- /toc -->
## Release Signoff Checklist

<!--
**ACTION REQUIRED:** In order to merge code into a release, there must be an
issue in [kubernetes/enhancements] referencing this KEP and targeting a release
milestone **before the [Enhancement Freeze](https://git.k8s.io/sig-release/releases)
of the targeted release**.

For enhancements that make changes to code or processes/procedures in core
Kubernetes—i.e., [kubernetes/kubernetes], we require the following Release
Signoff checklist to be completed.

Check these off as they are completed for the Release Team to track. These
checklist items _must_ be updated for the enhancement to be released.
-->

Items marked with (R) are required *prior to targeting to a milestone / release*.

- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [ ] e2e Tests for all Beta API Operations (endpoints)
  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [ ] (R) Graduation criteria is in place
  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) within one minor version of promotion to GA
- [ ] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

<!--
**Note:** This checklist is iterative and should be reviewed and updated every time this enhancement is being considered for a milestone.
-->

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website
## Summary

This KEP proposes introducing a new feature gate to allow Pods to have both `hostNetwork` enabled and user namespaces enabled (by setting `hostUsers: false`).
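For illustration, a minimal Pod spec combining the two settings might look like the sketch below (the name and image are placeholders; today the kube-apiserver rejects this spec during validation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-hostnet-demo     # illustrative name
spec:
  hostNetwork: true             # share the node's network namespace
  hostUsers: false              # run the pod's containers in a user namespace
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```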
## Motivation

The primary motivation is to enhance the security of Kubernetes control plane components. Many control plane components, such as the `kube-apiserver` and `kube-controller-manager`, often run as static Pods and are configured with `hostNetwork: true` to bind to ports on the node or interact directly with the host's network stack.

Currently, a validation rule in the kube-apiserver prevents the combination of `hostNetwork: true` and `hostUsers: false`. This KEP aims to remove that barrier.
### Goals

* Introduce a new, separate alpha feature gate: `UserNamespacesHostNetworkSupport`.
* When this feature gate is enabled, modify the Pod validation logic to allow Pod specs where `spec.hostNetwork` is true and `spec.hostUsers` is false.
### Non-Goals

* Including this functionality as part of the `UserNamespacesSupport` feature gate. Since `UserNamespacesSupport` is nearing GA, it would be unwise to add new, unstable behavior with external dependencies to it.
## Proposal

We propose the introduction of a new feature gate named `UserNamespacesHostNetworkSupport`.

When this feature gate is disabled (the default state), the kube-apiserver will maintain the current validation behavior, rejecting any Pod spec that includes both `spec.hostNetwork: true` and `spec.hostUsers: false`.

When the `UserNamespacesHostNetworkSupport` feature gate is enabled, we will relax this validation check. The kube-apiserver will accept such a Pod spec and pass it on to the kubelet. At this point, the responsibility for successfully creating and running the Pod shifts to the container runtime. If the container runtime (e.g., containerd) or the low-level OCI runtime (e.g., runc) does not support this combination, the pod will remain stuck in the `ContainerCreating` state and surface an error event, which is the expected behavior.
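As a sketch of enablement (the `--feature-gates` flag is the standard Kubernetes mechanism; file paths and surrounding manifest content vary by deployment):

```yaml
# Fragment of a kube-apiserver static Pod manifest (illustrative)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --feature-gates=UserNamespacesHostNetworkSupport=true
    # ... other existing flags unchanged ...
```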
This change will primarily involve modifying the Pod validation function in `pkg/apis/core/validation/validation.go` to account for the state of the new feature gate.
### User Stories (Optional)

#### Story 1

As a cluster administrator, I want to enable user namespaces for my control plane static Pods (e.g., kube-apiserver, kube-controller-manager) to follow the principle of least privilege and reduce the attack surface. These Pods need to use `hostNetwork` to interact correctly with the cluster network. By enabling the new feature gate, I can add a critical layer of security isolation to these vital components without changing their networking model.
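For this story, adoption could be as small as a one-line change to an existing static Pod manifest, a sketch assuming the feature gate is enabled and the node's container runtime supports the combination:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment, illustrative)
spec:
  hostNetwork: true   # unchanged: the component keeps its networking model
  hostUsers: false    # new: run the component inside a user namespace
  # ... rest of the manifest unchanged ...
```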
### Notes/Constraints/Caveats (Optional)

### Risks and Mitigations
If either the container runtime (e.g., containerd, CRI-O) or the low-level OCI runtime (e.g., runc, crun) does not support this feature, the container will fail to be created. To mitigate this issue, we will keep this feature in the alpha stage until mainstream runtimes at both layers support it, before promoting it to beta.

Users might upgrade the container runtime on only some nodes at first, so pods using this combination could still be scheduled onto nodes that do not support it. In such cases, users can leverage [Node Declared Features](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/5328-node-declared-features) to avoid this problem.
## Design Details

The core design change is small: in the apiserver's Pod validation logic, locate the check that forbids the `hostNetwork: true` and `hostUsers: false` combination, and wrap it in a conditional so that it is enforced only when the `UserNamespacesHostNetworkSupport` feature gate is disabled.
```go
func validateHostUsers(spec *core.PodSpec, fldPath *field.Path, opts PodValidationOptions) field.ErrorList {
	allErrs := field.ErrorList{}

	// ... existing validations ...

	// Note we already validated above spec.SecurityContext is not nil.
	// The hostNetwork restriction now applies only while the feature gate is disabled.
	if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesHostNetworkSupport) && spec.SecurityContext.HostNetwork {
		allErrs = append(allErrs, field.Forbidden(fldPath.Child("hostNetwork"), "when `hostUsers` is false"))
	}

	// ... existing validations ...

	return allErrs
}
```

### Test Plan

[ ] I/we understand the owners of the involved components may require updates to
existing tests to make this code solid enough prior to committing the changes necessary
to implement this enhancement.
##### Prerequisite testing updates

##### Unit tests

- `pkg/apis/core/validation`: `2025-10-03` - `85.1%`
##### Integration tests

##### e2e tests

- Add e2e tests to ensure that pods with the combination of `hostNetwork: true` and `hostUsers: false` can run properly.
### Graduation Criteria

#### Alpha

- The `UserNamespacesHostNetworkSupport` feature gate is implemented and disabled by default.
#### Beta

- Mainstream container runtimes and low-level OCI runtimes (e.g., containerd/CRI-O and runc/crun) have released generally available versions that support the concurrent use of `hostNetwork` and user namespaces.
- Add e2e tests to ensure feature availability.
- Document the limitations of combining user namespaces and `hostNetwork` (e.g., `CAP_NET_RAW`, `CAP_NET_ADMIN`, and `CAP_NET_BIND_SERVICE` remain ineffective over the host network stack, since the host network namespace is owned by the initial user namespace).
#### GA

- The feature has been stable in Beta for at least 2 Kubernetes releases.
- Multiple major container runtimes support the feature.
### Upgrade / Downgrade Strategy

Upgrade: After upgrading to a version that supports this KEP, the `UserNamespacesHostNetworkSupport` feature gate can be enabled at any time.

Downgrade: If downgraded to a version that does not support this KEP, kube-apiserver will revert to strict validation. Pods already running with this configuration will remain unaffected and can still be updated, but new Pod creation requests attempting to use this configuration will be rejected.
### Version Skew Strategy

A newer kube-apiserver with this feature enabled will accept such a Pod.

An older kubelet will still receive the Pod definition from the kube-apiserver. It will attempt to create the Pod, and success or failure will depend on the version of the container runtime it is using.
## Production Readiness Review Questionnaire

### Feature Enablement and Rollback

###### How can this feature be enabled / disabled in a live cluster?

- [x] Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name: `UserNamespacesHostNetworkSupport`
  - Components depending on the feature gate: `kube-apiserver`
###### Does enabling the feature change any default behavior?

No. The behavior only changes when a user explicitly sets both `hostNetwork: true` and `hostUsers: false` in a Pod spec. The behavior of all existing Pods is unaffected.
###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?

Yes. It can be disabled by setting the feature gate to false and restarting the kube-apiserver. This restores the old validation logic. For a full rollback, Pods using this feature must be explicitly terminated; if they are not, they will run in "disabled" mode, meaning Pods already running in the cluster with this combination can continue running and can also be updated.
###### What happens if we reenable the feature if it was previously rolled back?

The kube-apiserver will once again begin to accept the combination of `hostNetwork: true` and `hostUsers: false`. This is a stateless change, and reenabling is safe.
###### Are there any tests for feature enablement/disablement?

During the alpha stage, unit tests exercising the validation code with the feature gate both enabled and disabled will be added. Manual testing will also be conducted during the beta stage, and the testing process will be documented here.
### Rollout, Upgrade and Rollback Planning

###### How can a rollout or rollback fail? Can it impact already running workloads?

The [Version Skew Strategy](#version-skew-strategy) section covers this point.

###### What specific metrics should inform a rollback?

N/A

###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?

This will be validated via manual testing.

###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?

No.

### Monitoring Requirements

<!--
This section must be completed when targeting beta to a release.

For GA, this section is required: approvers should be able to confirm the
previous answers based on experience in the field.
-->
###### How can an operator determine if the feature is in use by workloads?

<!--
Ideally, this should be a metric. Operations against the Kubernetes API (e.g.,
checking if there are objects with field X set) may be a last resort. Avoid
logs or events for this purpose.
-->

###### How can someone using this feature know that it is working for their instance?

<!--
For instance, if this is a pod-related feature, it should be possible to determine if the feature is functioning properly
for each individual pod.
Pick one more of these and delete the rest.
Please describe all items visible to end users below with sufficient detail so that they can verify correct enablement
and operation of this feature.
Recall that end users cannot usually observe component logs or access metrics.
-->

- [ ] Events
  - Event Reason:
- [ ] API .status
  - Condition name:
  - Other field:
- [ ] Other (treat as last resort)
  - Details:
###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?

<!--
This is your opportunity to define what "normal" quality of service looks like
for a feature.

It's impossible to provide comprehensive guidance, but at the very
high level (needs more precise definitions) those may be things like:
- per-day percentage of API calls finishing with 5XX errors <= 1%
- 99% percentile over day of absolute value from (job creation time minus expected
  job creation time) for cron job <= 10%
- 99.9% of /health requests per day finish with 200 code

These goals will help you determine what you need to measure (SLIs) in the next
question.
-->

###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?

<!--
Pick one more of these and delete the rest.
-->

- [ ] Metrics
  - Metric name:
  - [Optional] Aggregation method:
  - Components exposing the metric:
- [ ] Other (treat as last resort)
  - Details:

###### Are there any missing metrics that would be useful to have to improve observability of this feature?

<!--
Describe the metrics themselves and the reasons why they weren't added (e.g., cost,
implementation difficulties, etc.).
-->
### Dependencies

###### Does this feature depend on any specific services running in the cluster?

No.
### Scalability

###### Will enabling / using this feature result in any new API calls?

No.

###### Will enabling / using this feature result in introducing new API types?

No.

###### Will enabling / using this feature result in any new calls to the cloud provider?

No.

###### Will enabling / using this feature result in increasing size or count of the existing API objects?

No.

###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?

No.

###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?

No.

###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?

No.
### Troubleshooting

###### How does this feature react if the API server and/or etcd is unavailable?

No impact on running workloads.

###### What are other known failure modes?

If the container runtime or low-level OCI runtime (e.g., containerd or runc) does not support the combination of `hostNetwork` and user namespaces, the pod will remain stuck in the `ContainerCreating` state and fail to be created.

###### What steps should be taken if SLOs are not being met to determine the problem?

N/A
## Implementation History

* 2025-10-03: Initial proposal

## Drawbacks

There are no known drawbacks at this time.
## Alternatives

Add this feature to the existing `UserNamespacesSupport` feature gate:

* This was ruled out because the `UserNamespacesSupport` feature is approaching GA, and its functionality should be stable. Adding new, externally dependent, and immature behavior to a nearly-GA feature would introduce unnecessary risk and delays. Keeping the two feature gates separate is cleaner and safer.

Do not implement this feature:

* Users can use `hostPort` as an alternative to `hostNetwork`, but this may disrupt existing user environments, as certain privileged containers require direct interaction with the host network stack. Moreover, `hostPort` requires a pre-configured CNI with port-mapping support; otherwise, the pod will fail to start. This limitation is precisely why Kubernetes control plane components continue to rely on `hostNetwork`. A `hostPort` sketch follows below.
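An illustrative `hostPort` spec (placeholder name, image, and port; this relies on the CNI port-mapping capability rather than joining the host network namespace, so it already works with user namespaces without this KEP):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo           # illustrative name
spec:
  hostUsers: false              # user namespaces work here today
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    ports:
    - containerPort: 8080       # port inside the pod
      hostPort: 8080            # port exposed on the node via CNI port mapping
```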
## Infrastructure Needed (Optional)

No new infrastructure needed.