
Kubernetes 1.36 and OpenShift 4.22: What Platform Engineers Need to Know


The Kubernetes 1.36 code freeze landed on March 18, and the GA release is scheduled for April 22. That makes this a good moment to look at what is coming before it drops. OpenShift 4.22 is currently at EC3 with GA expected in the coming months, so both releases are close enough to plan for but not yet in production for most teams. I have used Kubernetes almost daily for the last eight years, in client environments and in my homelab running on a Turing Pi, and I work with OpenShift professionally, currently at the Schiphol Group. That context shapes how I look at these releases, and I want to share what I think actually matters for platform engineering work.

The Release Timeline

Before digging into features, there is one important thing to understand about how Kubernetes and OpenShift versioning relate to each other. OpenShift 4.22 EC3 ships with Kubernetes 1.34.2. That means the Kubernetes 1.36 features covered here will not land in OpenShift 4.22 at GA. Red Hat targets roughly one minor Kubernetes version per OCP minor release, so 1.36 features will most likely arrive in OCP 4.23 or 4.24, sometime in late 2026.

This is not a criticism of OpenShift. The integration work, additional hardening, and stability guarantees that come with an enterprise Kubernetes distribution take time. It is just worth being clear about so you do not build a roadmap that assumes 1.36 features in OCP 4.22. Plan for them landing in OCP in late 2026 if you are on the OpenShift track.

What’s Graduating to Stable in Kubernetes 1.36

User Namespaces in Pods

This one has been in progress for a while and it is finally graduating to stable in 1.36. User Namespaces allow a pod to map container UIDs to unprivileged host UIDs, so a process running as root inside a container is not actually root on the node. For multi-tenant clusters or any environment where you need stronger workload isolation without reaching for a full sandbox runtime like gVisor, this is a meaningful improvement.

Enabling it is straightforward:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  hostUsers: false
  containers:
    - name: app
      image: my-app:latest

Setting hostUsers: false tells Kubernetes to create a new user namespace for the pod. The container thinks it is root; the host knows better.

Mutating Admission Policies

This is probably my favourite graduation in 1.36. MutatingAdmissionPolicy brings CEL-based mutation directly into the API server, the same way ValidatingAdmissionPolicy did for validation. That means you can retire a lot of simple MutatingWebhookConfigurations that exist purely to inject labels, set default fields, or strip annotations.

A basic example looks like this:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingAdmissionPolicy
metadata:
  name: set-default-resource-limits
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
  mutations:
    - patchType: ApplyConfiguration
      applyConfiguration:
        expression: |
          Object{
            spec: Object.spec{
              containers: object.spec.containers.map(c, Object.spec.containers.item{
                resources: Object.spec.containers.item.resources{
                  limits: c.resources.?limits.orValue({"cpu": "500m", "memory": "512Mi"})
                }
              })
            }
          }

Removing a webhook deployment for every simple mutation policy simplifies your platform considerably. Fewer moving parts, less latency in admission, and no webhook cert rotation to worry about.

Fine-Grained Kubelet API Authorization

Monitoring agents running on nodes have historically needed fairly broad access to the kubelet API to do their job. This change introduces more granular permissions, so you can grant an agent access to metrics endpoints without giving it access to exec or log endpoints. For anyone operating with a least-privilege posture this is exactly the kind of incremental improvement that actually gets implemented, unlike big architectural changes that look great on paper.
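In RBAC terms, this means an agent's ClusterRole can name only the kubelet subresources it actually needs. A minimal sketch, assuming an illustrative role name, might grant read access to metrics and stats while withholding nodes/proxy, which is what exec and attach traffic flows through:

```yaml
# Sketch: a monitoring agent's role scoped to kubelet read endpoints.
# The role name is illustrative; nodes/proxy (exec, attach, port-forward)
# is deliberately not granted.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-metrics-reader
rules:
  - apiGroups: [""]
    resources: ["nodes/metrics", "nodes/stats"]
    verbs: ["get"]
```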

VolumeGroupSnapshot

VolumeGroupSnapshot graduates to stable and allows you to take coordinated snapshots across multiple PVCs in a single atomic operation. The practical case this solves well is databases that separate their data volume from their WAL volume. Taking snapshots of those independently introduces a window where you could end up with an inconsistent backup. A group snapshot eliminates that window.
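A group snapshot is driven by a label selector over the PVCs that belong together. A minimal sketch, assuming an illustrative class name and an app label shared by the data and WAL PVCs, looks like this (the API version shown is the pre-GA one and may bump at graduation):

```yaml
# Sketch: one atomic snapshot across all PVCs matching the selector.
# Class name and labels are illustrative.
apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshot
metadata:
  name: db-backup
  namespace: databases
spec:
  volumeGroupSnapshotClassName: csi-group-snapclass
  source:
    selector:
      matchLabels:
        app: postgres   # matches both the data PVC and the WAL PVC
```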

CSI Service Account Tokens

CSI drivers that talk to cloud storage APIs now get properly scoped service account tokens instead of relying on node-level credentials. For cloud-native storage drivers this reduces the blast radius if a node is compromised, and it aligns with how most teams are already trying to handle workload identity.
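The mechanism behind this is the tokenRequests field on the CSIDriver object: the kubelet requests a token for the pod's own service account, scoped to the audience the driver declares, and hands it to the driver at mount time. A sketch with an illustrative driver name and audience:

```yaml
# Sketch: the driver receives short-lived, audience-scoped tokens
# instead of node credentials. Driver name and audience are illustrative.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.example.com
spec:
  tokenRequests:
    - audience: "storage.example.com"
      expirationSeconds: 3600
  requiresRepublish: true   # refresh the token before it expires
```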

Dynamic Resource Allocation (DRA) Keeps Maturing

DRA is the framework for managing specialised hardware resources like GPUs, FPGAs, and network devices. If you are not following it closely, the short version is that it replaces the older device plugin model with something far more expressive and composable. Kubernetes 1.36 brings several meaningful improvements.

The PodResources extension for DRA is graduating to stable. Monitoring agents can now see which DRA-allocated devices are assigned to which pods, which was a gap that made observability for GPU workloads harder than it needed to be.
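For anyone who has not touched DRA yet, the claim model is worth a quick sketch. Instead of requesting an opaque extended resource, a pod references a ResourceClaim (usually via a template) that asks for a device from a DeviceClass. Field names below follow the pre-GA resource.k8s.io API and the class name is illustrative:

```yaml
# Sketch of the DRA claim model: template requests one device from a
# class, the pod binds the claim by name. deviceClassName is illustrative.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
        - name: gpu
          deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  resourceClaims:
    - name: gpu
      resourceClaimTemplateName: single-gpu
  containers:
    - name: trainer
      image: trainer:latest
      resources:
        claims:
          - name: gpu
```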

OpenShift 4.22

OpenShift 4.22 is in EC3 and GA is expected within the next few months. Remember that it ships with Kubernetes 1.34, so the 1.36 features above are not part of this release.

What is notable in 4.22 is that Gateway API is now available without requiring OLM. That removes one adoption barrier for teams that want to use Gateway API as their ingress abstraction but do not want to manage additional operators just to get there.
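For teams making that move, the shape of the abstraction is a Gateway owned by the platform and HTTPRoutes owned by application teams. A minimal sketch, with an assumed gatewayClassName and illustrative hostnames (check the 4.22 docs for the class name OpenShift actually ships):

```yaml
# Sketch: platform-owned Gateway, team-owned HTTPRoute, same namespace.
# gatewayClassName, hostname, and service name are illustrative.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gw
spec:
  gatewayClassName: openshift-default
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: shared-gw
  hostnames: ["app.example.com"]
  rules:
    - backendRefs:
        - name: app-svc
          port: 8080
```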

AdminNetworkPolicy graduates as a first-class network control primitive. This is significant for teams managing shared clusters where different tenants need isolation without giving each team the ability to mess with each other’s network policies. Here is what a basic tenant isolation policy looks like:

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-inter-tenant
spec:
  priority: 50
  subject:
    namespaces:
      matchLabels:
        tenant: team-a
  ingress:
    - name: "deny-from-other-tenants"
      action: Deny
      from:
        - namespaces:
            matchExpressions:
              - key: tenant
                operator: NotIn
                values: ["team-a"]

Platform operators set these policies at a cluster level and they take precedence over namespace-level NetworkPolicies. That gives you a clear separation between platform-enforced controls and team-level policies.

On the hardware side, OCP 4.22 is picking up DRA device partitioning support from upstream early. If you are running GPU workloads on OpenShift this is worth watching.

What I’m Most Excited About

Mutating Admission Policies going stable changes how I think about platform tooling. Every project I work on has a collection of admission webhooks that exist to do relatively simple things: set labels, apply defaults, enforce naming conventions. Each one is a deployment to manage, a certificate to rotate, and a latency hit on every API call. Being able to express that logic in CEL directly in the API server is a genuine simplification. And when you combine Mutating Admission Policies with Validating Admission Policies, the case for running a dedicated policy engine like Kyverno or Gatekeeper gets much weaker. In my experience both tools are excellent, but they also add operational complexity. For most platform teams, the native admission policy primitives now cover the majority of real-world use cases. That is a meaningful shift.

AdminNetworkPolicy in OpenShift is the second pick. On shared clusters where teams need isolation but do not have dedicated infrastructure, being able to enforce network boundaries at the platform level, without relying on teams to correctly configure their own NetworkPolicies, is exactly the kind of primitive I have wanted.

Wrapping Up

Neither of these releases is GA yet. Kubernetes 1.36 lands April 22, and OpenShift 4.22 GA is expected sometime in the coming months with Kubernetes 1.34 under the hood. The 1.36 features will arrive in OpenShift closer to 4.23 or 4.24.

For OpenShift 4.22, the release notes will be published once GA lands; until then the EC build changelogs in the Red Hat Customer Portal are the most current source.