Deployment

A Deployment is a higher-level controller that manages ReplicaSets for stateless applications.

It provides declarative updates to applications (e.g., rolling updates, rollbacks).

When you create a Deployment:

  • It creates a ReplicaSet

  • ReplicaSet creates Pods

If you change your Deployment (e.g., new image version), Kubernetes will:

  • Create a new ReplicaSet

  • Slowly scale down the old one and scale up the new one (rolling update).

After a redeployment, old ReplicaSets are not deleted, in case you need to roll back to them.
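How many old ReplicaSets are kept is controlled by the Deployment's `revisionHistoryLimit` field, which defaults to 10. A minimal sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  # Keep only the 3 most recent old ReplicaSets for rollbacks.
  # Defaults to 10; setting it to 0 removes all history and disables rollback.
  revisionHistoryLimit: 3
  ...
```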

Why do Deployments exist?

To enable:

  • Zero-downtime upgrades

  • Rollbacks

  • History tracking

  • Scaling apps easily

Why are Deployments stateless?

  1. Pods are ephemeral

    • Pods managed by a Deployment can be killed, restarted, rescheduled anytime.

    • They may come up on a different node with a new IP and hostname.

    • If they stored important data inside their filesystem → that data would be lost.


  2. ReplicaSets scale dynamically

    • Deployments create ReplicaSets, which spin up/destroy Pods dynamically.

    • When you scale up, new Pods appear on random nodes.

    • When you scale down, Pods are deleted (with their data).


  3. Rolling updates replace Pods

    • During upgrades, old Pods are terminated and replaced with new ones.

    • If Pods held data locally, you’d lose it every time you update your Deployment.


  4. Networking & identity are not stable

    • Pods in a Deployment get random names and IPs (myapp-7d9c8c9c7f-abc12).

    • They are interchangeable — meaning Kubernetes treats them as identical, stateless workers.
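Because Pods in a Deployment are interchangeable and their filesystems are ephemeral, any data that must survive Pod replacement belongs in external storage, e.g. a PersistentVolumeClaim mounted into the Pod template. A minimal sketch (the claim name `myapp-data` and the mount path are hypothetical, and the PVC must already exist):

```yaml
# Inside the Deployment's Pod template (spec.template.spec)
spec:
  containers:
    - name: myapp-container
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /var/lib/myapp   # hypothetical path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data         # hypothetical pre-existing PVC
```

For workloads that need stable identity and per-Pod storage, a StatefulSet is a better fit than a Deployment.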

Commands

To apply and run a Deployment configuration:

kubectl apply -f deployment.yaml

To list the Deployments:

kubectl get deployments

To describe a Deployment:

kubectl describe deployment <deployment-name>

To delete a Deployment:

kubectl delete deployment <deployment-name>

To update the container image through the CLI:

kubectl set image deployment/<deployment-name> <container-name>=nginx:1.25

To get the rollout history of Deployments:

kubectl rollout history deployment <deployment-name>

To rollback a Deployment to a previous version:

kubectl rollout undo deployment <deployment-name>

# To a specific revision number
kubectl rollout undo deployment <deployment-name> --to-revision=<rev-number>

To scale Deployments:

kubectl scale deployment <deployment-name> --replicas=5

To check resource consumption of a Deployment's Pods (requires metrics-server; kubectl top supports only nodes and pods, not Deployments, so select the Pods by label):

kubectl top pod -l app=<app-label>

Example

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow 1 extra Pod during update
      maxUnavailable: 1  # take down at most 1 Pod at a time
  # Pod template
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx:1.25
          ports:
            - containerPort: 80
          env:
            - name: NODE_ENV
              value: "production"
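A rolling update only achieves zero downtime if Kubernetes can tell when a new Pod is actually ready to serve traffic; otherwise old Pods may be terminated before the replacements work. A readinessProbe on the container gives it that signal (a sketch; the probe path `/` and the timings are assumptions about the app):

```yaml
# Added to the container entry in spec.template.spec.containers
readinessProbe:
  httpGet:
    path: /                # assumed health endpoint
    port: 80
  initialDelaySeconds: 5   # wait before the first check
  periodSeconds: 10        # re-check every 10 seconds
```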

Limiting Resources

To be conservative, make sure the sum of resource limits across all Pods does not exceed the resources available on the server.

To be more aggressive (and economical), you can "overbook" limits, at the risk of Pods being throttled or evicted if they all approach their limits at the same time.
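One way to enforce such a cap on the cluster side is a namespace-level ResourceQuota, which rejects new Pods that would push the namespace's totals past the hard limits (the numbers below are placeholders):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myapp-quota
spec:
  hard:
    requests.cpu: "4"      # sum of all Pods' CPU requests in the namespace
    requests.memory: 4Gi
    limits.cpu: "8"        # sum of all Pods' CPU limits in the namespace
    limits.memory: 8Gi
```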

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  ...
  # Pod template
  template:
    ...
    spec:
      containers:
        - name: myapp-container
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            # Minimum guaranteed to the container (reserved on the node)
            requests:
              memory: "100Mi"
              cpu: "500m"
            # Hard cap the container may not exceed
            limits:
              memory: "200Mi"
              cpu: "1"

Create a Pod with resource requests and limits at both the pod level and the container level (pod-level resources require a recent Kubernetes version with the PodLevelResources feature gate enabled)

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  ...
  # Pod template
  template:
    ...
    spec:
      resources:
        requests:
          memory: "100Mi"
          cpu: 1
        limits:
          memory: "200Mi"
          cpu: 1
      containers:
        - name: myapp-container
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "50Mi"
              cpu: 0.5
            limits:
              memory: "200Mi"
              cpu: 0.5
