How to Drain a Node Pool in Linode Kubernetes Engine


Draining a Node

You can use kubectl drain to safely evict all of the pods from a node before you perform maintenance on it, such as a kernel upgrade or hardware maintenance. Safe evictions allow the pods' containers to terminate gracefully and respect any PodDisruptionBudgets you have specified. For more information, see Disruptions.

Kubernetes workloads move around the cluster, which enables use cases like highly available distributed systems. Linode recommends moving any data stored on the filesystem of the Linodes in an LKE cluster to Persistent Volumes backed by network attached storage. Avoid using local storage on LKE nodes whenever possible. If your application on the LKE cluster already uses a Persistent Volume Claim, skip the Copy the application data to a Persistent Volume section and proceed directly to Add a new node pool to the cluster and drain the nodes.

This guide provides instructions to:

  • Copy the application data to a Persistent Volume if you are using local storage to store application data.
  • Add a new node pool to the cluster and then drain the nodes.

Before You Begin

This guide assumes you have a working Linode Kubernetes Engine (LKE) cluster running on Linode, are familiar with the PodDisruptionBudget concept, and have configured PodDisruptionBudgets for applications that need them.
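
For reference, a minimal PodDisruptionBudget looks like the following sketch. The name application-pdb, the label app: application, and the file name pdb.yaml are placeholders; match them to your own workload and replica counts before applying the manifest with kubectl apply -f pdb.yaml:

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: application-pdb
    spec:
      # Keep at least one Pod of the application running during voluntary disruptions
      # such as a node drain.
      minAvailable: 1
      selector:
        matchLabels:
          app: application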

  1. Install the Kubernetes CLI (kubectl) on the local computer.

  2. Follow the instructions in Deploying and Managing a Cluster with Linode Kubernetes Engine Tutorial to connect to an LKE cluster.

    Note
    Ensure that the KUBECONFIG context is persistent.
  3. Ensure that the Kubernetes CLI is using the right cluster context. Run the get-contexts subcommand to check:

    kubectl config get-contexts
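
    If the asterisk in the CURRENT column is not next to your LKE cluster's context, switch to it with use-context. In the following command, lke-context is a placeholder for the context name shown in the NAME column of the previous output:

    kubectl config use-context lke-context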
    

Copy the application data to a Persistent Volume

Caution

The instructions in this section create a Block Storage Volume, which is a billable resource on your Linode account. A single volume can range from 10 GB to 10,000 GB in size and costs $0.10/GB per month or $0.00015/GB per hour. If you do not want to keep using the Block Storage Volume that you create, be sure to delete it when you have finished the guide.

If you remove the resources afterward, you are only billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

  1. Create a Persistent Volume Claim (PVC) that consumes a Block Storage Volume. To create a PVC, create a manifest file with the following YAML:

    File: pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-test
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: linode-block-storage-retain

    Note
    To retain the Block Storage Volume and its data, even after the associated PVC is deleted, use the linode-block-storage-retain StorageClass. If, instead, you prefer to have the Block Storage Volume and its data deleted along with its PVC, use the linode-block-storage StorageClass. For more information, see Delete a Persistent Volume Claim.
    The PVC represents a Block Storage Volume. Because Block Storage Volumes have a minimum size of 10 gigabytes, the storage has been set to 10Gi. If you choose a size smaller than 10 gigabytes, the PVC defaults to 10 gigabytes. Currently, the only access mode supported by the Linode Block Storage CSI driver is ReadWriteOnce, meaning that the volume can only be connected to one Kubernetes node at a time.
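
    Both StorageClasses are provided by the Linode Block Storage CSI driver on LKE. To confirm which StorageClasses are available on your cluster before choosing one, you can list them:

    kubectl get storageclass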

  2. Create the PVC in Kubernetes, and pass in the pvc.yaml file:

    kubectl create -f pvc.yaml
    

    After a few moments the Block Storage Volume is provisioned and the Persistent Volume Claim is ready to use.
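
    If the claim remains in the Pending state for longer than expected, you can inspect its events to see how provisioning is progressing:

    kubectl describe pvc pvc-test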

  3. Check the status of the PVC by typing the following command:

    kubectl get pvc
    

    An output similar to the following appears:

    NAME          STATUS     VOLUME                 CAPACITY     ACCESS MODES   STORAGECLASS                  AGE
    pvc-test      Bound      pvc-0e95b811652111e9    10Gi         RWO           linode-block-storage-retain   2m
    

    You can now attach the PVC to a Pod.

  4. Create a manifest file for the new Pod using the following YAML, where $IMAGE is your application's container image, $HOSTPATH is the path of the local storage on the node, $MOUNTPATH is where the application mounts that local storage, and $CSIVolumePath is where the pvc-test Persistent Volume Claim is mounted:

    File: new-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: new-pod
      labels:
        app: application
    spec:
      containers:
      - name: application
        # Replace $IMAGE with the container image your application currently uses.
        image: $IMAGE
        volumeMounts:
        - name: application
          mountPath: $MOUNTPATH
        - name: pvc-test
          mountPath: $CSIVolumePath
      volumes:
      - name: application
        hostPath:
          path: $HOSTPATH
      - name: pvc-test
        persistentVolumeClaim:
          claimName: pvc-test
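
    Optionally, you can validate the manifest before creating the Pod. On recent versions of kubectl, the following client-side dry run checks the file without creating any resources:

    kubectl create --dry-run=client -f new-pod.yaml
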
  5. Create a new Pod named new-pod:

    kubectl create -f new-pod.yaml
    
  6. After a few moments the Pod should be up and running. To check the status of the Pod, type the following command:

    kubectl get pods
    

    An output similar to the following appears:

    NAME      READY   STATUS    RESTARTS   AGE
    new-pod   1/1     Running   0          2m
    
  7. To connect to a shell in the new Pod, type the following command:

    kubectl exec -it new-pod -- /bin/bash
    
  8. From the shell, copy the files from local storage to the PVC. In the following command, $MOUNTPATH is the location of the local storage and $CSIVolumePath is the location on the PVC. If $MOUNTPATH is a directory, add the -r flag to copy it recursively:

     cp -P $MOUNTPATH $CSIVolumePath
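
     To confirm that the files were copied, list the contents of the PVC mount from the same shell; this assumes the container image includes standard shell utilities:

     ls -lR $CSIVolumePath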
    
  9. Delete the new Pod that you created, and then re-create it:

     kubectl delete pod new-pod
    
     kubectl create -f new-pod.yaml
    

    You should now see that all the data is stored in the CSI Volume.
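
    As an additional check, you can list the contents of the PVC mount without opening a shell session; this assumes ls is available in the container image:

     kubectl exec new-pod -- ls -l $CSIVolumePath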

Add a new node pool to the cluster and drain the nodes

  1. Add an additional Node Pool to the LKE cluster, with a plan type and size that can accommodate the existing workloads.
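
    Once the new Node Pool has been provisioned, confirm that its Linodes have joined the cluster and are in the Ready state before draining anything:

     kubectl get nodes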

  2. After the new Linodes have joined the cluster, drain any Linodes scheduled for maintenance. This causes the workloads to be rescheduled to other Linodes in the cluster. Linode recommends draining one Linode at a time in the LKE cluster, to ensure that the workloads have been rescheduled to new Linodes and are running before moving on to the next one. An example Node drain command:

     kubectl drain lke9297-11573-5f3e357cb447
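
    If the drain is blocked by DaemonSet-managed pods or by pods that use emptyDir volumes, kubectl reports an error and leaves the node cordoned. On recent versions of kubectl you can re-run the command with the following flags; only use --delete-emptydir-data if you are sure the data in those emptyDir volumes can be discarded:

     kubectl drain lke9297-11573-5f3e357cb447 --ignore-daemonsets --delete-emptydir-data

    To confirm that the workloads have been rescheduled onto the new Linodes, check which nodes the Pods are now running on:

     kubectl get pods -o wide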
    
  3. You can delete the old Node Pool, or keep it until the maintenance is complete. Note that if you keep the Node Pool, you continue to be billed for it.

  4. When the maintenance is complete, if you kept your previous Linodes, you can mark them as schedulable again after they have booted by using the following command:

     kubectl uncordon lke9297-11573-5f3e357cb447
    

