In this guide, we will talk about Kubernetes taints and tolerations: what they are, how they work, when to use them (with examples), and how they compare with nodeSelector, pod affinity, and more!


When working with Kubernetes, it is important to understand how pods are scheduled onto nodes. By default, when you don't specify any scheduling constraints, the Kubernetes control plane places pods on nodes automatically. 

Taints and tolerations give you finer control over which pods are allowed to run on which nodes. 

Let’s explore that in detail.

What are Taints in Kubernetes?

Taints are properties you apply to a node to repel pods: only pods that declare a matching toleration can be scheduled on (or keep running on) a tainted node. They are a mechanism for keeping unintended pods off those nodes. 

A taint is expressed as a key-value pair plus an effect:

kubectl taint nodes <node-name> <taint-key>=<taint-value>:<taint-effect>

  1. 'node-name': replace this with the name of the node you want to taint.
  2. 'taint-key': the name of the taint key.
  3. 'taint-value': the value for that key.
  4. 'taint-effect': one of three effects: NoSchedule, PreferNoSchedule, or NoExecute.
  • NoSchedule: Pods that do not tolerate this taint will not be scheduled on the node.
  • PreferNoSchedule: Kubernetes will try to avoid scheduling non-tolerant pods on the node, but it's not guaranteed.
  • NoExecute: Existing pods on the node that do not tolerate this taint may be evicted.
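As a quick illustration, here is how each effect might be applied in practice (the node names and the key-value pair are placeholders):

```shell
# Block new non-tolerating pods from being scheduled on the node
kubectl taint nodes worker-1 dedicated=gpu:NoSchedule

# Ask the scheduler to avoid the node when possible (soft constraint)
kubectl taint nodes worker-2 dedicated=gpu:PreferNoSchedule

# Evict existing non-tolerating pods and block new ones
kubectl taint nodes worker-3 dedicated=gpu:NoExecute
```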

Also Read: Nodes vs Pods vs Clusters in Kubernetes

What are Tolerations in Kubernetes?

A tainted node only accepts pods that tolerate its taint. In practice, this means that before you apply a pod manifest, you need to add a "tolerations" section to the file that matches the taint on the node, so the pod can be scheduled there.

The syntax for tolerations is as follows:

tolerations:
  - key: <taint-key>
    operator: <operator>
    value: <taint-value>
    effect: <taint-effect>

  1. <taint-key>: The key of the taint to tolerate.
  2. <operator>: The operator to use for comparison. It can be Equal or Exists.
  3. <taint-value>: The value of the taint to tolerate. This is optional and only used with the Equal operator.
  4. <taint-effect>: The effect of the taint to tolerate (NoSchedule, PreferNoSchedule, or NoExecute).
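Putting the fields together, a tolerations section might look like this; the keys and values here are illustrative:

```yaml
tolerations:
  # Matches a taint environment=dev:NoSchedule exactly
  - key: "environment"
    operator: "Equal"
    value: "dev"
    effect: "NoSchedule"
  # Exists matches the key regardless of its value, so no value field is set
  - key: "maintenance"
    operator: "Exists"
    effect: "NoExecute"
```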

Where to Use Kubernetes Tolerations & Taints?

Taints can be applied to nodes with specialized hardware or software requirements. 

For instance, nodes for different environments might be tainted to ensure that only workloads that are related to a particular environment are scheduled on them. 

Another scenario is that taints can be used to isolate certain nodes from general workloads which do not belong to your critical environments. 

Also, taints with the NoExecute effect can be applied to nodes that are about to undergo maintenance. This ensures that existing pods are evicted before maintenance activities begin.

Nodes with varying levels of resources like performance testing-related nodes, can be tainted, and corresponding tolerations can be added to the pods that require those resources. 

If certain pods need to adhere to regulatory requirements or security policies, taints can ensure they are scheduled only on nodes that meet those requirements. 

In a multi-tenant cluster, taints and tolerations can help ensure different tenant workloads are isolated from each other.

You might have a set of nodes that are reserved for scaling up when demand increases. Taints can control which pods get scheduled on these nodes during periods of higher load. 

Taints can be used to segregate different types of workloads to ensure they don't interfere with each other, such as separating development and production workloads.

Taints and tolerations can help optimize resource utilization by ensuring that pods requiring specific resources are scheduled on nodes with those resources available. 

Taints can be used during rolling updates to ensure that old and new versions of pods aren't scheduled on the same node.
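As a sketch, dedicating nodes to an environment or a tenant usually pairs a taint (to repel other pods) with a label (so the right pods can target the node); the node and key names here are hypothetical:

```shell
# Reserve a node for production workloads
kubectl taint nodes prod-node-1 environment=production:NoSchedule
kubectl label nodes prod-node-1 environment=production

# Reserve a node for a single tenant in a multi-tenant cluster
kubectl taint nodes tenant-a-node tenant=team-a:NoSchedule
```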

Also Read: When to Use Kubectl Rollout Restart?

How to Use Kubernetes Taints & Tolerations?

It is essential to understand how to use taints and tolerations in Kubernetes. 

Here are the steps to set this up end to end.

Taints are applied to nodes to indicate constraints. This is typically done using the kubectl taint command. 

Here's an example:

To apply an environment taint with a “NoSchedule” effect to a node named node1, the command would be:

kubectl taint nodes node1 environment=dev:NoSchedule
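If you later need to undo this, the same command with a trailing minus sign removes the taint:

```shell
kubectl taint nodes node1 environment=dev:NoSchedule-
```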

Remember, pods that need to tolerate the taint must include a corresponding tolerations field in their pod specifications. 

Let’s say we want to deploy a pod named sample-pod onto node1, which carries the "environment" taint applied above. The manifest for this is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
spec:
  containers:
    - name: my-container
      image: nginx:latest
  tolerations:
    - key: environment
      operator: Equal
      value: dev
      effect: NoSchedule

Now, the “sample-pod” manifest will be read by the Kubernetes control plane. Because the pod carries the expected toleration, the scheduler can assign it to node1.

In this example, the pod definition includes a tolerations field, indicating that the pod tolerates the taint with the key “environment”, value “dev”, and effect “NoSchedule”. 

This means that the pod can be scheduled onto nodes that have the corresponding taint without being affected by the taint's scheduling constraint.

Once you've defined tolerations in your pod definition, you can apply the pod to the cluster using:

kubectl apply -f pod-definition.yaml

Now, the pod can be scheduled on nodes whose taints match the tolerations in its definition.

To verify that the pod has been scheduled according to the taints and tolerations, you can describe the pod using the kubectl describe command:

kubectl describe pod sample-pod

Look for the "Tolerations" section in the output. It should show the tolerations you've defined.
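You can also inspect the taints on the node side, either with kubectl describe or a JSONPath query across all nodes:

```shell
# Show the taints on a single node
kubectl describe node node1 | grep -A 2 Taints

# Print node names with their taints
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```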

You can experiment with different scenarios by applying multiple taints to nodes and defining various tolerations in your pod definitions. 

This allows you to control pod placement based on different criteria.

For example, you can have nodes with taints like high-cpu and gpu, and then define tolerations in your pods to target specific nodes based on these taints.

Remember that effective use of taints and tolerations requires planning and understanding of your cluster's requirements. 

Overuse or misuse of taints and tolerations can lead to complex configurations that are difficult to manage.

There are various scenarios to consider when building efficient taints and tolerations.

Kubernetes itself applies built-in taints such as node.kubernetes.io/not-ready and node.kubernetes.io/unreachable. These taints mark nodes with certain conditions, and you can apply tolerations to pods to handle these situations. 

For instance:

  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"

Taints can have an effect of NoExecute, which triggers evictions of existing pods that do not tolerate the taint. This is useful during node maintenance or when you need to clear nodes for specific tasks:

kubectl taint nodes <node-name> <taint-key>=<taint-value>:NoExecute
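A NoExecute toleration can additionally set tolerationSeconds, which lets a pod remain on the tainted node for a grace period before it is evicted (the key here is illustrative):

```yaml
tolerations:
  - key: "maintenance"
    operator: "Exists"
    effect: "NoExecute"
    # The pod is evicted 300 seconds after the taint is applied
    tolerationSeconds: 300
```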

Another important scenario: pods can tolerate multiple taints simultaneously by defining multiple tolerations. This is useful when nodes have multiple specific constraints:

tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "dedicated"
    effect: "NoSchedule"
  - key: "high-cpu"
    operator: "Exists"
    effect: "NoSchedule"

Since a pod can tolerate multiple taints, tolerations are represented as a list in the YAML file.
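The matching node-side setup for the example above would be two taints on the same node (the node name and values are placeholders):

```shell
kubectl taint nodes worker-1 gpu=dedicated:NoSchedule
# The high-cpu toleration uses Exists, so any value matches
kubectl taint nodes worker-1 high-cpu=true:NoSchedule
```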

You can also use the PreferNoSchedule effect on a taint: the scheduler will try to avoid placing non-tolerating pods on that node, but scheduling there is not strictly forbidden. 

This can be helpful when you want to keep specific resources available for certain workloads:

tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "dedicated"
    effect: "PreferNoSchedule"

There is another pattern that needs to be used carefully: a toleration with the operator Exists and no key or value tolerates any taint. 

This can be useful in scenarios where you have nodes with varying taints, and you want certain pods to be flexible in their scheduling:

tolerations:
  - operator: "Exists"

Combine nodeAffinity rules with tolerations to create complex scenarios. 

For instance, you can ensure that a pod tolerates a taint but only gets scheduled on nodes with specific labels:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "custom-label"
                operator: "In"
                values:
                  - "special"
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "dedicated"
      effect: "NoSchedule"

Best Practices for Kubernetes Taints & Tolerations

When using Kubernetes taints and tolerations, it's important to follow best practices to ensure effective cluster management. 

Here are some best practices for Kubernetes taints & tolerations to consider:

Plan Carefully

Before applying taints or defining tolerations, plan how you want to segregate workloads or allocate resources. Overuse of taints and tolerations can lead to complex configurations.

Label Nodes Clearly and Descriptively

Label nodes with clear and meaningful labels that reflect their characteristics. This will help you define tolerations accurately and make better scheduling decisions. 

Apply taints with meaningful keys and values.

Document Taints and Tolerations

Document the taints and tolerations you apply to nodes and pods. This documentation will be helpful for onboarding new team members and for troubleshooting.

Also Read: Kubeadm Tutorial

Avoid Tainting All Nodes

It is generally not necessary to apply taints on all nodes as it can limit flexibility and reduce the cluster's ability to schedule pods.

Separate Critical Workloads

Use taints and tolerations to separate critical workloads from non-critical ones, ensuring resource availability and reducing the risk of interference.

Avoid Overlapping Tolerations

Be cautious when applying overlapping tolerations that could lead to unexpected pod placement.

Use Proper Naming Conventions

As mentioned earlier, there are multiple different use cases for which you will use taints. Using names relevant to the scenario will help you maintain the infrastructure better. 

Use consistent and clear naming conventions for taints and tolerations to improve maintainability and readability.

Also Read: Kubectl Commands Cheat Sheet

Kubernetes Taints & Tolerations vs Node Affinity

Taints & Tolerations can lead to complex configurations with multiple taints and tolerations.

Node Affinity is simpler for basic scenarios involving label-based placement.

Taints & Tolerations are more suitable for scenarios where nodes have specialized hardware or where you want to enforce strict constraints.

Node Affinity is more suitable for scenarios where you want to distribute pods based on node attributes or balance workloads.
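For comparison, a minimal nodeAffinity rule that steers a pod toward labelled nodes (the label is illustrative) looks like this:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: "disktype"
              operator: "In"
              values:
                - "ssd"
```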

Also Read: How to Use Just One Load Balancer for Multiple Apps in Kubernetes?

Kubernetes Taints & Tolerations vs nodeSelector

Taints & Tolerations are Node-controlled. Nodes apply taints, and pods define tolerations.

nodeSelector is Pod-controlled. Pods define node labels in their specifications.

Taints provide fine-grained control over individual nodes and specific constraints, whereas nodeSelector offers coarse-grained control based on node labels.

Taints & Tolerations can lead to complex configurations with multiple taints and tolerations.

But nodeSelectors are generally simpler for basic scenarios involving label-based placement.

Taints are more suitable for scenarios where you need to enforce strict constraints or dedicate nodes for specialized workloads.

Whereas nodeSelector is more suitable for scenarios where you want to distribute pods based on node attributes, like geography or hardware type.
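A nodeSelector, by contrast, is just a map of required node labels in the pod spec (the label here is illustrative):

```yaml
nodeSelector:
  disktype: ssd
```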

In practice, you can combine taints & tolerations or other mechanisms for better scheduling decisions.

For example, you might use taints to enforce node specialization and nodeSelector to further refine which specialized nodes are chosen for specific pods.

The choice between taints & tolerations and nodeSelector depends on your specific use case, workload requirements, and the desired level of control over pod placement.

Frequently Asked Questions

What is the difference between taints and tolerations and affinity?

Taints and tolerations are mechanisms in Kubernetes used to control where pods are scheduled on nodes. On the other hand, affinity in Kubernetes involves using nodeAffinity rules to influence pod scheduling based on node attributes such as labels. 

What is the difference between node affinity and pod affinity in Kubernetes?

Node affinity is a property set in a pod's specification that guides scheduling based on node labels. It allows you to specify conditions that nodes must satisfy for a pod to be scheduled on them. Pod affinity, on the other hand, operates at the pod level. It defines rules that determine how pods are scheduled with respect to other pods.

What is the difference between nodeSelector and taints & tolerations?

nodeSelector is a way to specify node labels in a pod's definition, directing the scheduler to place the pod on nodes with matching labels. Taints & tolerations, however, allow node-level constraints to be set by node administrators (taints) and overridden by pod owners (tolerations), controlling where pods can be scheduled based on node attributes.

What is the difference between taint and cordon in Kubernetes?

In Kubernetes, a taint is a property applied to a node that restricts which pods can be scheduled onto it. It affects pod scheduling decisions. On the other hand, "cordon" is a command used to mark a node as unschedulable, preventing new pods from being scheduled on that node while existing pods continue to run.
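The corresponding commands look like this:

```shell
# Mark the node unschedulable; existing pods keep running
kubectl cordon node1

# Make the node schedulable again
kubectl uncordon node1

# Drain cordons the node and also evicts its pods
kubectl drain node1 --ignore-daemonsets
```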

What is the difference between pod affinity and anti-affinity?

Pod affinity in Kubernetes is used to guide the scheduling of pods based on the presence of other pods. It ensures that pods are scheduled on nodes where specific pods are already running. On the contrary, pod anti-affinity is used to influence pod scheduling to avoid colocating pods with certain pods, enhancing high availability by spreading pods across different nodes.