Getting Started with Kind for Local Kubernetes Development
Kind was originally created to facilitate Kubernetes' own conformance testing, but it has evolved into a powerful tool for local development and continuous integration workflows. By running each Kubernetes node as a Docker container, Kind offers a remarkably lightweight yet fully functional Kubernetes environment.
The name "Kind" stands for "Kubernetes IN Docker," which aptly describes how it works. Instead of virtualizing entire machines, Kind uses Docker containers to represent Kubernetes nodes. This approach dramatically reduces resource consumption while preserving the essential behavior of a real Kubernetes cluster.
For developers working on Kubernetes-native applications, Kind provides an ideal environment to test deployments, debug issues, and validate changes before committing them to production environments.
In this article, we'll walk through setting up a local Kubernetes development environment with Kind.
Prerequisites
Before diving into Kind, you'll need to ensure your development environment meets the necessary requirements:
Docker: Since Kind runs nodes as containers, Docker is essential. You'll need a recent version of Docker installed and running on your system.
kubectl: The Kubernetes command-line tool is necessary for interacting with your Kind clusters after they're created.
Go: While not strictly required for using Kind, it's needed if you want to build Kind from source or contribute to its development.
For the optimal experience, ensure that Docker has enough resources allocated. On Docker Desktop, you can adjust resource limits in the application settings.
Installation
Installing Kind is straightforward across all supported platforms. Let's explore the various installation methods.
The most universal way to install Kind is through its pre-built binaries, which are available for Linux, macOS, and Windows.
For Linux and macOS users, you can download and install Kind with a simple command:
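The release version in the URL below is only an example; substitute the latest version from the Kind releases page and adjust the platform suffix (kind-linux-amd64, kind-darwin-amd64, kind-darwin-arm64, and so on) to match your system:

```sh
# Replace v0.20.0 and the platform suffix as appropriate for your machine
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```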
This command downloads the appropriate Kind binary, makes it executable, and moves it to a directory in your PATH for easy access.
For macOS users with Homebrew, the installation is even simpler:
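```sh
brew install kind
```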
Windows users can install Kind using Chocolatey:
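```sh
choco install kind
```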
Or with PowerShell:
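As with the Linux install, replace the version below with the current release and move the binary to a directory on your PATH:

```powershell
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.20.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe
```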
Installing kubectl
Since you'll need kubectl to interact with your Kind clusters, make sure it's
installed on your system.
On Linux, you can install kubectl via the package manager:
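On Debian or Ubuntu, for example, once the Kubernetes package repository is configured (see the official kubectl installation docs for the repository setup), the install itself is a single step:

```sh
sudo apt-get update
sudo apt-get install -y kubectl
```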
On macOS with Homebrew:
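```sh
brew install kubectl
```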
On Windows with Chocolatey:
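```sh
choco install kubernetes-cli
```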
Alternatively, you can download kubectl directly from the Kubernetes release
page for any platform.
Verifying installation
After installing Kind and kubectl, verify that they're working correctly by
checking their versions:
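```sh
kind version
```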
This should output something like:
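```
kind v0.20.0 go1.20.4 linux/amd64
```

The version, Go toolchain, and platform will vary depending on what you installed.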
Similarly, check kubectl:
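```sh
kubectl version --client
```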
With both tools successfully installed, you're ready to create your first Kind cluster.
Creating your first cluster
Creating a Kind cluster is remarkably simple, especially compared to setting up traditional Kubernetes environments. Let's explore the basic creation process and some of the available configuration options.
The simplest way to create a Kind cluster is with the default configuration:
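```sh
kind create cluster
```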
This command creates a single-node Kubernetes cluster running inside a Docker container. The output will look something like this:
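```
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.27.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋
```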
This output tells you that Kind has:
- Downloaded the node image (if not already present)
- Prepared the node container
- Created the necessary configuration
- Started the control plane components
- Installed the Container Network Interface (CNI)
- Set up the default storage class
- Configured kubectl to use this cluster
The cluster name defaults to "kind" if not specified. You can verify that your cluster is running with:
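```sh
kubectl cluster-info --context kind-kind
```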
This should show information about your Kubernetes control plane and CoreDNS service.
Multi-node clusters
One of Kind's strengths is its ability to create multi-node clusters, which more closely resemble production environments. To create a multi-node cluster, you'll need to define a configuration file.
Create a file named multi-node.yaml with the following content:
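```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```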
This configuration defines a cluster with one control plane node and two worker nodes. To create this cluster:
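The cluster name is up to you; here we call it multi-node:

```sh
kind create cluster --config multi-node.yaml --name multi-node
```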
After the cluster is created, you can verify that all nodes are running:
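```sh
kubectl get nodes
```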
You should see output similar to this:
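```
NAME                       STATUS   ROLES           AGE     VERSION
multi-node-control-plane   Ready    control-plane   2m10s   v1.27.3
multi-node-worker          Ready    <none>          108s    v1.27.3
multi-node-worker2         Ready    <none>          108s    v1.27.3
```

The ages and Kubernetes version will differ in your cluster.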
Configuration options
Kind offers numerous configuration options to customize your clusters. Let's explore some of the most useful ones.
You can specify which Kubernetes version to use by defining the node image in your configuration file:
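```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # The Kind release notes list the node image tags (and digests) published
  # for each Kubernetes version; pinning the digest is recommended.
  image: kindest/node:v1.25.11
```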
This configuration creates a cluster using Kubernetes v1.25.11. Kind maintains images for various Kubernetes versions, which you can find in the Kind documentation.
Port mapping
To expose services in your Kind cluster to the host system, you can configure port mappings:
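```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
    protocol: TCP
```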
This configuration maps port 30080 in the control plane node to port 8080 on your host. This is particularly useful for testing NodePort and Ingress services.
Extra mounts
You can also mount files or directories from your host into the Kind nodes:
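```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /path/to/my/files
    containerPath: /files
```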
This configuration mounts /path/to/my/files from your host to /files in the
control plane node, allowing you to easily share configuration files or other
resources with your Kind cluster.
Working with the Kubernetes dashboard
The Kubernetes Dashboard provides a web-based UI for managing and monitoring your cluster. While Kind doesn't include the dashboard by default, you can easily deploy it to enhance your development experience.
Deploying the dashboard
To deploy the Kubernetes Dashboard to your Kind cluster, you can use the official manifest:
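The manifest version below (v2.7.0) was current at the time of writing; check the Dashboard releases page for the latest:

```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```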
This command creates the necessary resources for the dashboard in a dedicated
namespace called kubernetes-dashboard.
Creating a dashboard user
For security reasons, you'll need to create a user to access the dashboard.
Create a file named dashboard-adminuser.yaml with the following content:
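```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

This binds a dedicated ServiceAccount to the cluster-admin role, which is convenient for local development but far too permissive for any shared cluster.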
Apply this configuration:
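```sh
kubectl apply -f dashboard-adminuser.yaml
```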
Accessing the dashboard
To access the dashboard, you need to create a secure channel to your cluster
using kubectl's proxy feature:
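```sh
kubectl proxy
```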
This command starts a proxy server on localhost, allowing you to access the dashboard at:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
To log in, you'll need a token. Generate one with:
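```sh
kubectl -n kubernetes-dashboard create token admin-user
```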
Copy the generated token and paste it into the dashboard login page.
Once logged in, you'll see the main dashboard interface, which provides a comprehensive view of your cluster resources.
Deploying applications to Kind
With your Kind cluster up and running, you can now deploy applications to test and validate their behavior in a Kubernetes environment.
Let's start with a simple deployment of NGINX. Create a file named
nginx-deployment.yaml:
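```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # any recent NGINX tag works here
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
```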
This configuration creates a deployment with three NGINX pods and a NodePort service that exposes them on port 30080. Apply this configuration:
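```sh
kubectl apply -f nginx-deployment.yaml
```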
Verify that the deployment is running:
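```sh
kubectl get deployments
```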
You should see output like:
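```
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           40s
```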
And check that the service is created:
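```sh
kubectl get services
```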
Output:
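```
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        10m
nginx        NodePort    10.96.156.35   <none>        80:30080/TCP   1m
```

The ClusterIP values will differ in your cluster.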
If you created your cluster with port mapping as shown earlier, you should now be able to access the NGINX service at http://localhost:8080.
Using Ingress resources
For more sophisticated routing, you can use Kubernetes Ingress resources. First, you'll need to enable the Ingress controller in your Kind cluster.
Create a new configuration file named ingress-cluster.yaml:
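This follows the pattern from the Kind ingress guide: label the control-plane node as ingress-ready and map ports 80 and 443 to the host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```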
Create a new cluster with this configuration:
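The cluster name here is arbitrary:

```sh
kind create cluster --config ingress-cluster.yaml --name ingress-demo
```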
Install the NGINX Ingress controller with specific patches for Kind:
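At the time of writing, the ingress-nginx project publishes a Kind-flavored manifest; check the Kind ingress guide for the current URL:

```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
```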
Wait for the Ingress controller to be ready:
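```sh
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```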
Now, deploy a sample application:
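As a minimal example, the manifest below (saved as hello-app.yaml, a name chosen for this walkthrough) runs the hashicorp/http-echo image behind a ClusterIP service:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=Hello from Kind"   # http-echo listens on port 5678 by default
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello
  ports:
  - port: 5678
```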
Apply this configuration:
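```sh
kubectl apply -f hello-app.yaml
```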
Finally, create an Ingress resource to route traffic to your application:
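Save the following as, say, hello-ingress.yaml:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 5678
```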
Apply the Ingress configuration:
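```sh
kubectl apply -f hello-ingress.yaml
```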
Now you should be able to access your application at http://localhost/hello.
Volume mounting and persistent storage
For applications that require persistent storage, Kind supports Persistent Volumes and Persistent Volume Claims.
First, create a PVC:
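The name and the 1Gi size below are arbitrary; omitting storageClassName uses Kind's default storage class. Save it as, say, data-pvc.yaml:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```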
Apply this configuration:
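```sh
kubectl apply -f data-pvc.yaml
```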
Now create a deployment that uses this PVC:
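For example, saved as nginx-with-storage.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-with-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-with-storage
  template:
    metadata:
      labels:
        app: nginx-with-storage
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # serve content from the persistent volume
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-pvc
```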
Apply this configuration:
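```sh
kubectl apply -f nginx-with-storage.yaml
```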
Kind's built-in storage provisioner will automatically create a Persistent Volume to satisfy the PVC, and your application will have access to persistent storage that survives pod restarts.
Managing Kind resources
One of Kind's strengths is its ability to run multiple clusters simultaneously, which is useful for testing multi-cluster scenarios or isolating different projects.
To list all your Kind clusters:
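```sh
kind get clusters
```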
You can switch between clusters by specifying the context in your kubectl
commands:
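Kind registers each cluster under a kubectl context named kind-&lt;cluster-name&gt;:

```sh
kubectl config use-context kind-multi-node
```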
Or by providing the context flag:
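```sh
kubectl get nodes --context kind-kind
```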
When you're done with a cluster, you can delete it:
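```sh
kind delete cluster --name multi-node
```

Omitting --name deletes the default cluster named "kind".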
Loading Docker images into Kind
One common development workflow involves building Docker images locally and using them in your Kind cluster. Since Kind nodes have their own image store, separate from your host's Docker daemon, you need to explicitly load locally built images into the cluster.
Build your image normally:
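The image name my-app:1.0 below is simply a placeholder for your own image:

```sh
docker build -t my-app:1.0 .
```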
Then load it into your Kind cluster:
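```sh
kind load docker-image my-app:1.0
```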
You can specify which cluster to load the image into:
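```sh
# Replace "dev" with the name of your target cluster
kind load docker-image my-app:1.0 --name dev
```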
Now you can create deployments that reference this image:
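```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        imagePullPolicy: Never   # use the image loaded into the node, never pull
```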
Note the imagePullPolicy: Never, which tells Kubernetes to use the local image
rather than trying to pull it from a registry.
Export logs from cluster
When troubleshooting, it's often useful to collect logs from your Kind cluster. Kind provides a command to export logs to a directory:
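```sh
kind export logs ./kind-logs
```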
This command collects logs from all nodes in the cluster and saves them to the specified directory. You can specify which cluster to collect logs from:
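```sh
# Replace "dev" with the name of your target cluster
kind export logs ./kind-logs --name dev
```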
The collected logs include:
- Kubernetes component logs (API server, scheduler, etc.)
- Container runtime logs
- Node logs
- System logs
These logs can be invaluable for diagnosing issues in your cluster or the applications running on it.
Advanced Kind features
Kind offers several advanced features that make it a powerful tool for Kubernetes development and testing.
Custom node configurations
For advanced scenarios, you might need to customize the configuration of
individual nodes. Kind allows you to apply kubeadm configuration patches to
control various aspects of node initialization.
Here's an example of a custom node configuration:
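The values below are purely illustrative; the sketch uses kubeadm config patches to pass extra kubelet flags to each node:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        # Example reservation for system daemons on the control plane
        system-reserved: "cpu=500m,memory=500Mi"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        # Example hard eviction threshold on the worker
        eviction-hard: "memory.available<200Mi"
```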
This configuration customizes resource reservation on the control plane and eviction thresholds on the worker node. These sorts of customizations allow you to test how your applications behave under specific node configurations.
Integration with container registries
For more complex development workflows, you might want to run a local container registry alongside your Kind cluster. You can deploy a local registry and configure Kind to use it:
First, create a Docker network for the registry and Kind to communicate:
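Kind uses a Docker network named kind by default and will create it on first use; creating it up front simply ensures it exists before the registry joins it:

```sh
docker network create kind
```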
Run a local registry:
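The registry name (kind-registry) and host port (5001) follow the conventions of the Kind local-registry example; adjust them as needed:

```sh
docker run -d --restart=always -p "127.0.0.1:5001:5000" --name kind-registry registry:2
```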
Then create a Kind cluster configured to use this registry:
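One approach, based on the containerd registry-mirrors method from the Kind local-registry example, is a config like the following (saved here as registry-cluster.yaml):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
    endpoint = ["http://kind-registry:5000"]
```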
Create the cluster:
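```sh
kind create cluster --config registry-cluster.yaml
```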
Connect the Kind cluster to the registry network:
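```sh
docker network connect kind kind-registry
```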
Now you can push images to your local registry:
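```sh
docker tag my-app:1.0 localhost:5001/my-app:1.0
docker push localhost:5001/my-app:1.0
```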
And use them in your deployments:
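In the pod spec of your Deployment, reference the image via the local registry:

```yaml
    spec:
      containers:
      - name: my-app
        image: localhost:5001/my-app:1.0
```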
This setup allows you to test the full CI/CD workflow, including image pushing and pulling, within your local development environment.
Testing Kubernetes operators
Kind provides an excellent environment for testing Kubernetes operators—software extensions that use custom resources to manage applications and their components.
To test an operator in Kind, first create a cluster:
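```sh
kind create cluster --name operator-test
```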
Install the operator SDK's OLM (Operator Lifecycle Manager):
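If you have the operator-sdk CLI installed, it can install OLM for you:

```sh
operator-sdk olm install
```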
Deploy your operator:
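How you deploy depends on how the operator was built; for a project scaffolded with the Operator SDK, it is typically something like the following, where the image name is a placeholder for your own operator image:

```sh
make deploy IMG=example.com/my-operator:v0.1.0
```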
Create custom resources for your operator to manage:
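Operator SDK projects generate sample custom resources under config/samples/; applying them (the path is an example from that layout) gives the operator something to reconcile:

```sh
kubectl apply -f config/samples/
```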
You can then monitor the operator's behavior and verify that it correctly manages the custom resources in response to changes.
This approach allows operator developers to rapidly iterate on their code without needing to deploy to a remote cluster for each test cycle.
Final thoughts
As Kubernetes cements its position as the industry standard for container orchestration, Kind bridges the crucial gap between local development and production deployment, earning its place as an essential tool in the modern developer's toolkit.
Thanks for reading!