From Podman to Kubernetes: A Practical Integration Guide
Podman is a lightweight container engine that provides an easy-to-use command-line interface for managing images and containers. It is often used as a drop-in replacement for Docker because its CLI is fully compatible with the Docker CLI, with the exception of Docker Swarm commands.
However, Podman's capabilities extend beyond Docker compatibility, one of them being Kubernetes integration (the ability to parse and generate Kubernetes manifests). This feature offers additional convenience and flexibility, allowing you to easily deploy and manage your Podman workloads in a Kubernetes cluster or seamlessly transfer existing workloads from a Kubernetes cluster to a Podman installation.
This guide aims to demonstrate how Podman and Kubernetes can be integrated to leverage the benefits of both technologies in an efficient and practical manner. We will go through a basic introduction to pods before diving into more advanced topics and scenarios involving Kubernetes.
By the end of this article, you'll have a clear understanding of how Podman and Kubernetes can be utilized together to optimize your container management workflows and maximize the efficiency of your deployments.
Let's start with an overview of pods and how they're used in Podman.
Prerequisites
- Good Linux command-line skills.
- Basic experience with Podman and Kubernetes.
- Recent version of Podman installed on your system.
- (Optional) Docker Engine installed on your system for running the minikube examples.
Understanding pods
As you know, the concept of pods doesn't exist in all container engines. For instance, Docker doesn't support pods, so many engineers are unfamiliar with pods and their use cases and prefer working with individual containers instead. However, with the increasing popularity of Kubernetes, it has become essential for many users to understand pods and integrate them into their containerization workflows.
In Kubernetes, pods represent the smallest and simplest deployable objects, consisting of one or more containers managed as a cohesive unit. Containers within a pod can share resources like network and storage while maintaining separate filesystems and process namespaces, ensuring tighter security and better stability.
Podman aligns with this concept by allowing users to organize containers into pods. While there are differences in the implementations of Kubernetes and Podman, the core idea of managing containers as a unified entity remains consistent, making Podman pods capable of performing similar tasks.
To create a new pod, you execute:
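```bash
podman pod create
```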
This outputs a SHA-256 hash uniquely identifying the pod on your system:
You can issue the following command to further confirm that the pod is created successfully:
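```bash
podman pod ls
```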
It produces a similar output:
Let's examine each column:
- POD ID shows the unique identifier of the newly created pod. Upon closer examination, you'll notice that its value corresponds to the initial 12 characters of the SHA-256 hash generated by the podman pod create command. You can use this ID to distinguish this pod in subsequent commands and operations.
- NAME indicates the name of the newly created pod. Most podman commands allow you to reference a pod by either its name or its ID interchangeably.
- STATUS indicates the state of the newly created pod, which can be one of Created, Running, Stopped, Exited, or Dead. In this case, the status is Created, which means that the pod definition has been created, but no container processes are currently running inside.
- CREATED simply indicates how long ago the pod was created.
- INFRA ID is an interesting one. It shows the identifier of the infrastructure container that the pod was created with (in this case, 131ee0bcd059). The infrastructure container is what allows containers running inside a pod to share various Linux namespaces. By default, Podman orchestrates the pod in a way that allows its containers to share the net, uts, and ipc namespaces. This allows containers within the pod to communicate with each other and reuse certain resources.
- # OF CONTAINERS shows the number of containers attached to the pod. A pod always starts with 1 container attached to it by default (the infrastructure container), even though its process is not started automatically, as you will see in a moment.
To examine the existing containers, type:
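```bash
podman container ps -a
```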
The output shows the infrastructure container of the pod that you just created:
Notice how the CONTAINER ID matches the INFRA ID of the created pod, and how
the first 12 characters of the container name, e22b6a695bd8-infra, match the
POD ID. These relationships are always true and make it very simple to
identify the infrastructure container for each pod on systems where several pods
might be running simultaneously.
When you create a new empty pod, the infrastructure container is prepared for
launch, but no process is actually started. Because of that, the container
initially shows as Created instead of Running, and the -a flag is required
for the podman container ps command to display it.
At this point, no namespaces have been established for the pod containers either. Type in the following command to verify this:
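```bash
# Running lsns as a regular (rootless) user lists the namespaces
# of the processes owned by that user.
lsns
```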
You will see a similar output:
The /lib/systemd/systemd --user lines display the namespaces utilized by the
service manager that was initiated when you logged in to your user account on
the given Linux machine. The catatonit -P lines, on the other hand, display
the namespaces held by the global pause process that Podman maintains while you
interact with it in rootless mode. We won't delve into the details of why these
namespaces exist in the first place, but it's important to know that they are
there and that this is typically the standard lsns output that you will
observe even before a new pod has performed any actual work.
Let's add a container to the newly created pod and see what happens. For this
experiment, we'll use the
hashicorp/http-echo image from
Docker Hub (http-echo is a small in-memory webserver commonly employed for
testing purposes):
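```bash
# Replace e22b6a695bd8 with your own pod ID (or name) from podman pod ls.
# The -text value is arbitrary and only used as the echo response.
podman run -d --pod e22b6a695bd8 docker.io/hashicorp/http-echo -text="hello world"
```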
List the containers once again:
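```bash
podman container ps -a
```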
This time both the infrastructure container and the http-echo container appear
to be Running:
The pod is listed as Running as well:
If you perform an lsns again, you'll notice several changes:
The /catatonit -P process (PID: 100589) is the main process of the
infrastructure container. As you can see, it operates inside net, mnt,
uts, ipc, pid, and cgroup namespaces that are completely different from
the root namespaces (as indicated by the systemd process). The /http-echo
process, itself, runs in separate mnt, pid and cgroup namespaces, but
shares its net, uts, and ipc namespaces with the catatonit process in
the infrastructure container.
This may not be completely obvious at first, so to confirm this, you can also run:
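```bash
# Inspect the namespaces of the http-echo process; pgrep resolves its PID.
lsns --task "$(pgrep -x http-echo)"
```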
The output is clear:
- The net, uts, and ipc namespaces are the same as the ones held by the infrastructure container.
- The user namespace is the same as the one held by the global pause process maintained by rootless Podman.
- The time namespace is the root time namespace.
- The mnt, pid, and cgroup namespaces are unique to the http-echo container, isolating it from other containers in the pod.
This solidifies the idea that pods are essentially a group of containers capable of sharing namespaces.
As I said earlier, pods also allow you to manage containers as one cohesive unit. To see this in practice, type:
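```bash
# Again, substitute your own pod ID or name.
podman pod stop e22b6a695bd8
```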
This command stops the pod and all of its associated containers. To confirm this, type:
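```bash
podman container ps -a
```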
You will see that both containers were stopped:
The pod itself was stopped as well:
When you no longer need a pod, you can remove it completely by typing:
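```bash
podman pod rm e22b6a695bd8
```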
This removes not only the pod, but also all of its associated containers.
You can verify this worked by repeating the podman pod ls and
podman container ps -a commands. You will see that neither pods nor containers
exist on your system:
With that, you have covered the basics of working with Podman pods. Now, let's explore their practical use through a real-world example.
Exploring sidecar containers
Pods are often used for adding sidecar containers to an application. Sidecar containers provide additional functionality and support to the main application container, enabling use cases such as configuration management, log shipping, role-based access control, and more.
To understand this better, let's explore a practical log shipping example, where a web server logs incoming HTTP requests and a log shipper forwards them to an external service for indexing. In this scenario, the application pod will include two containers:
- A Caddy container for serving web pages over HTTP.
- A Vector container configured to ship logs from your web server to Better Stack.
Create a new pod by typing:
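```bash
podman pod create --name example --publish 8080:80
```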
Note how the command looks slightly different compared to your previous
invocation of podman pod create.
First, you are using the --name option to specify the name of the pod. A name
can be provided to the podman pod create command by either using the --name
option or as the very last positional argument. In other words, the command
podman pod create --publish 8080:80 example is also perfectly valid and serves
the very same purpose, but for the sake of clarity, using --name when passing
multiple command-line options is usually a lot easier to read and comprehend.
Most importantly though, you specified the additional command-line option
--publish 8080:80. As you remember, we already established that containers
within a pod share the same network namespace by default. Therefore, if you want
to receive any web traffic, you need to expose port 8080 to the host for the
entire pod. You can't do it for just an individual container, as it shares its
network namespace with the other containers in the pod, and the network
namespace is configured when the pod is originally created. By using the
--publish option, you ensure that any traffic coming to port 8080 on the
host machine is going to be forwarded to port 80 within the pod, where the
Caddy container will be listening.
Add Caddy to the pod by typing:
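```bash
podman create --pod example --name caddy docker.io/library/caddy:2.7.6-alpine
```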
Here, through the --pod example option, you are specifying that you want
Podman to attach the container to an existing pod named example (the one that
you created earlier). You're also giving the container a specific name with the
--name caddy option. Finally, docker.io/library/caddy:2.7.6-alpine specifies
the precise image that the container should be created from.
Podman fulfills the request and produces the following output:
Keep in mind that the container's assigned name isn't scoped to the pod; it is reserved globally across your Podman installation.
If you try to create another container with the same name, you will get an error, even though it's not running in the same pod:
Now that the Caddy container has been created, it's interesting to see it in action. Run the following command:
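```bash
curl localhost:8080
```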
Surprisingly, it turns out that the web server is currently unreachable:
Why is that? While the podman create command indeed creates the container and
attaches it to the example pod, it doesn't actually start its main process. If
you wish the process to start immediately after the container is created, you
should execute podman run instead of podman create, like this:
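```bash
# Shown for reference only; the caddy container was already created above.
# -d detaches the container so it runs in the background.
podman run -d --pod example --name caddy docker.io/library/caddy:2.7.6-alpine
```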
Currently, however, not starting the process is desired, because the default
Caddy configuration doesn't emit logs, and this leaves you without any data for
Vector to process. You can rectify this issue by modifying the default
configuration first, and only then starting the main caddy process inside the
container.
Create a new file named Caddyfile and paste the following contents, to ensure
that logs will be generated:
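```
# Minimal example configuration; adapt the directives to your own setup.
:80 {
	root * /usr/share/caddy
	file_server

	# Send access logs as JSON over TCP to localhost:9000 inside the pod.
	log {
		output net localhost:9000
		format json
	}
}
```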
The log directive instructs Caddy to start emitting logs over a network socket
listening for TCP connections at localhost:9000 inside the pod. This network
socket doesn't exist yet, but it will be created by the Vector container that
you'll set up next.
Copy the updated Caddyfile to the Caddy container by issuing:
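```bash
podman cp Caddyfile caddy:/etc/caddy/Caddyfile
```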
Note how you are referring to the container by the name that you specified
earlier (caddy). This is a lot easier than writing:
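```bash
# The same operation using the container's full ID (shown here as a placeholder).
podman cp Caddyfile <container_id>:/etc/caddy/Caddyfile
```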
You're almost ready to start the main caddy process. But before that, let's
quickly customize the homepage that it's going to serve, just so it's easier to
display its contents in a terminal.
Create a new file named index.html and paste the following contents:
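```html
<!-- A minimal stand-in page; the exact markup is up to you. -->
<!DOCTYPE html>
<html>
  <body>
    <h1>Caddy, works!</h1>
  </body>
</html>
```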
Then copy the index.html file to the container by issuing:
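```bash
podman cp index.html caddy:/usr/share/caddy/index.html
```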
Finally, start the Caddy container:
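```bash
podman start caddy
```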
Once again, you're using the name you specified earlier (caddy) to identify
the container. This is why choosing clear and descriptive names is so important.
Confirm that the container is running by typing:
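```bash
podman container ps
```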
A similar output should appear:
Try accessing the server again:
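```bash
curl localhost:8080
```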
This time, the expected output appears:
Good, Caddy works and the example pod is capable of receiving HTTP requests on
port 8080 and forwarding them for processing to the Caddy container (on port
80).
You can also access your server from a web browser. Type in localhost:8080 and
a similar web page should appear:
Earlier, we mentioned that you cannot expose additional ports for a specific container after providing the initial pod definition. Let's confirm this.
Create another pod:
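```bash
# The pod name here is just an example.
podman pod create --name example2
```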
Now, try adding a new Caddy container to that pod, attempting to publish port
80 of the container to port 8081 on the host:
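```bash
# Container and pod names are illustrative; the point is the --publish option.
podman create --pod example2 --name caddy2 --publish 8081:80 docker.io/library/caddy:2.7.6-alpine
```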
You get an error:
With this clarified, you're now ready to start setting up the Vector container.
Sign into your Better Stack account and create a new data source:
In the presented form, specify Podman tutorial as the name and Vector as the platform, then click Create source:
If all goes well, the new source will be created successfully. Copy the token
presented under the Source token field. We'll refer to this token as
<your_source_token> and use it for configuring Vector to send logs to Better
Stack.
Now create a new file named vector.yaml and paste the following contents:
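```yaml
# Example configuration. The ingestion endpoint below is the one Better Stack
# documents for Vector at the time of writing -- double-check it against your
# source settings, and replace <your_source_token> with your actual token.
sources:
  caddy_logs:
    type: socket
    mode: tcp
    address: 0.0.0.0:9000
    decoding:
      codec: json

sinks:
  better_stack:
    type: http
    inputs:
      - caddy_logs
    uri: https://in.logs.betterstack.com/
    method: post
    encoding:
      codec: json
    auth:
      strategy: bearer
      token: <your_source_token>
```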
This file will instruct the main process running inside the Vector container to
create a new network socket listening for TCP connections on port 9000. Caddy
will connect to this socket to emit its logs. Furthermore, this configuration
will tell Vector to forward all collected logs over to Better Stack via HTTP.
Create a new container running the
official Vector image and add it to
the example pod:
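```bash
# The tag is only an example; any reasonably recent Vector release should work.
podman create --pod example --name vector docker.io/timberio/vector:latest-alpine
```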
Copy the configuration file to the container:
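```bash
# Recent Vector images read /etc/vector/vector.yaml by default; older releases
# expect /etc/vector/vector.toml, so adjust the destination path if needed.
podman cp vector.yaml vector:/etc/vector/vector.yaml
```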
Finally, start the container:
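```bash
podman start vector
```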
Verify that all containers inside the pod are running by typing:
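```bash
podman ps --pod
```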
You should see a similar output:
Now navigate back to your browser, and refresh the web page at localhost:8080
a couple of times, or issue a couple of curl localhost:8080 commands from the
terminal.
In Better Stack, navigate to Live tail:
You should see some logs collected from the Caddy container:
Your setup works. The Caddy and Vector containers run in the same network namespace, so they can communicate over the TCP socket that Vector established.
To confirm that the network namespace is the same, run:
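```bash
# Restrict the listing to network namespaces only.
lsns -t net
```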
Both processes run in the network namespace identified by inode 4026532340.
The rootlessport command is a port forwarder which, when running Podman in
rootless mode, forwards traffic from port 8080 on the host machine to port 80
within the network namespace held by the pod.
With all of this out of the way, let's go ahead and explore how Podman can be used for generating manifests and deploying them to a Kubernetes cluster, and how existing Kubernetes manifests can be deployed into a local Podman installation.
Make sure to leave your example pod running, as you're going to need it in the
next section.
Integrating with Kubernetes
As I mentioned earlier, Podman doesn't ship with a tool such as Docker Swarm for managing container orchestration. In a more sophisticated deployment scenario, where high availability, scalability, and fault tolerance are required and multiple hosts need to be involved, Podman users can leverage an orchestrator such as Kubernetes to handle the complexity of managing their workloads.
Podman aims to ease the transition to and from Kubernetes by exposing commands for converting existing workloads to YAML files (manifests) that Kubernetes can understand. Furthermore, users can import existing Kubernetes manifests into Podman, and Podman can parse and run these workloads locally.
If you're not familiar with what a Kubernetes manifest is, it's a file that describes the desired state of your Kubernetes cluster. It includes information about the pods, volumes, and other resources that have to be created and managed by Kubernetes.
Before proceeding with this example, you have to install minikube to be able to experiment with Kubernetes locally. If you're not familiar with Minikube, it is a tool that lets you run a single-node Kubernetes cluster on your local machine.
Follow the official Minikube installation instructions and run:
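```bash
# Assumes an x86-64 Linux machine; other architectures use a different binary name.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
```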
This will download a binary file named minikube-linux-amd64 into your current
directory. Use the following command to move this file to one of the directories
specified in your $PATH:
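```bash
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```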
This will enable you to run the minikube command from anywhere in your
terminal.
Since the install command doesn't move, but only copies the
minikube-linux-amd64 file to the /usr/local/bin directory, you can go ahead
and remove the redundant copy by issuing:
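```bash
rm minikube-linux-amd64
```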
To confirm that minikube has been installed successfully, run:
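```bash
minikube version
```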
You should see a similar output:
At the time of this writing, the Podman driver for Minikube is still experimental and can cause networking and DNS resolution issues inside Minikube, depending on the specific underlying setup. For a stable Minikube experience under Linux, you still have to use Docker.
If you don't have Docker installed, you can generally follow the official Docker installation instructions.
The examples that follow assume that Docker Engine is already installed and running on your system, which you can verify by issuing:
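```bash
# docker version reports both the client and the server (engine) versions
# when the daemon is reachable.
docker version
```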
You should see a similar output:
You also need to make sure that your current user is added to the docker
group, so sudo isn't required for running commands against the Docker daemon:
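```bash
sudo usermod -aG docker "$USER"
# Log out and back in (or run newgrp docker) for the group change to take effect.
newgrp docker
```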
Otherwise, Minikube will fail with a similar error:
With all of these out of the way, go ahead and start Minikube:
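```bash
# Explicitly selecting the Docker driver; plain 'minikube start' usually auto-detects it.
minikube start --driver=docker
```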
You should see a similar output:
With Minikube running, you can proceed to generating Kubernetes manifests from your Podman resources.
Verify that the example pod that you created earlier, along with all of its
containers, are still running:
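```bash
podman pod ps
podman ps --pod
```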
Podman can easily build a Kubernetes manifest from a running pod through the
podman kube generate command. It expects the name (or ID) of the pod you want to
convert, along with options such as --service (to also generate a Service
definition) and -f (to write the output to a file instead of stdout).
To create the necessary manifest corresponding to your example pod, type:
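```bash
podman kube generate example --service -f example.yaml
```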
In this process, you may observe the following warning, but since these particular annotations don't carry any significant meaning, you can safely disregard the message:
5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140bf in this case
is the SHA-256 ID of the infrastructure container associated with the pod, which
is used for populating the io.kubernetes.cri-o.SandboxID/caddy and
io.kubernetes.cri-o.SandboxID/vector annotations inside the generated manifest
file. These annotations play no significant role for the deployment of this pod
to Kubernetes.
An example.yaml file should now appear in your current folder:
Let's examine its contents:
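```yaml
# Abridged sketch of what the generated manifest typically looks like; the
# annotations, timestamps, and image tags on your system will differ.
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
  annotations:
    io.kubernetes.cri-o.SandboxID/caddy: 5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140bf
    io.kubernetes.cri-o.SandboxID/vector: 5827494c3cce19080da3e0804596c4f46c71c342429d8171bfa45f4188b140bf
spec:
  containers:
    - name: caddy
      image: docker.io/library/caddy:2.7.6-alpine
      ports:
        - containerPort: 80
          hostPort: 8080
    - name: vector
      image: docker.io/timberio/vector:latest-alpine
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - name: "8080"
      port: 8080
      targetPort: 80
      protocol: TCP
```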
You can now run the following command to deploy this manifest to your Kubernetes cluster:
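```bash
# If you don't have a standalone kubectl, the one bundled with minikube works too.
minikube kubectl -- apply -f example.yaml
```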
This results in a similar output:
Wait a minute or two, then type:
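```bash
# Check both the pod and the service that were just created.
minikube kubectl -- get pods,services
```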
You should see a similar output:
This indicates that the pod is up and running inside your local Kubernetes cluster.
From the output, it appears that the Pod is ready to accept incoming HTTP
requests on port 80 through the corresponding NodePort service. In this
case, the NodePort service basically maps port 30381 of the Kubernetes node
that the pod is running on to port 80 in the pod.
However, if you type in:
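```bash
# Trying the node port directly on the host machine.
curl localhost:30381
```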
You'll notice that the web server is unreachable:
That's because the minikube network is isolated from your host network. You
can run the following command to determine the URL that you can connect to:
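```bash
minikube service list
```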
This will output a similar table:
The address listed in the URL column is the one enabling access to your web
server.
Try again, and open http://192.168.49.2:30381 in a browser or type:
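```bash
curl http://192.168.49.2:30381
```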
You'll see the familiar "Caddy, works!" page:
Your pod is now successfully running on Kubernetes. The changes you made earlier
with podman cp are, of course, missing from this deployment, since they were
applied to the local container filesystems rather than to the images themselves,
so Caddy defaults to displaying the "Caddy, works!" page. Still, essentially all
it took to deploy the application to Kubernetes was a single command.
You can remove the pod from Kubernetes by typing:
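```bash
# Deleting by file removes both the Pod and the Service defined in the manifest.
minikube kubectl -- delete -f example.yaml
```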
This produces a similar output:
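The same manifest also works in the opposite direction: Podman can parse it and run the described workload locally. Since the original example pod is still present in your Podman installation, remove it first to avoid a name clash, then feed the manifest to podman kube play; podman kube down tears everything back down when you're finished:

```bash
# Remove the original pod so the names don't clash, then re-create it
# straight from the generated manifest.
podman pod stop example && podman pod rm example
podman kube play example.yaml

# Remove the pod and its containers again once you're done experimenting.
podman kube down example.yaml
```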
As you can see, with only a few commands, you were able to generate a manifest for deploying your application on Kubernetes. Then, you took an existing Kubernetes manifest and ran it locally with Podman. This demonstrates the power and flexibility that Podman can provide for orchestrating your containerized workloads.
Exploring Podman Desktop
Even though using the CLI is a common way to interact with Podman, users who prefer a graphical interface have the additional option of using Podman Desktop, an open-source tool that provides a user-friendly GUI for managing containers and images and interacting with Kubernetes manifests.
Podman Desktop aims to abstract away the low level details and let users focus more on application development.
The usual way to install Podman Desktop is through its corresponding
Flatpak bundle. If you don't happen to have flatpak
installed on your system, you can install it by running:
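```bash
# On Debian or Ubuntu; use your distribution's package manager otherwise.
sudo apt install flatpak
```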
Then add the flathub repository, as follows:
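```bash
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
```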
You may have to restart your session for all changes to take effect. When you're done, you can run the following command to install Podman Desktop:
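```bash
flatpak install flathub io.podman_desktop.PodmanDesktop
```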
Finally, to start Podman Desktop, run:
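```bash
flatpak run io.podman_desktop.PodmanDesktop
```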
Soon after, the Podman Desktop GUI will appear:
Let's recreate the pod from our previous examples by issuing the following commands in the terminal:
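```bash
# Same sequence as before; assumes Caddyfile, index.html, and vector.yaml
# are still in your working directory.
podman pod create --name example --publish 8080:80
podman create --pod example --name caddy docker.io/library/caddy:2.7.6-alpine
podman cp Caddyfile caddy:/etc/caddy/Caddyfile
podman cp index.html caddy:/usr/share/caddy/index.html
podman start caddy
podman create --pod example --name vector docker.io/timberio/vector:latest-alpine
podman cp vector.yaml vector:/etc/vector/vector.yaml
podman start vector
```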
Then, in Podman Desktop, navigate to Pods:
You will see the example pod listed:
Instead of having to type podman kube generate to create a Kubernetes manifest
from this pod, you can use the Generate Kube action:
A manifest appears, containing the same content that you would otherwise get by
running podman kube generate example -f example.yaml.
You may have noticed though that a Service definition is missing from that
manifest. Earlier, you requested it explicitly by passing the --service flag
to podman kube generate. At first sight, it may appear that Podman Desktop
doesn't allow you to define a Service easily. However, this isn't the case.
Go back to the Pods screen and select the Deploy to Kubernetes action:
The same YAML definition appears, but there is also an additional checkbox
allowing you to define a Service:
Scroll down a little bit, and you will see minikube listed as the Kubernetes
context. This corresponds to the minikube cluster you created earlier:
Click Deploy and after a few moments the pod will get deployed to your local
minikube cluster:
Go back to the terminal and issue:
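```bash
minikube service list
```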
This outputs:
Unlike before, even though a service was created, there is no node port
available for connecting to Caddy. That's because Podman Desktop created a
service of type ClusterIP instead of NodePort.
To verify this, issue:
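```bash
minikube kubectl -- get services
```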
You'll see that the example-8080 service created by Podman Desktop has a type
of ClusterIP:
One possible way to address this problem in order to access Caddy is by patching the service to change its type:
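```bash
minikube kubectl -- patch service example-8080 -p '{"spec": {"type": "NodePort"}}'
```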
You can now re-run:
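```bash
minikube service list
```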
This time, a URL appears allowing you to access Caddy:
Open the listed URL in a browser and you'll see a familiar page:
Everything appears to work correctly!
Next, let's explore how importing an existing Kubernetes manifest works with Podman Desktop. Before that, however, let's remove all pods created so far in order to start in a clean state.
Open Podman Desktop and navigate to the Pods page. You will see both the Podman
and the Kubernetes example pods appearing in the list:
Click the Delete buttons next to each pod in order to remove them from your system:
When you're done, you should see an empty list of pods:
Click on the Play Kubernetes YAML button at the top right of the Pods screen:
A form will appear, prompting you to specify a *.yaml file to execute:
Select the example.yaml file that you created earlier and click Play:
A message appears prompting you to wait while Podman orchestrates your containers:
After a moment, the process completes and Podman Desktop displays a JSON document indicating that the pod was started:
You can click the Done button, after which you'll see the newly created
example pod in the list of pods:
Effectively, this entire process performs the same actions as the
podman kube play example.yaml command you used earlier.
Open localhost:8080 in a browser, and it will take you to the familiar Caddy
homepage:
To remove the pod and all of its attached containers in a way similar to
podman kube down, just navigate back to the Pods page and click Delete
Pod:
A loader icon appears and soon after the pod is gone:
As you can see, Podman Desktop provides a convenient interface for managing your pods, making it easy to create, view, and delete them with just a few clicks. It also simplifies the process of working with Kubernetes and allows you to quickly perform actions like creating pods, accessing their public-facing services, and removing them when they are no longer needed. With Podman Desktop, you can effectively manage your containerized applications without the need for complex command-line instructions.
Final thoughts
The ability of Podman to integrate with Kubernetes presents a promising and flexible solution for container orchestration in modern IT environments. You can take advantage of these capabilities to seamlessly manage and deploy your containers across development, staging, and production environments.
For example, you can prototype your applications locally using Podman before eventually deploying them to a shared Kubernetes cluster for testing. You can also import externally provided Kubernetes manifests into your local Podman environments in order to explore and validate the behavior of applications without the need to run full-fledged Kubernetes clusters.
The options are endless, and both Podman CLI and Podman Desktop provide the necessary tools and flexibility for you to efficiently work with pods in various scenarios. To explore Podman further, consider visiting the official Podman website, exploring its documentation, and joining its growing community.
Thanks for reading!