
Kubernetes Basics



The Container Journey: From One to Many

Imagine you’re building a new application. You start with a single container – a lightweight, portable package that holds everything your application needs to run (code, libraries, dependencies, etc.). It’s like having a self-contained mini-computer for your app. For a while, managing one container is simple.

However, as your application grows, you’ll need more containers. Maybe you’re adding new features, deploying to different regions for better availability, or scaling up to handle more users. Soon, that one container turns into tens, hundreds, or even thousands!

This is where the challenge begins. Manually managing a large number of containers becomes incredibly complex and time-consuming. Think about rolling out updates across hundreds of instances, scaling up and down with demand, wiring containers together on the network, and replacing the ones that fail.

This is where container orchestration steps in.


What is Container Orchestration?

Container orchestration is an automated process that manages the entire lifecycle of containerized applications. It handles the deployment, management, scaling, networking, and availability of your containers. In simpler terms, it’s like having a highly intelligent conductor for your orchestra of containers, ensuring everything runs smoothly and harmoniously.

Why is Container Orchestration Necessary?

In today’s fast-paced, dynamic environments, container orchestration is crucial because it automates the deployment, scaling, networking, and recovery work that would otherwise demand constant manual effort.

Where Can You Implement It?

Container orchestration isn’t limited to a specific environment. You can implement it on-premises, in public or private clouds, or in hybrid setups that span both.

It’s also often a critical component of an organization’s Security Orchestration, Automation, and Response (SOAR) requirements, helping to automate security processes and responses.


Key Features of Container Orchestration Tools

Container orchestration tools come with a rich set of features designed to automate and simplify container management, including automated scheduling, scaling, load balancing, health monitoring, and self-healing.

How Does It Work Under the Hood?

Container orchestration uses configuration files, typically written in YAML or JSON. These files describe the desired state of your application: which container images to run, how many replicas you need, and how they should be networked and stored.

Based on these files, the orchestration tool automatically schedules containers onto suitable hosts, monitors their health, and replaces or reschedules them when something fails.

This automation significantly enhances productivity and makes scaling much easier.


Several powerful tools are available for container orchestration, each with its own strengths; the most prominent are Marathon, Nomad, Docker Swarm, and Kubernetes.

Kubernetes has a vast and expanding ecosystem of open-source tools and is widely supported by leading cloud providers, many of whom offer fully managed Kubernetes services.


The Benefits of Container Orchestration

Container orchestration isn’t just a technical solution; it directly helps businesses achieve their goals and increase profitability through automation, freeing developers and administrators from repetitive manual container management.


Wrapping Up

Managing a handful of containers might be easy, but when you deal with hundreds or thousands, it quickly becomes an overwhelming task. Container orchestration automates the entire container lifecycle, from deployment and scaling to networking and self-healing, resulting in higher productivity, lower operational costs, and more reliable applications.

Tools like Marathon, Nomad, Docker Swarm, and especially Kubernetes, are at the forefront of this technology. By adopting container orchestration, organizations can significantly improve productivity, reduce costs, enhance security, and achieve greater agility in their application development and deployment processes.


Introduction to Kubernetes


Welcome to the World of Kubernetes (K8s)!

Kubernetes, often affectionately called K8s (because there are 8 letters between the ‘K’ and the ‘s’), is a revolutionary open-source system. Its primary purpose is to automate the deployment, scaling, and management of containerized applications.

Think of it as the ultimate orchestrator for your containers. If your applications are a symphony of individual instruments (containers), Kubernetes is the conductor ensuring every instrument plays in harmony, at the right time, and with the right volume.

The Rise of Kubernetes

Kubernetes was initially developed as a project by Google, leveraging their vast experience in running containerized applications at scale. Today, it’s maintained by the Cloud Native Computing Foundation (CNCF), a vendor-neutral organization fostering the adoption of cloud-native technologies.

Its widespread adoption has firmly established it as the de facto choice for container orchestration. This means if you’re working with containers at scale, you’re very likely to encounter Kubernetes.

Portability and Flexibility

One of Kubernetes’ most significant strengths is its portability. You can run Kubernetes clusters on-premises on your own hardware, in public or private clouds, in hybrid environments, or even on your local machine for development.

This flexibility allows organizations to avoid vendor lock-in and choose the infrastructure that best suits their needs.

Declarative Management: The “Desired State”

Kubernetes operates on a principle called declarative management. Instead of telling Kubernetes how to do something step-by-step, you declare the desired state of your application. For example, you might say, “I want three replicas of my web application running, accessible on port 80.”

Kubernetes then constantly monitors the actual state of your cluster and automatically performs the necessary operations to achieve and maintain that desired state. If a container crashes, Kubernetes will automatically restart it. If traffic increases, it can scale up your application.
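The “three replicas on port 80” example above can be written as a short manifest. This is a minimal sketch; the names and image are illustrative, not part of any real deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # illustrative name
spec:
  replicas: 3              # desired state: three replicas
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx       # illustrative image
        ports:
        - containerPort: 80   # the port the application listens on
```

You declare this file once; Kubernetes continuously reconciles the cluster toward it, restarting or rescheduling Pods as needed.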


What Kubernetes Is NOT (Important Distinctions)

To truly understand Kubernetes, it’s helpful to clarify what it’s not: it is not a traditional, all-inclusive PaaS; it does not build your application or deploy source code; and it does not dictate specific logging, monitoring, or middleware solutions.


Essential Kubernetes Concepts

Understanding core concepts such as Pods, ReplicaSets, Deployments, Services, namespaces, and labels is fundamental to working with Kubernetes; the sections that follow introduce each of them.


Powerful Kubernetes Capabilities

Kubernetes offers a wide array of capabilities that automate complex tasks, including automated rollouts and rollbacks, service discovery and load balancing, self-healing, horizontal scaling, and storage orchestration.


The Vibrant Kubernetes Ecosystem

The Kubernetes ecosystem is vast and rapidly growing, encompassing a wide range of services, support, and tools from various providers. Running containerized applications at scale often requires more than just Kubernetes itself; it integrates with other specialized tools.

The ecosystem spans many categories of providers, ranging from managed Kubernetes offerings by cloud providers to tools for monitoring, networking, storage, security, and CI/CD.


In Conclusion

Kubernetes is a highly portable, horizontally scalable, open-source container orchestration system that fundamentally automates deployment and simplifies the management of your applications.

Its core concepts revolve around Pods, ReplicaSets, Deployments, and Services, all managed declaratively through a desired state.

Kubernetes’ powerful capabilities include automated rollouts and rollbacks, self-healing, service discovery, load balancing, and horizontal scaling.

The thriving Kubernetes ecosystem provides a vast array of complementary tools and services from various providers, ensuring that you have the support and resources needed to build, deploy, and manage your containerized applications effectively.


Kubernetes architecture


Understanding Kubernetes Architecture: The Brains and the Brawn

A deployment of Kubernetes is called a Kubernetes cluster. At its core, a Kubernetes cluster is a collection of machines (nodes) that work together to run your containerized applications.

Every Kubernetes cluster is made up of two main logical components:

  1. The Control Plane (The Brains): This is the “master” part of the cluster. It’s responsible for making global decisions about the cluster and detecting/responding to events. Think of it as the cluster’s operating system or central nervous system.
  2. Worker Nodes (The Brawn): These are the “worker” machines where your actual applications (containers inside Pods) run.

Let’s break down each of these components.


The Control Plane: The Mastermind of Your Cluster

The control plane is the core set of components that maintain the desired state of your Kubernetes cluster. It continuously monitors the cluster and, if the actual state doesn’t match the desired state you’ve defined, it takes action to correct it.

Analogy: Imagine a thermostat. You set the desired temperature (your desired state). The thermostat (control plane) constantly monitors the room temperature (actual state) and, if it’s too hot or cold, it turns on the heating or cooling system (takes action) to bring the room to your desired temperature.

Examples of control plane decisions and actions include scheduling Pods onto nodes, restarting containers that fail, and scaling workloads in response to demand.

The machines running the control plane components are often referred to as master nodes, though Kubernetes can run its control plane across multiple machines for high availability.

Key Components of the Kubernetes Control Plane:

  1. Kube-API-Server (The Front-End and Communication Hub)

    • What it does: This is the primary interface to the Kubernetes cluster. It exposes the Kubernetes API, which is how all internal and external communication within the cluster happens.
    • How it works: When you (or another component) want to view or change the state of the cluster (e.g., deploy an application, check a Pod’s status), you send a command to the API server.
    • Scalability: The kube-api-server is designed to be highly scalable. You can run multiple instances of it and load balance traffic between them to ensure high availability and performance.
  2. Etcd (The Cluster Database)

    • What it does: etcd is a highly available, distributed key-value store. It serves as the single source of truth for your entire Kubernetes cluster.
    • How it works: All cluster data, including the desired state of your applications, network configurations, and secrets, is stored here. When you tell Kubernetes to deploy your application, that deployment configuration immediately goes into etcd. The entire Kubernetes system works to bring the actual state of the cluster in line with the state defined in etcd.
  3. Kube-Scheduler (The Workload Distributor)

    • What it does: The kube-scheduler is responsible for assigning newly created Pods (your application workloads) to available worker nodes.
    • How it works: It looks at various factors like resource requirements (CPU, memory), hardware constraints, policy constraints, data locality, inter-workload interference, and deadlines to select the “most optimal” node for a Pod to run on.
  4. Kube-Controller-Manager (The State Enforcer)

    • What it does: The kube-controller-manager runs various controller processes. These controllers continuously monitor the cluster’s actual state via the API server and compare it to the desired state stored in etcd.
    • How it works: If there’s a discrepancy, a controller takes action to bring the actual state closer to the desired state. For example:
      • A replication controller ensures a specified number of Pod replicas are always running.
      • A node controller monitors node health.
      • A service account controller creates default service accounts for new namespaces.
  5. Cloud-Controller-Manager (For Cloud Integrations)

    • What it does: This component runs controllers that specifically interact with the underlying cloud provider’s APIs.
    • How it works: It allows Kubernetes to integrate with cloud-specific features like:
      • Node management: Creating, deleting, or updating cloud instances.
      • Route management: Setting up network routes for containers.
      • Load balancer management: Provisioning cloud load balancers for Kubernetes Services.
    • Why it’s separate: Kubernetes aims to be cloud-agnostic. By having the cloud-controller-manager as a separate component, both Kubernetes and cloud providers can evolve independently without tightly coupling their codebases.

Worker Nodes: Where Your Applications Live

Worker nodes are the machines where your user applications actually run. These nodes can be virtual machines (VMs) in a cloud environment or physical servers on-premises. They are managed by the control plane and contain the necessary services to run and connect your containerized applications.

Key Components of a Kubernetes Worker Node:

  1. Pods (The Smallest Deployable Unit)

    • What they are: As mentioned before, Pods are the smallest deployable compute object in Kubernetes. Each Pod represents a single instance of a running process.
    • What they contain: A Pod typically contains one or more containers (e.g., your application code, a sidecar for logging). Containers within the same Pod share the Pod’s network namespace, storage volumes, and can communicate with each other directly.
  2. Kubelet (The Node Agent)

    • What it does: The kubelet is the most important agent running on every worker node. It’s the primary “worker” that communicates with the kube-api-server.
    • How it works:
      • It receives Pod specifications (instructions on which containers to run and how) from the API server.
      • It ensures that the containers specified in the Pod are running and healthy on its node.
      • It continuously reports the health and status of the Pods and the node itself back to the control plane.
      • When it needs to start a container, the kubelet interacts with the container runtime.
  3. Container Runtime (Running Your Containers)

    • What it does: The container runtime is the software responsible for downloading container images and running the containers on the worker node.
    • How it works: Kubernetes uses a Container Runtime Interface (CRI), which allows for pluggability. This means Kubernetes can work with various container runtimes without being tied to a single one.
    • Examples: While Docker is historically the best known, other popular and often more lightweight container runtimes include containerd and CRI-O.
  4. Kube-Proxy (The Network Proxy)

    • What it does: The kube-proxy is a network proxy that runs on each node in the cluster. It’s essential for enabling network communication within your Kubernetes cluster.
    • How it works: It maintains network rules (using iptables or ipvs on Linux) on the node that allow network communication to your Pods, both from inside and outside the cluster. When you define a Kubernetes Service, kube-proxy ensures that traffic directed to that Service’s IP address is correctly routed to the appropriate Pods, often involving load balancing across multiple Pods.

Putting It All Together

In essence, the Kubernetes control plane is the brain that makes all the decisions, orchestrates resources, and maintains the cluster’s desired state. The worker nodes are the muscle, executing the workloads and running your actual applications, constantly reporting their status back to the control plane.


Kubernetes objects


Welcome to Kubernetes Objects Part 1: The Building Blocks of Your Cluster

In the world of Kubernetes, everything you manage and control is represented as a Kubernetes Object. These objects are like the fundamental nouns in the Kubernetes language – they describe what you want your cluster to look like.

What is an “Object” in the Computing World?

Before diving into Kubernetes objects, let’s quickly review the general concept of an “object” in computing: an object is a self-contained entity that bundles data (its state) and, often, behavior, and that can be created, read, updated, and deleted.

Kubernetes Objects: Persistent Entities

Kubernetes objects are persistent entities that you use to represent the state of your cluster. They describe what you want your applications to look like, how they should run, and the resources they need.

Examples of Kubernetes objects include Pods, ReplicaSets, Deployments, Services, and Namespaces.

The Two Main Fields of a Kubernetes Object: spec and status

Every Kubernetes object you interact with fundamentally consists of two key fields:

  1. spec (Specification - Desired State):

    • This is provided by you, the user.
    • It defines the desired state of the object. You tell Kubernetes what you want to achieve.
    • Example: For a Pod, the spec might include the container image to run, the ports to expose, and resource limits. For a Deployment, it defines how many replicas you want.
  2. status (Current State):

    • This is provided and updated by Kubernetes.
    • It describes the current state of the object in the cluster.
    • Example: For a Pod, the status might show whether it’s Running, Pending, or Failed. For a Deployment, it shows how many replicas are currently ready.

The Core Principle: Kubernetes continuously works towards matching the current state (status) to your desired state (spec). This is the power of declarative management. You declare what you want, and Kubernetes works tirelessly to make it happen.
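You can see both fields on any live object. For example, `kubectl get deployment <name> -o yaml` returns output like this heavily abbreviated sketch (the name and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app           # illustrative name
spec:                    # desired state, written by you
  replicas: 3
status:                  # current state, written and updated by Kubernetes
  readyReplicas: 3
  availableReplicas: 3
```

When `status` drifts from `spec` (say a Pod crashes and `readyReplicas` drops to 2), the controllers act until the two match again.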

Interacting with Kubernetes Objects

You interact with Kubernetes objects primarily through the Kubernetes API. You can use the kubectl command-line tool, one of the official client libraries, or direct REST calls; all of these ultimately talk to the kube-api-server.


Organizing and Grouping Objects

As your cluster grows, you’ll have many objects. Kubernetes provides mechanisms to organize and group them:

Labels and Label Selectors

Labels are key-value pairs attached to objects, and label selectors let you query and operate on all objects that carry a given label. They are the core grouping mechanism in Kubernetes.

Namespaces

Namespaces partition a single cluster into virtual sub-clusters, letting teams and projects share a cluster while keeping their objects, names, and access policies separate.


Basic Kubernetes Objects and Their Relationships

Now, let’s look at some of the most fundamental Kubernetes objects:

Pods: The Smallest Deployable Unit

A Pod is the smallest deployable compute object in Kubernetes: one or more tightly coupled containers that share the same network namespace and storage volumes.

ReplicaSets: Ensuring a Stable Number of Pods

A ReplicaSet ensures that a specified number of identical Pod replicas are running at all times, replacing Pods that fail or are deleted.

Deployments: The Go-To for Managing Applications

A Deployment is a higher-level object that manages ReplicaSets for you, adding declarative updates, rolling upgrades, and rollbacks. In practice, you almost always create Deployments rather than ReplicaSets directly.


Connecting the Dots: How Objects Relate

A Deployment creates and manages a ReplicaSet, which in turn creates and maintains the desired number of Pods. Services (covered in Part 2) then provide stable network access to those Pods.

Conclusion: The Building Blocks of Your Kubernetes World

Pods, ReplicaSets, and Deployments are the foundation on which everything else in Kubernetes is built; Part 2 looks at how these workloads are connected and at more specialized objects.


Welcome to Kubernetes Objects Part 2: Connectivity and Specialized Workloads

In Part 1, we covered the foundational objects like Pods, ReplicaSets, and Deployments, which are all about running your applications. Now, we’ll explore how these applications communicate within and outside the cluster, and how Kubernetes handles more specific types of workloads.


Services: The Stable Front for Your Pods

A Service is a fundamental Kubernetes object that acts as a logical abstraction for a set of Pods in a cluster. It provides a stable network endpoint (an IP address and DNS name) and policies for accessing those Pods, effectively acting as a load balancer across them.

Why is a Service Needed?

Pods in Kubernetes are inherently ephemeral and volatile: they can be destroyed and recreated at any moment (during scaling, upgrades, or node failures), and every new Pod receives a new IP address.

This volatility creates a problem: how do other applications or external users consistently find and communicate with your application if its IP address keeps changing?

The Solution: A Service!

A Service solves this discoverability issue by providing a single, stable virtual IP address and DNS name that remains constant while the set of backing Pods (selected by labels) changes underneath it.

Service Properties:

A Service has a stable ClusterIP, a DNS name within the cluster, and a label selector that determines which Pods receive its traffic.

Service Types: Controlling Access

Kubernetes offers four main types of Services, each designed for different access patterns:

  1. ClusterIP (Default and Most Common)

    • Purpose: Exposes the Service on a cluster-internal IP address.
    • Access: Makes the Service only reachable from within the cluster. This is ideal for inter-service communication (e.g., your frontend application talking to your backend API, or your application talking to a database running in the cluster).
    • Behavior: Kubernetes assigns a unique ClusterIP address. You can optionally set a specific ClusterIP in the Service definition.
  2. NodePort (Exposing on Worker Nodes)

    • Purpose: An extension of ClusterIP. It exposes the Service on each Worker Node’s IP address at a static port (the “NodePort”).
    • Access: Makes the Service reachable from outside the cluster via NodeIP:NodePort.
    • Behavior: When you create a NodePort Service, Kubernetes automatically creates a ClusterIP Service internally. The NodePort Service then routes incoming requests on the specified static port to the ClusterIP Service, and subsequently to your Pods.
    • Security Note: Generally not recommended for production use for internet-facing applications due to security concerns (exposes services on every node’s IP, often on a high port range) and lack of advanced load balancing features. It’s more commonly used for development, testing, or specific internal scenarios.
  3. LoadBalancer (Cloud Provider Integration)

    • Purpose: An extension of NodePort. It exposes the Service to the Internet by provisioning an external load balancer from your cloud provider.
    • Access: Makes the Service directly accessible from the Internet via a dedicated external IP address (often a public IP).
    • Behavior: When you create a LoadBalancer Service, Kubernetes automatically creates a NodePort and a ClusterIP Service underneath it. The cloud provider’s external load balancer is then configured to direct traffic to the NodePorts on your cluster’s worker nodes, which then route to your Pods.
    • Cost: While highly convenient, external load balancers can be expensive and are managed by your cloud provider outside the Kubernetes cluster’s direct control.
  4. ExternalName (Mapping to External DNS)

    • Purpose: Maps the Service to a DNS name, not to a selector of Pods within the cluster.
    • Access: Allows Pods within your cluster to access an external service (e.g., a database hosted outside Kubernetes, or another application in a different cluster) using a Kubernetes Service name.
    • Behavior: This Service type returns a CNAME record with the value of the spec.externalName parameter. It does not act as a proxy or load balancer.
    • Use Cases:
      • Creating a Service that represents external storage.
      • Enabling Pods from different namespaces to talk to each other without needing to know the other Pod’s internal ClusterIP or DNS name.

Advanced Kubernetes Objects for Specific Workloads

Beyond basic Services, Kubernetes offers specialized objects for various workload patterns:

Ingress: Advanced External Access and Routing

An Ingress manages external HTTP and HTTPS access to Services in the cluster, providing host- and path-based routing, TLS termination, and virtual hosting through an Ingress controller.

DaemonSet: Ensuring a Pod on Every Node

A DaemonSet ensures that a copy of a particular Pod runs on every node (or a selected subset of nodes); it is commonly used for node-level agents such as log collectors and monitoring daemons.

StatefulSet: Managing Stateful Applications

A StatefulSet manages applications that need stable network identities, ordered deployment and scaling, and persistent per-Pod storage, which is typical for databases and other clustered systems.

Job: For Batch Processing and One-Time Tasks

A Job creates one or more Pods and runs them to completion, making it the right object for batch processing and one-off tasks rather than long-running services.


Conclusion: Expanding Your Kubernetes Toolkit


It’s time to get hands-on with Kubernetes! The primary tool you’ll use is kubectl, the command-line interface. Let’s break down what kubectl is, its command structure, different ways to use it, and some common commands.


Mastering kubectl: Your Command Line Gateway to Kubernetes

kubectl (pronounced “kube-control,” “kube-c-t-l,” or “kube-cud-dle”) is the Kubernetes command-line interface (CLI). It’s an indispensable tool for anyone working with Kubernetes clusters.

What can kubectl do for you?

kubectl allows you to deploy applications, inspect and manage cluster resources, view logs, and debug running workloads.

The kubectl Command Structure

All kubectl commands follow a consistent structure. Keeping each component in order is crucial for successful execution:

kubectl [command] [type] [name] [flags]

Let’s break down each part:

  • command: the operation you want to perform, such as create, get, apply, describe, or delete.
  • type: the resource type the operation acts on, such as pod, deployment, or service.
  • name: the name of a specific resource (optional; when omitted, the command applies to all resources of that type).
  • flags: optional modifiers, such as -o yaml for output format or -n to choose a namespace.


Three Ways to Use kubectl: Imperative, Imperative Object, and Declarative

Kubernetes offers three distinct approaches for managing objects using kubectl, each with its own features, advantages, and disadvantages. Understanding these will help you choose the right method for different scenarios.


1. Imperative Commands (The Quick and Direct Way)

What they are: Imperative commands allow you to create, update, and delete live objects directly by specifying operations and arguments on the command line.

Structure Example: kubectl run <pod-name> --image=<image-name>

Features and Advantages:

  • Simple and fast: one command, with no configuration files to write.
  • Ideal for learning, experiments, and quick one-off changes.

Disadvantages:

  • Leaves no record: there is no configuration file to review, version, or audit.
  • Hard to reproduce consistently across environments or share with a team.

Best for: Development and test environments, quick debugging, or initial explorations. Not recommended for production.


2. Imperative Object Configuration (Commands + Files)

What it is: With imperative object configuration, you still specify the operation (create, delete, replace) directly on the command line, but you point kubectl to one or more configuration files (YAML or JSON) that contain the full definition of the objects.

Structure Example: kubectl create -f <filename.yaml> or kubectl delete -f <filename.json>

Features and Advantages:

  • Configurations live in files, so they can be stored in version control, reviewed, and reused.
  • Operations are explicit, making it clear exactly what each command will do.

Disadvantages:

  • You must keep the files in sync with the live objects yourself; changes made directly to live objects are lost on the next replace.
  • Works best on whole files; partial updates are awkward.

Best for: Environments where version control and reproducibility are important, but you still want explicit control over operations. Better than imperative commands for shared development.


3. Declarative Object Configuration (The Desired State)

What it is: This is the recommended and most powerful approach, especially for production systems. With declarative object configuration, you store configuration data in files, just like with imperative object configuration. However, you use a single command, kubectl apply, to manage your objects.

Structure Example: kubectl apply -f <filename.yaml> or kubectl apply -f <directory-of-files>

Features and Advantages:

  • kubectl apply detects the differences between your files and the live objects and makes only the necessary changes.
  • It can operate on individual files or entire directories, and it preserves changes made by other writers.
  • It fits naturally into version control and CI/CD pipelines.

Disadvantages:

  • The merge behavior can be harder to reason about when something goes wrong.
  • Debugging unexpected states requires understanding how apply tracks configuration.

Best for: Production systems, CI/CD pipelines, collaborative development, and managing complex Kubernetes deployments. This is the gold standard.


Commonly Used kubectl Commands

Here are some essential kubectl commands that you’ll use frequently: kubectl get (list resources), kubectl describe (detailed resource information), kubectl apply (create or update from files), kubectl delete (remove resources), kubectl logs (view container logs), and kubectl scale (change replica counts).

Example Walkthrough: Deploying with kubectl apply

Let’s say you have a nginx-deployment.yaml file like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80
  1. Create the Deployment (declarative):

    kubectl apply -f nginx-deployment.yaml

    Output: deployment.apps/my-nginx-deployment created

  2. Verify the Deployment and Pods:

    kubectl get deployment my-nginx-deployment

    Output will show:

    NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
    my-nginx-deployment   3/3     3            3           <some-age>

    This confirms that your Deployment is running with 3 ready replicas.

    You can also check the Pods:

    kubectl get pods -l app=nginx

    Output will list the three Nginx Pods.


Conclusion

kubectl is your primary tool for interacting with a Kubernetes cluster. You’ve learned what kubectl is, how its commands are structured, the three approaches to managing objects (imperative commands, imperative object configuration, and declarative object configuration), and a set of everyday commands.

To explore all kubectl commands and their detailed options, always refer to the official Kubernetes documentation at https://kubernetes.io/docs/reference/kubectl/.


Let’s walk through the process of creating a Kubernetes Service using the Nginx image. This involves two core Kubernetes objects: a Deployment (to manage your Nginx Pods) and a Service (to expose Nginx to traffic).


Task 1: Creating a Kubernetes Service using Nginx

Goal: To run Nginx as a web server inside your Kubernetes cluster and make it accessible.

Nginx is a highly performant and stable open-source web server, known for its efficiency and ability to act as a reverse proxy, load balancer, and HTTP cache.

Here’s how you can achieve this by creating a Deployment and then exposing it as a Service:


Step 1: Create a Deployment named my-deployment1 using the nginx image.

A Deployment is a powerful Kubernetes object that manages a set of replicated Pods. It ensures that a specified number of Pod replicas are always running and handles updates gracefully (like rolling updates).

We’ll use an imperative command here for simplicity in this example, as it’s a quick way to get things running.

Command:

kubectl create deployment my-deployment1 --image=nginx

Explanation of the Command:

  • create deployment: the imperative command telling Kubernetes to create a Deployment object.
  • my-deployment1: the name assigned to the new Deployment.
  • --image=nginx: the container image the Deployment’s Pods will run (pulled from Docker Hub by default).

What this command does:

This command creates a Deployment named my-deployment1. Under the hood, this Deployment will automatically create a ReplicaSet (which ensures a desired number of Pods are running) and one or more Pods that run the Nginx container. By default, kubectl create deployment creates a Deployment with one replica.


Step 2: Expose the my-deployment1 Deployment as a Service.

Now that our Nginx Pods are running (managed by my-deployment1), we need a way for other applications (or external users) to access them. This is where a Service comes in. We’ll use a NodePort Service type to make it accessible from outside the cluster for demonstration.

Command:

kubectl expose deployment my-deployment1 --port=80 --type=NodePort --name=my-service1

Explanation of the Command:

  • expose deployment my-deployment1: creates a Service targeting the Pods managed by my-deployment1.
  • --port=80: the port the Service listens on (and, by default, the target port on the Pods).
  • --type=NodePort: exposes the Service on a static port on every worker node.
  • --name=my-service1: the name assigned to the new Service.

What this command does:

This command creates a Service named my-service1. This Service will automatically:

  1. Get a ClusterIP (internal IP address) that allows other Pods within the cluster to reach it.
  2. Expose itself on a dynamically assigned NodePort on all your worker nodes, making it accessible from outside the cluster.
  3. Route incoming traffic on port 80 (the Service port) to the target port 80 of the Nginx containers running inside the Pods (since Nginx’s default HTTP port is 80, this is a common mapping).
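The same Service could be written declaratively. A rough YAML equivalent of the kubectl expose command above, assuming the app=my-deployment1 label that kubectl create deployment applies by default, would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  type: NodePort
  selector:
    app: my-deployment1   # label applied automatically by `kubectl create deployment`
  ports:
  - port: 80              # Service port
    targetPort: 80        # container port on the Nginx Pods
```

Applying this file with kubectl apply -f would be the declarative counterpart to the imperative expose command.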

Step 3: Verify the Created Services.

It’s always good practice to verify that your Kubernetes objects have been created as expected.

Command:

kubectl get services

Explanation of the Command:

kubectl get services lists all Service objects in the current namespace, showing each Service’s type, cluster IP, external IP (if any), ports, and age.

Expected Output (similar to this):

NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1     <none>        443/TCP        XdYh
my-service1   NodePort    10.101.X.Y    <none>        80:3XXXX/TCP   XsYm

Interpreting the Output:

  • kubernetes is the built-in Service that exposes the Kubernetes API itself.
  • my-service1 is your new NodePort Service. In the PORT(S) column, 80:3XXXX/TCP means that port 80 inside the cluster is mapped to a node port in the 30000-32767 range (shown here as 3XXXX).

To Access Your Nginx Service:

Once you have the NodePort (e.g., 30080) from the kubectl get services output, you can access your Nginx web server from your local machine (or any machine that can reach your Kubernetes worker nodes) by pointing your web browser to:

http://<IP_ADDRESS_OF_ANY_WORKER_NODE>:<NodePort>

For example, if one of your worker nodes has the IP 192.168.1.100 and the NodePort is 30080, you would go to http://192.168.1.100:30080. You should see the default Nginx welcome page.


Let’s move on to Task 2: Managing Kubernetes Pods and Services. This task focuses on inspecting, labeling, and interacting with individual Pods, which are the core units running your containers.

You’ll need to have completed Task 1 (creating my-deployment1 and my-service1) for some of these steps to be relevant.


Task 2: Managing Kubernetes Pods and Services

This task will guide you through inspecting Pods, adding labels to them, and running a temporary test Pod to demonstrate logging.


Step 1: Get the list of Pods

Before you can manage individual Pods, you need to know their names. This command will list all Pods in your current namespace (by default, the default namespace). This list will include the Pod(s) created by your my-deployment1 Deployment from Task 1.

Command:

kubectl get pods

Explanation:

kubectl get pods lists every Pod in the current namespace, along with its readiness, status, restart count, and age.

Expected Output (example):

You will likely see one or more Pods with names starting with my-deployment1- followed by a unique hash, and their status (e.g., Running).

NAME                             READY   STATUS    RESTARTS   AGE
my-deployment1-6789b7b9b-abcde   1/1     Running   0          5m

(Note: The exact hash part of your Pod name will be different.)

Take note of the full name of one of your my-deployment1 Pods. You’ll need it for the next steps. For example, if your Pod name is my-deployment1-6789b7b9b-abcde, you’ll use that.


Step 2: Show labels for a specific Pod

Labels are key-value pairs used to identify and organize Kubernetes objects. Pods automatically inherit some labels from their managing Deployment or ReplicaSet. Let’s inspect them.

Command:

kubectl get pod <pod-name> --show-labels

Before running, replace <pod-name> with the actual name of one of your my-deployment1 Pods from Step 1.

Example: If your Pod name is my-deployment1-6789b7b9b-abcde:

kubectl get pod my-deployment1-6789b7b9b-abcde --show-labels

Explanation:

The --show-labels flag adds a LABELS column to the output, displaying all labels attached to the Pod.

Expected Output (example):

You’ll see the Pod details along with a LABELS column at the end:

NAME                             READY   STATUS    RESTARTS   AGE     LABELS
my-deployment1-6789b7b9b-abcde   1/1     Running   0          6m      app=my-deployment1,pod-template-hash=6789b7b9b

In this example, you see labels like app=my-deployment1 (inherited from the Deployment) and pod-template-hash=... (used internally by Kubernetes).


Step 3: Label the Pod

You can add custom labels to existing Kubernetes objects. This is useful for further categorization or for creating custom selection criteria.

Command:

kubectl label pods <pod-name> environment=deployment

Before running, replace <pod-name> with the actual name of the same Pod you used in Step 2.

Example: If your Pod name is my-deployment1-6789b7b9b-abcde:

kubectl label pods my-deployment1-6789b7b9b-abcde environment=deployment

Explanation:

kubectl label pods <pod-name> environment=deployment attaches a new label with key environment and value deployment to the specified Pod.

Expected Output:

pod/<pod-name> labeled
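Once the label is applied, you can use it as a selector exactly like the app label. A quick sketch of both directions (run against your own cluster; <pod-name> stays a placeholder):

```shell
# List only Pods carrying the new label
kubectl get pods -l environment=deployment

# Remove the label later by suffixing the key with a minus sign
kubectl label pods <pod-name> environment-
```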

Step 4: Show labels again to confirm the new label

After adding the label, it’s good to verify that it was applied successfully.

Command:

kubectl get pod <pod-name> --show-labels

Before running, replace <pod-name> with the actual name of the same Pod.

Example: If your Pod name is my-deployment1-6789b7b9b-abcde:

kubectl get pod my-deployment1-6789b7b9b-abcde --show-labels

Expected Output (example):

You should now see your newly added environment=deployment label in the LABELS column:

NAME                             READY   STATUS    RESTARTS   AGE     LABELS
my-deployment1-6789b7b9b-abcde   1/1     Running   0          7m      app=my-deployment1,environment=deployment,pod-template-hash=6789b7b9b

Step 5: Run a test Pod using the nginx image

This step demonstrates how to create a single, standalone Pod using an imperative kubectl run command. The --restart=Never flag is important here, as it tells Kubernetes that this Pod should not be automatically restarted if it exits or fails. This is typical for “Job-like” or temporary test Pods.

Command:

kubectl run my-test-pod --image=nginx --restart=Never

Explanation:

- kubectl run creates a single Pod named my-test-pod.
- --image=nginx specifies the container image to run.
- --restart=Never creates a bare Pod with restartPolicy: Never, so Kubernetes will not restart it if it exits or fails.

Expected Output:

pod/my-test-pod created

You can verify it’s running (or has completed) by running kubectl get pods again. It will appear alongside your my-deployment1 Pods.
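For comparison, the declarative equivalent of this imperative command is a bare Pod manifest — restartPolicy: Never corresponds to the --restart=Never flag (a sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-test-pod
spec:
  restartPolicy: Never    # same effect as --restart=Never
  containers:
  - name: my-test-pod
    image: nginx
```

Applying this file with kubectl apply -f produces the same standalone Pod.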


Step 6: Show logs for the test Pod

One of the most frequent tasks in troubleshooting Kubernetes applications is viewing their logs. kubectl logs allows you to retrieve the standard output and standard error streams from containers running inside your Pods.

Command:

kubectl logs my-test-pod

Explanation:

- kubectl logs my-test-pod streams the stdout/stderr of the Pod’s container. For Pods with multiple containers, add -c <container-name> to select one.

Expected Output (example):

You will see the Nginx access and error logs, as Nginx starts up and serves requests (even if no external requests are made yet, you’ll see startup logs).

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to interpret files in order of names:
/docker-entrypoint.sh: running /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
... (Nginx startup logs) ...
2025/06/02 14:45:00 [notice] 1#1: using the "epoll" event method
2025/06/02 14:45:00 [notice] 1#1: nginx/1.27.0
2025/06/02 14:45:00 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2025/06/02 14:45:00 [notice] 1#1: OS: Linux 5.10.0-27-cloud-amd64
225.1.1.2 - - [02/Jun/2025:09:00:00 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.29" "-"

(The specific logs will depend on the Nginx version and any internal health checks Kubernetes might run.)


By completing Task 2, you’ve gained practical experience with:

- Inspecting the labels Kubernetes attaches to Pods
- Adding your own custom labels to a running object
- Creating a standalone test Pod imperatively with kubectl run
- Retrieving container logs with kubectl logs


Let’s tackle Task 3: Deploying a StatefulSet. This is a more advanced Kubernetes object, crucial for managing applications that require stable network identities, ordered deployments, and persistent storage – typical for databases or clustered applications.


Task 3: Deploying a StatefulSet

Goal: To deploy a stateful application (in this case, Nginx, but configured with persistent storage and unique identities) using a Kubernetes StatefulSet.

Recall that while a Deployment is great for stateless applications, a StatefulSet provides stronger guarantees for stateful workloads.


Step 1: Create and open a file named statefulset.yaml in edit mode.

First, you’ll create an empty file.

Command:

touch statefulset.yaml

Explanation:

- touch creates an empty file named statefulset.yaml in the current directory (or simply updates its timestamp if it already exists).


Step 2: Open statefulset.yaml and add the provided code, then save the file.

You’ll need a text editor for this. Common choices include nano, vim, or any graphical editor if you’re on a desktop environment (e.g., VS Code, Sublime Text).

Command (example using nano):

nano statefulset.yaml

Once the editor opens, paste the following YAML content into it:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
spec:
  serviceName: "nginx-headless" # Changed name to avoid conflict with service from task 1
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts: # IMPORTANT: Add this section for persistent storage
        - name: www
          mountPath: /usr/share/nginx/html # Default Nginx serving directory
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Important correction: spec.serviceName has been changed from "nginx" to "nginx-headless". A StatefulSet should be paired with a headless Service, which gives each Pod its own stable DNS entry — crucial for stable network identities — and reusing the name nginx could conflict with a regular Service of the same name from an earlier task. A volumeMounts section has also been added under spec.template.spec.containers so the persistent storage is actually mounted into the container; without it, the storage would be provisioned but never used.
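For completeness, a minimal headless Service to pair with this StatefulSet might look like the following sketch — the key line is clusterIP: None, which is what makes a Service headless:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless   # must match spec.serviceName in the StatefulSet
spec:
  clusterIP: None        # "None" makes this a headless Service
  selector:
    app: nginx           # matches the StatefulSet's Pod labels
  ports:
  - port: 80
    name: web
```

With this in place, each Pod gets a stable DNS name of the form my-statefulset-0.nginx-headless.<namespace>.svc.cluster.local.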

Explanation of the YAML Fields:

- serviceName: the (headless) Service that governs the DNS domain for the Pods.
- replicas: 3: three Pods, created in order (my-statefulset-0, then -1, then -2).
- selector.matchLabels / template.metadata.labels: these must match so the StatefulSet knows which Pods it owns.
- volumeClaimTemplates: a template from which a PersistentVolumeClaim is created per Pod, giving each replica its own 1Gi volume.

Save the file after pasting the content (in nano, press Ctrl+O to write out, then Enter to confirm, then Ctrl+X to exit).


Step 3: Apply the StatefulSet configuration.

Now, use kubectl apply to create the StatefulSet and its associated resources in your cluster. This is the declarative way of managing Kubernetes objects.

Command:

kubectl apply -f statefulset.yaml

Explanation:

- kubectl apply -f statefulset.yaml submits the file’s contents as the desired state; Kubernetes creates the StatefulSet and, via the volume claim templates, one PersistentVolumeClaim per Pod.

Expected Output:

statefulset.apps/my-statefulset created

Step 4: Verify that the StatefulSet is created.

After applying the configuration, it’s essential to verify that your StatefulSet, its Pods, and the associated PersistentVolumeClaims have been created correctly.

Command (to check StatefulSet):

kubectl get statefulsets

Expected Output (example):

NAME             READY   AGE
my-statefulset   0/3     0s # It will take a moment for Pods to spin up and become ready

Keep running this command until READY shows 3/3. This indicates all three Pods in your StatefulSet are running.

Command (to check Pods):

kubectl get pods -l app=nginx # Using the label defined in the StatefulSet

Expected Output (example - will show Pods being created in order):

NAME               READY   STATUS    RESTARTS   AGE
my-statefulset-0   1/1     Running   0          30s
my-statefulset-1   1/1     Running   0          25s
my-statefulset-2   1/1     Running   0          20s

Notice the stable, ordered names (my-statefulset-0, -1, -2).

Command (to check PersistentVolumeClaims - PVCs):

kubectl get pvc

Expected Output (example):

NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-my-statefulset-0   Bound    pvc-abcd-1234-abcd-1234-abcd               1Gi        RWO            standard       1m
www-my-statefulset-1   Bound    pvc-efgh-5678-efgh-5678-efgh               1Gi        RWO            standard       1m
www-my-statefulset-2   Bound    pvc-ijkl-9012-ijkl-9012-ijkl               1Gi        RWO            standard       1m

You’ll see a PVC for each Pod in the StatefulSet, with names like www-my-statefulset-0, www-my-statefulset-1, etc., reflecting their stable identity.


Let’s proceed with Task 4: Implementing a DaemonSet. This is a powerful Kubernetes object for ensuring that a specific Pod runs on all (or a selected subset of) your cluster nodes. It’s ideal for system-level services like logging agents, monitoring agents, or network proxies that need to be present on every worker machine.


Task 4: Implementing a DaemonSet

Goal: To deploy a DaemonSet that ensures a copy of a specific Pod (in this case, an Nginx container as an example) runs on all available nodes in your Kubernetes cluster.


Step 1: Create a file named daemonset.yaml and open it in edit mode.

First, let’s create the empty YAML file where you’ll define your DaemonSet.

Command:

touch daemonset.yaml

Explanation:

- touch creates an empty file named daemonset.yaml in the current directory.


Step 2: Open daemonset.yaml and add the provided code, then save the file.

Now, open the daemonset.yaml file using your preferred text editor (like nano or vim) and paste the following content into it:

Command (example using nano):

nano daemonset.yaml

Paste the following YAML:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      name: my-daemonset
  template:
    metadata:
      labels:
        name: my-daemonset
    spec:
      containers:
      - name: my-daemonset-container # Changed container name for clarity
        image: nginx

Explanation of the YAML Fields:

- Note that there is no replicas field: a DaemonSet runs exactly one copy of the Pod on each eligible node, so the node count determines the Pod count.
- selector.matchLabels must match template.metadata.labels (name: my-daemonset here) so the DaemonSet can track the Pods it owns.

Save the file after pasting the content. (In nano, press Ctrl+O to write out, then Enter to confirm, then Ctrl+X to exit.)


Step 3: Apply the DaemonSet

Now, apply the DaemonSet configuration to your Kubernetes cluster using kubectl apply.

Command:

kubectl apply -f daemonset.yaml

Explanation:

- kubectl apply -f daemonset.yaml creates the DaemonSet; the scheduler then places one of its Pods on every eligible node.

Expected Output:

daemonset.apps/my-daemonset created

Step 4: Verify that the DaemonSet has been created and its Pods are running

After applying, you should verify that the DaemonSet has been successfully created and that it has launched Pods on your cluster’s nodes.

Command (to check the DaemonSet itself):

kubectl get daemonsets

Explanation:

- kubectl get daemonsets lists the DaemonSets in the current namespace, showing how many Pods are desired, current, ready, and up to date.

Expected Output (example):

The output will provide key information about your my-daemonset. The DESIRED and CURRENT columns should ideally match the number of worker nodes in your cluster.

NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
my-daemonset   X         X         X       X            X           <none>          YsZm

Command (to check the Pods created by the DaemonSet):

You can also list the individual Pods to see them running on different nodes.

kubectl get pods -o wide -l name=my-daemonset # Using the label defined in the DaemonSet

Explanation:

- -l name=my-daemonset filters to Pods carrying the label defined in the DaemonSet’s Pod template.
- -o wide adds the IP and NODE columns, letting you confirm there is one Pod per node.

Expected Output (example):

You should see one Pod for each of your worker nodes (or eligible nodes if you had a node selector).

NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
my-daemonset-abcde    1/1     Running   0          XsYm  10.x.x.x     node1.example.com   <none>           <none>
my-daemonset-fghij    1/1     Running   0          XsYm  10.y.y.y     node2.example.com   <none>           <none>
# ... and so on for each node ...

Notice that each Pod created by the DaemonSet has a unique suffix and is running on a different NODE.


Conclusion

Congratulations! You have successfully completed the practice lab on Kubernetes!

You’ve gone through several foundational and important tasks:

- Verifying your environment and using basic kubectl commands
- Creating Pods imperatively, via imperative object configuration, and declaratively through a Deployment
- Working with labels and container logs
- Deploying a StatefulSet with per-Pod persistent storage
- Implementing a DaemonSet that runs on every node

These tasks cover essential concepts and kubectl operations that are fundamental to working effectively with Kubernetes in real-world scenarios. Keep practicing and exploring the vast capabilities of Kubernetes!

Let’s clarify the distinction between Ingress Objects and Ingress Controllers in Kubernetes. This is a common point of confusion, but understanding their roles is key to effectively managing external access to your applications.


Ingress Objects vs. Ingress Controllers: The Two Sides of External Access

In Kubernetes, providing external access to services, especially for HTTP and HTTPS traffic, involves two core components: the Ingress API object and the Ingress Controller. They work together to expose your applications to the outside world.

Ingress Objects: The “What” (The Rules)

The Ingress object is a Kubernetes API resource that you define (typically in a YAML file) to describe how external traffic should be routed to your internal cluster services. Think of it as a set of declarative traffic rules or a high-level configuration.

Key characteristics of Ingress Objects:

- Defined declaratively in YAML, like any other Kubernetes resource
- Specify routing rules: hostnames, URL paths, and the backend Services traffic should reach
- Can declare TLS settings for terminating HTTPS
- Do nothing on their own — they take effect only when an Ingress Controller is running in the cluster

Analogy: An Ingress object is like writing down a set of instructions on how traffic should be directed at a large intersection: “Cars going to the library should turn left here,” “Cars going to the hospital should go straight,” etc.
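To make this concrete, a minimal Ingress object might look like the following sketch (the hostname and backend Service name are hypothetical placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service       # hypothetical backend Service
            port:
              number: 80
```

On its own this object does nothing — an Ingress Controller must be running in the cluster to act on it.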

Ingress Controllers: The “How” (The Executor)

The Ingress Controller is a separate, active component that runs as a Pod (or set of Pods) within your Kubernetes cluster. Its responsibility is to implement the rules specified by the Ingress API objects. It continuously watches the Kubernetes API for new or updated Ingress objects and then configures an actual load balancer or proxy to fulfill those rules.

Key characteristics of Ingress Controllers:

- Run as Pods inside the cluster and are not started automatically — you must deploy one explicitly
- Continuously watch the Kubernetes API for Ingress objects and translate their rules into live proxy/load-balancer configuration
- Common implementations include the NGINX Ingress Controller, Traefik, and HAProxy

Analogy: The Ingress Controller is like the actual traffic police officer or the automated traffic light system that reads those instructions (Ingress objects) and then physically directs the cars (traffic) according to the rules.

Ingress Objects vs. Ingress Controllers: A Feature Comparison

| Feature | Ingress Objects | Ingress Controllers |
| --- | --- | --- |
| Definition | Kubernetes API object (a YAML manifest) | A running Pod (or set of Pods) within the cluster |
| Primary Function | Defines the rules for external access and routing | Implements those rules; acts as the traffic proxy/load balancer |
| Configuration Source | Rules are defined directly in the Ingress resource YAML | Reads and processes information from Ingress objects |
| Traffic Handling | Specifies HTTP/HTTPS routes, hosts, paths | Utilizes a load balancer, configures frontends for traffic, performs TLS termination |
| Activation | Created and configured like any other Kubernetes object (e.g., kubectl apply -f ingress.yaml) | Must be explicitly deployed and running in the cluster for Ingress objects to have any effect |
| Handling Protocols | Focused on defining rules for HTTP and HTTPS traffic | The actual component that processes and routes HTTP/HTTPS (and potentially other) traffic |
| Automatic Startup | Activated upon configuration with an Ingress resource (but only if a controller is present) | Requires explicit activation/deployment in the cluster |
| Analogy | The architectural blueprint; the rule set for traffic | The actual builder/executor; the traffic director (e.g., an NGINX instance) |

Conclusion

In Kubernetes, overseeing external access to your applications is a collaborative effort between Ingress objects and Ingress controllers.


Next, let’s look at the most common Kubernetes anti-patterns and the best practices that avoid them.


Kubernetes Antipatterns: Pitfalls to Avoid for Robust Deployments

Kubernetes is a powerful platform, but without adhering to best practices, it’s easy to fall into “antipatterns” – practices that seem intuitive but ultimately lead to complications, instability, and increased operational overhead. Identifying and avoiding these is crucial for maintaining a healthy container orchestration environment.

Here are ten prevalent Kubernetes anti-patterns and the recommended alternative practices:


1. Anti-pattern: Baking Configuration into Container Images

Issue: Embedding environment-specific configurations (like hardcoded IP addresses, database credentials, or environment-specific prefixes) directly into your Docker images. This leads to:

- A separate image build for every environment, multiplying build and storage costs
- Full rebuilds for simple configuration changes, slowing down releases
- Secrets baked into image layers, readable by anyone with pull access

Best Practice: Create generic, immutable container images independent of runtime settings.
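As an illustration (all names are hypothetical), runtime configuration can be supplied through a ConfigMap injected as environment variables, so the same image runs unchanged in every environment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical name; one per environment
data:
  DATABASE_HOST: db.staging.internal
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # same image for every environment
    envFrom:
    - configMapRef:
        name: app-config        # environment differences live here, not in the image
```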


2. Anti-pattern: Mixing Application and Infrastructure Deployment in a Single Pipeline

Issue: Using one continuous integration/continuous delivery (CI/CD) pipeline to deploy both your application code and your underlying infrastructure (e.g., Kubernetes cluster, networking, databases).

Best Practice: Separate infrastructure and application deployment into distinct pipelines.


3. Anti-pattern: Relying on Specific Deployment Order

Issue: Assuming or enforcing a specific startup order for application components (e.g., database must be up before API, API before frontend). In Kubernetes, Pods and containers are started concurrently.

Best Practice: Design applications to be resilient and tolerant of simultaneous component initiation and transient failures.
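One common way to tolerate arbitrary startup order is a retry loop — either inside the application itself or as an initContainer that blocks until a dependency answers. A sketch (the service name, port, and image are hypothetical):

```yaml
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    # Retry until the (hypothetical) database Service accepts TCP connections
    command: ['sh', '-c', 'until nc -z my-database 5432; do echo waiting; sleep 2; done']
  containers:
  - name: api
    image: registry.example.com/api:1.0   # hypothetical application image
```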


4. Anti-pattern: Not Setting Memory and CPU Limits for Pods

Issue: Running Pods without specifying requests and limits for CPU and memory resources.

Best Practice: Establish resource requests and limits for all containers within your Pods.
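A sketch of what this looks like in a container spec — the numbers are illustrative, not recommendations; profile your own workload:

```yaml
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    resources:
      requests:               # what the scheduler reserves for the Pod
        memory: "128Mi"
        cpu: "250m"
      limits:                 # hard caps enforced at runtime
        memory: "256Mi"
        cpu: "500m"
```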


5. Anti-pattern: Pulling the latest Tag in Production

Issue: Using mutable image tags like :latest in production environments.

Best Practice: Use specific, immutable, and meaningful image tags in production.
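In practice this is a one-line difference in the Pod template (registry path hypothetical):

```yaml
# Anti-pattern: mutable tag – "latest" can silently change between pulls
image: registry.example.com/my-app:latest

# Best practice: immutable, meaningful tag tied to a release or commit
image: registry.example.com/my-app:v1.4.2
```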


6. Anti-pattern: Consolidating Production and Non-Production Workloads in a Single Cluster

Issue: Running development, testing, staging, and production workloads within the same Kubernetes cluster.

Best Practice: Maintain at least two separate clusters: one for production and one for non-production (e.g., development, staging).


7. Anti-pattern: Ad-Hoc Deployments with kubectl edit/kubectl patch

Issue: Making direct, manual modifications to live Kubernetes objects using commands like kubectl edit or kubectl patch without updating the source configuration files (e.g., YAML in Git).

Best Practice: Implement GitOps principles: Conduct all deployments and configuration changes through Git commits.


8. Anti-pattern: Neglecting Health Checks or Using Overly Complex Probes

Issue: Not configuring liveness and readiness probes, or designing them poorly.

Best Practice: Configure robust liveness and readiness probes for each container.
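A sketch of simple, cheap probes on an HTTP service — paths, port, and timings are illustrative; many applications expose dedicated /healthz and /ready endpoints for exactly this purpose:

```yaml
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    livenessProbe:            # restart the container if this starts failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:           # remove the Pod from Service endpoints while failing
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```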


9. Anti-pattern: Embedding Secrets or Poor Secret Handling

Issue: Storing sensitive information (passwords, API keys, certificates) directly in container images, Git repositories, or using inconsistent/insecure methods for injection.

Best Practice: Use a consistent and secure secret handling strategy, typically involving Kubernetes Secrets or dedicated secret management systems.
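A minimal sketch using a native Kubernetes Secret consumed as an environment variable (names and the value are hypothetical placeholders; stringData is base64-encoded by the API server on write):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical name
type: Opaque
stringData:
  password: s3cr3t-value      # placeholder – never commit real secrets to Git
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```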


10. Anti-pattern: Direct Pod Usage or Running Multiple Processes Per Container

Issue:

- Creating bare Pods directly, without a managing controller – such Pods are not rescheduled if their node fails and gain no self-healing or rolling-update behavior
- Running multiple processes inside a single container, which muddles logging, health checks, and lifecycle management

Best Practice:

- Always create Pods through a controller appropriate to the workload (Deployment, StatefulSet, DaemonSet, or Job)
- Run one process per container; when tightly coupled helpers are needed, use multiple single-process containers within the same Pod (the sidecar pattern)
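A sketch of the sidecar pattern — two single-process containers sharing one Pod (image names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app                       # main process
    image: registry.example.com/app:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper               # helper process in its own container
    image: registry.example.com/log-shipper:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                    # scratch volume shared by the two containers
```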


Let’s proceed with verifying your Kubernetes environment and command-line tools. This is a crucial first step for any Kubernetes lab or project, ensuring that your setup is ready to interact with a cluster.


Verify the Environment and Command Line Tools

Step 1: Open a Terminal Window (if not already open)

If you don’t already have a terminal window open in your integrated development environment (IDE) or local setup, please open one.

(Note: If a terminal is already visible and active, you can skip this step.)


Step 2: Verify that kubectl CLI is installed.

This command checks if the kubectl command-line tool is installed and can communicate with a Kubernetes cluster. It will show you the client and server versions.

Command:

kubectl version

Explanation:

- kubectl version prints the version of the kubectl client and, if it can reach a cluster, the version of the Kubernetes API server.

Expected Output (similar to this, versions may vary):

Client Version: version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.0", GitCommit:"<some-hash>", GitTreeState:"clean", BuildDate:"...", GoVersion:"...", Compiler:"gc", Platform:"..."}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.0", GitCommit:"<some-hash>", GitTreeState:"clean", BuildDate:"...", GoVersion:"...", Compiler:"gc", Platform:"..."}

Step 3: Change to your project folder.

It’s good practice to organize your lab files within a specific project directory.

Command:

cd /home/project

Explanation:

- cd /home/project changes your working directory to the project folder used throughout this lab.

(Note: Please skip this step if you are already in the /home/project directory.)


Step 4: Clone the git repository that contains the artifacts needed for this lab, if it doesn’t already exist.

This command will download the necessary lab files from a Git repository. It uses a conditional statement to only clone the repository if the CC201 directory doesn’t already exist, preventing errors if you run the command multiple times.

Command:

[ ! -d 'CC201' ] && git clone https://github.com/ibm-developer-skills-network/CC201.git

Explanation:

- [ ! -d 'CC201' ] tests whether a directory named CC201 does not exist.
- && runs the git clone only when that test succeeds, so re-running the command is harmless.

Expected Output:

If the repository is not yet present, you’ll see git’s cloning progress (Cloning into 'CC201'...). If the CC201 directory already exists, the command prints nothing and simply returns.

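The `[ ! -d … ] && …` guard is plain shell, so you can watch its behavior without touching the network — the right-hand command runs only when the directory is absent. A sketch using a throwaway directory:

```shell
dir=/tmp/clone-guard-demo          # throwaway stand-in for CC201
rm -rf "$dir"                      # ensure a clean start
[ ! -d "$dir" ] && echo "cloning"  # directory absent: guard passes, prints "cloning"
mkdir -p "$dir"                    # simulate a completed clone
[ ! -d "$dir" ] && echo "cloning"  # directory present: guard fails, prints nothing
rm -rf "$dir"
```

The same idempotent pattern works for any "do this only once" setup step.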

Step 5: Change to the directory for this lab.

Now, navigate into the specific lab directory within the cloned repository.

Command:

cd CC201/labs/2_IntroKubernetes/

Explanation:

- cd CC201/labs/2_IntroKubernetes/ moves into the directory that holds this lab’s YAML artifacts.


Step 6: List the contents of this directory to see the artifacts for this lab.

This command will show you the files and subdirectories present in your current working directory, confirming you are in the correct place and can see the lab artifacts.

Command:

ls

Explanation:

- ls lists the files in the current directory so you can confirm the lab artifacts are present.

Expected Output (example):

You should see a list of files that are part of this specific lab, such as:

daemonset.yaml  deployment.yaml  README.md  service.yaml  statefulset.yaml

This confirms your environment is set up correctly, kubectl is working, and you have access to the necessary lab files. You are now ready to proceed with Kubernetes tasks!

Let’s explore some basic kubectl commands that help you understand your Kubernetes configuration and interact with your cluster’s namespaces.

As a reminder, kubectl needs to know which cluster to talk to and with what credentials. This information is stored in a kubeconfig file.
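A skeletal kubeconfig, using placeholder names, shows how clusters, users, and contexts fit together (real files also carry certificates and credentials):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: cluster-name-example
  cluster:
    server: https://203.0.113.10:6443   # placeholder API server address
users:
- name: user-name-example
  user: {}                              # credentials (tokens, certs) go here
contexts:
- name: current-context-name
  context:
    cluster: cluster-name-example
    user: user-name-example
    namespace: default
current-context: current-context-name   # the context kubectl uses by default
```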


Use the kubectl CLI: Exploring Contexts and Clusters

Step 1: Get Cluster Information

This command displays the names of the Kubernetes clusters that kubectl is configured to interact with. Your kubeconfig file stores connection details for one or more clusters.

Command:

kubectl config get-clusters

Explanation:

- kubectl config get-clusters reads your kubeconfig file and lists the names of all clusters it knows about.

Expected Output (example):

You should see at least one cluster listed. The name might vary depending on your Kubernetes environment (e.g., kubernetes, minikube, gke_project-name_zone_cluster-name, docker-desktop).

NAME
cluster-name-example

Step 2: View Your Current Context

A kubectl context is a convenient way to group access parameters: a specific cluster, a user (credentials to access that cluster), and a namespace (the default namespace for that cluster for subsequent commands). Viewing your current context helps you understand which cluster and namespace your kubectl commands will target.

Command:

kubectl config get-contexts

Explanation:

- kubectl config get-contexts lists every context in your kubeconfig; the asterisk in the CURRENT column marks the one in use.

Expected Output (example):

You will see a list of contexts. One will be marked with an asterisk, indicating it’s the active context.

CURRENT   NAME                   CLUSTER                AUTHINFO            NAMESPACE
*         current-context-name   cluster-name-example   user-name-example   default

Step 3: List all the Pods in your namespace.

This command will show you any Pods currently running in the default namespace (or whatever namespace is specified in your current context).

Command:

kubectl get pods

Explanation:

- kubectl get pods lists the Pods in the namespace of your current context (default unless configured otherwise).

Expected Output:

If this is a new session or a clean cluster/namespace, you might see “No resources found in default namespace.”

No resources found in default namespace.

If you have previously run other tasks (like Task 1 or Task 3), you might see some Pods listed, for example:

NAME                             READY   STATUS    RESTARTS   AGE
my-deployment1-6789b7b9b-abcde   1/1     Running   0          25m
my-statefulset-0                 1/1     Running   0          10m
my-statefulset-1                 1/1     Running   0          9m
my-statefulset-2                 1/1     Running   0          8m
my-test-pod                      0/1     Completed 0          15m

This command is fundamental for checking the status of your deployed applications.


Let’s proceed with creating and managing your first Pod using an imperative command. This will involve setting up your environment, building and pushing a container image, and then deploying and inspecting it in Kubernetes.


Create a Pod with an Imperative Command

This section demonstrates the imperative approach to Pod creation, where you directly instruct Kubernetes what to do on the command line.

Step 1: Export your namespace as an environment variable.

It’s common practice to store frequently used values like your namespace in an environment variable. This makes commands more readable and less prone to typos. Make sure $USERNAME is correctly replaced by your actual username in your lab environment.

Command:

export MY_NAMESPACE=sn-labs-$USERNAME

Explanation:

- export MY_NAMESPACE=sn-labs-$USERNAME stores your Container Registry namespace in an environment variable, so later commands can reference $MY_NAMESPACE instead of repeating the full name.

(No direct output from this command.)


Step 2: Navigate to the Dockerfile and review it.

This step instructs you to visually inspect the Dockerfile that will be used to build your hello-world image.

(No command to run here; this is a navigation and inspection step.)


Step 3: Build and push the image again.

It’s a good habit to rebuild and push your Docker image before deploying to ensure you’re using the latest version, especially if it’s been a while since your last lab session. This command builds the hello-world:1 image and pushes it to your designated IBM Cloud Container Registry namespace.

Command:

docker build -t us.icr.io/$MY_NAMESPACE/hello-world:1 . && docker push us.icr.io/$MY_NAMESPACE/hello-world:1

Explanation:

- docker build -t us.icr.io/$MY_NAMESPACE/hello-world:1 . builds the image from the Dockerfile in the current directory and tags it for IBM Cloud Container Registry.
- && docker push ... uploads the image to the registry, but only if the build succeeds.

Expected Output: You will see output from the Docker build process (steps of building the image) and then output from the Docker push process (uploading layers to the registry). This might take a few moments.


Step 4: Run the hello-world image as a container in Kubernetes.

Now, you’ll create a Pod using an imperative kubectl run command. This command tells Kubernetes to create a Pod directly. The --overrides option is used to inject imagePullSecrets, which are necessary for Kubernetes to authenticate with IBM Cloud Container Registry and pull your private image.

Command:

kubectl run hello-world --image us.icr.io/$MY_NAMESPACE/hello-world:1 --overrides='{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"icr"}]}}}}'

Explanation:

- kubectl run hello-world creates a Pod named hello-world.
- --image points it at the image you just pushed.
- --overrides injects imagePullSecrets (the icr secret) so Kubernetes can authenticate to the private registry when pulling the image.

Expected Output:

pod/hello-world created

Step 5: List the Pods in your namespace.

Verify that your hello-world Pod has been created and its status.

Command:

kubectl get pods

Explanation:

- kubectl get pods lists the Pods in your namespace so you can watch the new Pod’s status.

Expected Output (example):

You should see your hello-world Pod listed. Its status might initially be ContainerCreating and then transition to Running.

NAME          READY   STATUS              RESTARTS   AGE
hello-world   0/1     ContainerCreating   0          5s
# ... after a few moments ...
hello-world   1/1     Running             0          15s

Step 6: Get more details about the Pod using the wide option.

The -o wide option provides additional useful information directly in the table output, such as the Pod’s IP address and the Node it’s running on.

Command:

kubectl get pods -o wide

Explanation:

- -o wide extends the table with the Pod’s cluster IP and the worker node it was scheduled onto.

Expected Output (example):

NAME          READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
hello-world   1/1     Running   0          1m    10.x.x.x     your-worker-node   <none>           <none>

Step 7: Describe the Pod to get even more details.

The kubectl describe command provides an extensive, human-readable summary of a resource, including events, container status, and conditions. This is invaluable for debugging.

Command:

kubectl describe pod hello-world

Explanation:

- kubectl describe pod hello-world aggregates the Pod’s spec, status, conditions, and recent events into one human-readable report – usually the first stop when debugging.

Expected Output (example - this is a long output, showing key sections):

Name:             hello-world
Namespace:        default
Priority:         0
Node:             your-worker-node/10.x.x.x
Start Time:       Mon, 02 Jun 2025 15:00:00 +0545
Labels:           run=hello-world
Annotations:      <none>
Status:           Running
IP:               10.x.x.x
IPs:
  IP:  10.x.x.x
Containers:
  hello-world:
    Container ID:   containerd://<container-id-hash>
    Image:          us.icr.io/sn-labs-<USERNAME>/hello-world:1
    Image ID:       us.icr.io/sn-labs-<USERNAME>/hello-world@sha256:<image-id-hash>
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 02 Jun 2025 15:00:05 +0545
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:            <none>
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  1m     default-scheduler  Successfully assigned default/hello-world to your-worker-node
  Normal  Pulling    55s    kubelet            Pulling image "us.icr.io/sn-labs-<USERNAME>/hello-world:1"
  Normal  Pulled     45s    kubelet            Successfully pulled image "us.icr.io/sn-labs-<USERNAME>/hello-world:1" in 9.99s (9.99s total)
  Normal  Created    45s    kubelet            Created container hello-world
  Normal  Started    45s    kubelet            Started container hello-world

This output provides detailed information about:

- Where the Pod is running (Node, IP) and when it started
- The container’s image, state, and restart count
- The Pod’s conditions (scheduled, initialized, ready)
- A timeline of events: scheduling, image pull, container creation, and start


Step 8: Delete the Pod.

Since this hello-world Pod was created imperatively and isn’t managed by a higher-level controller like a Deployment, you need to explicitly delete it.

Command:

kubectl delete pod hello-world

Explanation:

- kubectl delete pod hello-world removes the Pod; because no controller manages it, nothing recreates it.

Expected Output:

pod "hello-world" deleted

(Please wait for the terminal prompt to reappear, as the deletion process takes a moment.)


Step 9: List the Pods to verify that none exist.

Confirm that the hello-world Pod has been successfully removed.

Command:

kubectl get pods

Expected Output:

No resources found in default namespace.

This confirms that the Pod you created imperatively has been successfully deleted.


Let’s move on to creating a Pod using imperative object configuration. This approach combines the clarity of imperative commands with the reusability of configuration files, allowing you to explicitly state your desired action (like create) while referencing a detailed YAML definition.


Create a Pod with Imperative Object Configuration

In this method, you’ll use a pre-defined YAML file to describe your Pod and then instruct kubectl to create it.

Step 1: View and Edit the Configuration File (hello-world-create.yaml)

First, you’ll need to locate and modify the provided YAML file to include your specific namespace.

Once opened, you’ll see content similar to this:

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world
    image: us.icr.io/<my_namespace>/hello-world:1
  imagePullSecrets:
  - name: icr

Edit the file: Replace <my_namespace> with your actual namespace. Remember, you defined this earlier as sn-labs-$USERNAME. So, for example, if your username is user123, you’d change the line to:

    image: us.icr.io/sn-labs-user123/hello-world:1

Save the file after making this change.


Step 2: Imperatively Create a Pod using the Configuration File

Now that your hello-world-create.yaml file is updated with the correct image path, you can use kubectl create to deploy the Pod. This is an imperative command because you are explicitly telling Kubernetes to perform the “create” action.

Command:

kubectl create -f hello-world-create.yaml

Explanation:

- kubectl create -f hello-world-create.yaml is imperative – you explicitly ask for a create – but the Pod’s details come from the configuration file. Unlike kubectl apply, it fails if the object already exists.

Expected Output:

pod/hello-world created

Step 3: List the Pods in your namespace.

Let’s confirm that the Pod was successfully created and is running.

Command:

kubectl get pods

Explanation:

- kubectl get pods confirms that the Pod defined in the file has been created.

Expected Output (example):

You should see your hello-world Pod. It might show ContainerCreating initially, then transition to Running.

NAME          READY   STATUS    RESTARTS   AGE
hello-world   1/1     Running   0          XsYm

Step 4: Delete the Pod.

Since this Pod was created directly (even with a configuration file), you’ll explicitly delete it.

Command:

kubectl delete pod hello-world

Explanation:

- As before, this Pod has no managing controller, so deleting it removes it permanently.

Expected Output:

pod "hello-world" deleted

(Please wait for the terminal prompt to reappear, as the deletion process can take a few moments.)


Step 5: List the Pods to verify that none exist.

Finally, confirm that the Pod has been completely removed from your namespace.

Command:

kubectl get pods

Expected Output:

No resources found in default namespace.

Now we’ll work with the declarative command approach in Kubernetes. This is the recommended method for production environments because you define the desired state of your cluster in configuration files, and Kubernetes works to achieve and maintain that state. Instead of telling Kubernetes what to do (like create or delete), you tell it what you want.


Create a Pod with a Declarative Command (using a Deployment)

In this section, you’ll create a Deployment object, which in turn manages your Pods. This demonstrates the self-healing capabilities of Kubernetes.

Step 1: View and Edit the Configuration File (hello-world-apply.yaml)

You’ll use the Explorer to open and modify the provided YAML file.

You’ll see content similar to this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: us.icr.io/<my_namespace>/hello-world:1
      imagePullSecrets:
      - name: icr

Key observations about this file:

- The kind is Deployment, so you are creating a controller object rather than a bare Pod.
- replicas: 3 declares a desired state of three identical Pods.
- The selector’s matchLabels and the Pod template’s labels both use app: hello-world; this is how the Deployment identifies the Pods it manages.
- imagePullSecrets references the icr secret, which lets the cluster pull the image from your private registry.

Edit the file: Replace <my_namespace> with your actual namespace (e.g., sn-labs-yourusername).

For example, if your username part is user123, change the line to:

        image: us.icr.io/sn-labs-user123/hello-world:1

Save the file after making this change.
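
The matchLabels selector in this file is worth a closer look: the Deployment manages exactly the Pods carrying the app: hello-world label. You can filter by that same label yourself. (These commands assume the live lab cluster; the label value comes from the YAML above.)

```shell
# Show only Pods matching this Deployment's selector
kubectl get pods -l app=hello-world

# Show every Pod along with its labels
kubectl get pods --show-labels
```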


Step 2: Use the kubectl apply command to set this configuration as the desired state.

This is the cornerstone of declarative management in Kubernetes. You are telling Kubernetes: “This is what I want the state of my cluster to be.” kubectl apply is intelligent enough to create the resource if it doesn’t exist or update it if it does.

Command:

kubectl apply -f hello-world-apply.yaml

Explanation:

The -f flag points kubectl apply at the configuration file. Kubernetes compares the desired state in the file with the current state of the cluster and creates or updates resources as needed.

Expected Output:

deployment.apps/hello-world created

Step 3: Get the Deployments to ensure that a Deployment was created.

Verify that the Deployment object itself has been created.

Command:

kubectl get deployments

Explanation:

kubectl get deployments lists the Deployment objects in your namespace. The READY, UP-TO-DATE, and AVAILABLE columns report how many of the desired replicas are in each condition.

Expected Output (example):

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-world   0/3     0            0           5s  # Initially, as pods are starting
# ... after a moment ...
hello-world   3/3     3            3           1m

The READY column will eventually show 3/3, indicating that three desired replicas are up and running.


Step 4: List the Pods to ensure that three replicas exist.

Confirm that the Deployment has successfully launched three Pods.

Command:

kubectl get pods

Explanation:

Each Pod name is the Deployment name plus the ReplicaSet’s hash and a unique per-Pod suffix, which is why all three names start with hello-world-.

Expected Output (example):

You should see three Pods, all with names starting with hello-world- followed by a unique hash, and all in the Running status.

NAME                           READY   STATUS    RESTARTS   AGE
hello-world-774ddf45b5-28k7j   1/1     Running   0          45s
hello-world-774ddf45b5-9cbv2   1/1     Running   0          45s
hello-world-774ddf45b5-svpf7   1/1     Running   0          45s

Step 5: Observe Kubernetes’ Self-Healing: Delete a Pod

This is where the power of declarative management and controllers like Deployments truly shines. When you delete a Pod managed by a Deployment, Kubernetes will automatically create a new one to maintain the desired replica count (which is 3 in our hello-world Deployment).

Action:

  1. Note one of the Pod names from the output of the previous kubectl get pods command (e.g., hello-world-774ddf45b5-28k7j).
  2. Replace <pod_name> in the command below with the actual name you noted.

Command:

kubectl delete pod <pod_name> && kubectl get pods

Example (using a hypothetical pod name):

kubectl delete pod hello-world-774ddf45b5-28k7j && kubectl get pods

Explanation:

The && chains the two commands: the Pod is deleted first, then the Pod list prints immediately, so you can catch the cluster mid-replacement.

Expected Output (example, showing a Pod being terminated and then only two remaining for a brief moment):

pod "hello-world-774ddf45b5-28k7j" deleted
NAME                           READY   STATUS        RESTARTS   AGE
hello-world-774ddf45b5-9cbv2   1/1     Running       0          2m
hello-world-774ddf45b5-svpf7   1/1     Running       0          2m
hello-world-774ddf45b5-28k7j   0/1     Terminating   0          2m

You’ll see one Pod enter the Terminating state, and for a short period, kubectl get pods might show only two Running Pods, plus the one terminating.

(Please wait till the terminal prompt appears again after deletion.)


Step 6: List the Pods to see a new one being created.

Kubernetes will quickly detect that the desired replica count (3) is not met and will spin up a new Pod to replace the one you deleted. You might need to run this command a couple of times to see the new Pod appear and become Running.

Command:

kubectl get pods

Expected Output (example):

After a short while, you will again see three Running Pods. Note that the name of the new Pod will be different from the one you deleted.

NAME                           READY   STATUS    RESTARTS   AGE
hello-world-774ddf45b5-9cbv2   1/1     Running   0          3m
hello-world-774ddf45b5-svpf7   1/1     Running   0          3m
hello-world-774ddf45b5-xyz1a   1/1     Running   0          10s  # This is the newly created Pod

Conclusion:

This exercise illustrates the power of declarative management in Kubernetes. You declared your desired state (three hello-world replicas via a Deployment), and Kubernetes automatically worked to achieve and maintain that state, demonstrating its self-healing capabilities. This is why declarative management is the preferred method for production environments.
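
Once the desired state is declared, changing it is just as simple. You can either edit replicas: in hello-world-apply.yaml and re-apply, or use kubectl scale as an imperative shortcut. (The replica count below is an arbitrary example; these commands assume the lab cluster.)

```shell
# Imperative shortcut: change the desired replica count directly
kubectl scale deployment/hello-world --replicas=5

# Declarative route: edit "replicas:" in the YAML, then re-apply it
kubectl apply -f hello-world-apply.yaml
```

Either way, the Deployment controller converges the cluster to the new desired state.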


Alright, let’s explore how Kubernetes handles load balancing for your application. Since you have a Deployment with three replicas, Kubernetes will automatically distribute incoming requests among these Pods. To observe this, we’ll expose the application using a Kubernetes Service and then access it via a proxy.


Load Balancing the Application

This section demonstrates how Kubernetes provides load balancing across multiple instances of your application by using a Service.

Step 1: Expose your application to the internet using a Kubernetes Service.

You’ll create a Service that targets your hello-world Deployment. By default, kubectl expose creates a ClusterIP Service, which is reachable only from within the cluster.

Command:

kubectl expose deployment/hello-world

Explanation:

kubectl expose creates a Service whose selector matches the Deployment’s Pod labels, giving the three replicas a single, stable in-cluster endpoint.

Expected Output:

service/hello-world exposed
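
For comparison, kubectl expose is shorthand for creating a Service object. A roughly equivalent declarative manifest is sketched below; the port values are assumptions (port 80 matches the PORT(S) column in the next step’s output), and the selector reuses the Deployment’s Pod label.

```yaml
# Hypothetical declarative equivalent of "kubectl expose deployment/hello-world"
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: ClusterIP          # the default Service type
  selector:
    app: hello-world       # routes traffic to Pods carrying this label
  ports:
  - port: 80               # assumed; matches the 80/TCP shown by "kubectl get services"
    targetPort: 80         # assumed container port
```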

Step 2: List Services to see that this service was created.

Verify that your hello-world Service has been created.

Command:

kubectl get services

Explanation:

kubectl get services lists the Services in your namespace, including each Service’s type, cluster IP, external IP (if any), and ports.

Expected Output (example):

You should see your hello-world Service with a ClusterIP type.

NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hello-world   ClusterIP   10.x.x.x      <none>        80/TCP    XsYm
kubernetes    ClusterIP   10.x.x.x      <none>        443/TCP   YdZm

Step 3: Open a new split terminal window.

To run the kubectl proxy command, which blocks the terminal, you’ll need a separate terminal window.

(You will now have two active terminal windows/panes.)


Step 4: Run the Kubernetes API proxy in the new terminal window.

This command creates a proxy that allows you to access your Kubernetes API server and its proxied services from your local machine. This is useful for testing internal services without setting up external ingress or NodePorts.

Command (in the NEW SPLIT TERMINAL):

kubectl proxy

Explanation:

kubectl proxy opens an authenticated tunnel from 127.0.0.1:8001 on your machine to the Kubernetes API server, so you can reach cluster-internal Services through API proxy URLs.

Expected Output (in the new terminal):

Starting to serve on 127.0.0.1:8001

(This command will keep running and won’t return to the prompt until you terminate it. Keep this terminal window open and running the proxy.)


Step 5: Ping the application to get a response (in the original terminal window).

Now, switch back to your original terminal window (where your environment variables are set). You’ll use curl to send a request through the kubectl proxy to your hello-world Service. Remember to substitute $USERNAME with your actual username.

Command (in the ORIGINAL TERMINAL):

curl -L localhost:8001/api/v1/namespaces/sn-labs-$USERNAME/services/hello-world/proxy

Explanation:

curl sends an HTTP request to the local proxy; the -L flag follows any redirects. The URL path routes through the API server to the hello-world Service, which forwards the request to one of its Pods.

Expected Output (example):

You should see a response from your hello-world application, which includes the name of the Pod that handled the request.

Hello from hello-world-774ddf45b5-28k7j

(The specific Pod name will vary, but it will be one of your hello-world Pod replicas.)
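
The long URL is less mysterious than it looks: the API server proxies Services at a fixed path, /api/v1/namespaces/&lt;namespace&gt;/services/&lt;service&gt;/proxy. A small shell sketch that composes it (user123 is a stand-in for your lab username):

```shell
# Compose the API-server proxy URL for a Service from its namespace and name
USERNAME="user123"   # stand-in; in the lab this is your own username
NAMESPACE="sn-labs-${USERNAME}"
SERVICE="hello-world"
URL="localhost:8001/api/v1/namespaces/${NAMESPACE}/services/${SERVICE}/proxy"
echo "$URL"
# prints: localhost:8001/api/v1/namespaces/sn-labs-user123/services/hello-world/proxy
```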


Step 6: Send ten consecutive requests to observe load balancing.

Now, let’s send multiple requests quickly to see how Kubernetes distributes them across your three hello-world Pods.

Command (in the ORIGINAL TERMINAL):

for i in `seq 10`; do curl -L localhost:8001/api/v1/namespaces/sn-labs-$USERNAME/services/hello-world/proxy; done

Explanation:

The shell for loop runs the same curl request ten times in quick succession; seq 10 generates the loop counter values 1 through 10.

Expected Output (example):

You will see 10 lines of output. As you review them, you should notice that the “Hello from…” message includes different Pod names, demonstrating that Kubernetes is indeed load balancing requests across your three hello-world Pod replicas.

Hello from hello-world-774ddf45b5-28k7j
Hello from hello-world-774ddf45b5-9cbv2
Hello from hello-world-774ddf45b5-svpf7
Hello from hello-world-774ddf45b5-28k7j
Hello from hello-world-774ddf45b5-svpf7
Hello from hello-world-774ddf45b5-9cbv2
Hello from hello-world-774ddf45b5-28k7j
Hello from hello-world-774ddf45b5-9cbv2
Hello from hello-world-774ddf45b5-svpf7
Hello from hello-world-774ddf45b5-28k7j

(The order and distribution of Pod names will vary.)
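
To see the distribution at a glance, you can pipe the loop’s output through sort and uniq -c. The sketch below uses canned sample lines so it runs anywhere; against the live cluster, replace the printf with the curl loop above:

```shell
# Count how many requests each Pod answered (sample data stands in for the curl output)
printf '%s\n' \
  "Hello from hello-world-774ddf45b5-28k7j" \
  "Hello from hello-world-774ddf45b5-9cbv2" \
  "Hello from hello-world-774ddf45b5-svpf7" \
  "Hello from hello-world-774ddf45b5-28k7j" \
| sort | uniq -c | sort -rn
# highest count first; in this sample, 28k7j answered twice
```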


Step 7: Delete the Deployment and Service.

It’s good practice to clean up resources after you’re done with them. You can delete multiple resources with a single kubectl delete command by separating them with spaces.

Command (in the ORIGINAL TERMINAL):

kubectl delete deployment/hello-world service/hello-world

Explanation:

kubectl delete accepts multiple resources in one invocation; listing deployment/hello-world and service/hello-world separated by a space removes both.

Expected Output:

deployment.apps "hello-world" deleted
service "hello-world" deleted

(Note: If you face any issues in typing further commands in the terminal, pressing Enter might help clear the line.)


Step 8: Return to the proxy terminal and kill it.

Go back to the new split terminal window where kubectl proxy is running. You need to terminate this process.

Action: Press Ctrl+C in the terminal window running kubectl proxy.

Expected Output (in the proxy terminal):

The proxy will stop, and the terminal prompt will return.

Starting to serve on 127.0.0.1:8001
^C

Congratulations! You have successfully completed the lab for the second module of this course!

You’ve learned how Kubernetes services provide stable network access and automatic load balancing to your replicated applications. This concludes the practical exercises for this module.


Here is a structured summary highlighting the key takeaways from this module:


Summary & Highlights: Kubernetes Basics

This module has provided a solid foundation in understanding the core components, objects, and capabilities of Kubernetes, along with practical experience in deploying and managing applications.

1. The “Why” of Container Orchestration

Manually managing tens, hundreds, or thousands of containers is impractical. Container orchestration automates the container lifecycle (deployment, scaling, networking, and availability), leading to faster deployments, fewer errors, and higher availability.

2. What is Kubernetes?

Kubernetes is the de facto open-source standard for container orchestration, originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF).

3. Kubernetes Architecture: The Brains and the Brawn

The control plane (API server, etcd, scheduler, and controller managers) decides what should run where, while the data (worker) plane (Nodes running the kubelet, kube-proxy, and a container runtime) supplies the CPU, memory, network, and storage your workloads consume.

4. Key Kubernetes Objects: The Building Blocks

You’ve learned about fundamental Kubernetes API objects used to define and manage your applications: Pods, ReplicaSets, Deployments, Services, and Namespaces.

5. Kubernetes Capabilities: Why It’s Powerful

Kubernetes offers a rich set of features that make it a robust orchestration platform: self-healing, load balancing and service discovery, storage orchestration, automated bin packing, batch execution, and extensibility.

6. Service Types: Exposing Your Applications

Different Service types cater to various communication needs: ClusterIP for in-cluster access, NodePort for access through a port on each Node, LoadBalancer for external traffic via a cloud load balancer, and ExternalName for mapping a Service to an external DNS name.

7. Advanced Controllers for Specific Use Cases

You also explored specialized workload controllers: DaemonSets for running a Pod on every Node, StatefulSets for stateful applications, and Jobs for batch tasks that run to completion.
This comprehensive overview confirms your strong understanding of Kubernetes fundamentals, preparing you for more advanced topics and real-world deployments!


Here is your cheat sheet summarizing the kubectl commands you’ve learned for understanding Kubernetes architecture and managing resources:


Cheat Sheet: Understanding Kubernetes Architecture & kubectl Commands

This cheat sheet provides a quick reference for essential kubectl commands used to interact with your Kubernetes cluster and its various objects.

| Command | Description |
| --- | --- |
| for ... do ... done | Runs a command multiple times as specified in a loop. (A general shell construct, useful with kubectl.) |
| kubectl apply -f <file.yaml> | Applies a configuration to a resource. Creates the resource if it doesn’t exist, and updates it if it does. This is the declarative way. |
| kubectl config get-clusters | Displays the names of clusters defined in your kubeconfig file. |
| kubectl config get-contexts | Displays all contexts defined in your kubeconfig file, indicating the current active context. |
| kubectl create -f <file.yaml> | Creates a resource by explicitly telling Kubernetes to perform the “create” action based on a configuration file. (Imperative object config.) |
| kubectl delete <resource-type>/<name> | Deletes resources from the cluster. Can delete multiple resources by separating them with spaces. |
| kubectl describe <resource-type> <name> | Shows detailed information about a specific resource or group of resources, including events and status. |
| kubectl expose <resource-type>/<name> | Exposes a resource (like a Deployment) to the network by creating a Kubernetes Service. |
| kubectl get <resource-type> | Displays resources of a specific type (e.g., pods, deployments, services). |
| kubectl get pods | Lists all Pods in the current namespace. |
| kubectl get pods -o wide | Lists all Pods with additional details, such as Node name and IP address. |
| kubectl get deployments | Lists all Deployments created in the current namespace. |
| kubectl get services | Lists all Services created in the current namespace. |
| kubectl proxy | Creates a local proxy server between your localhost and the Kubernetes API server, allowing internal cluster services to be accessed via localhost:8001. |
| kubectl run <name> --image <image> | Creates and runs a particular image in a Pod. This is an imperative command. |
| kubectl version | Prints the client (kubectl) and server (API server) version information. |

The glossary below provides quick-reference definitions for the basic Kubernetes terms covered in this module:


Glossary: Kubernetes Basics

| Term | Definition |
| --- | --- |
| Automated bin packing | Increases resource utilization and cost savings by efficiently scheduling a mix of critical and best-effort workloads onto cluster nodes. |
| Batch execution | Manages finite or batch tasks, including continuous integration workloads. Jobs are designed to run to completion and automatically replace failed containers if configured to do so. |
| Cloud Controller Manager | A Kubernetes control plane component that embeds cloud-specific control logic. It links your cluster into your cloud provider’s API, separating interactions with the cloud platform from components that only interact with your cluster. |
| Cluster | A set of worker machines, called Nodes, that run containerized applications. Every cluster has at least one worker Node. |
| Container Orchestration | A process that automates the container lifecycle of containerized applications, leading to faster deployments, reduced errors, higher availability, and more robust security. |
| Container Runtime | The software responsible for running containers (e.g., containerd, CRI-O, Docker). |
| Control Loop | A non-terminating feedback loop that regulates the state of a system. In Kubernetes, controllers are control loops. |
| Control Plane | The orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers within the cluster. It’s the “brain” of the Kubernetes cluster. |
| Controller | In Kubernetes, controllers are control loops that continuously watch the state of your cluster via the API Server, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state defined by your configurations. |
| Data (Worker) Plane | The layer that provides capacity such as CPU, memory, network, and storage so that the containers can run and connect to a network. This is where your actual application workloads run. |
| DaemonSet | An object that ensures a copy of a specific Pod is running across all (or a defined subset of) Nodes in a cluster. Useful for system-level services. |
| Declarative Management | A management approach where you express the desired state of your system (e.g., the number of replicas for an application), and Kubernetes actively works to ensure that the observed (current) state matches this desired state. You declare what you want, not how to achieve it. |
| Deployment | A higher-level object that provides declarative updates for Pods and ReplicaSets. Deployments run multiple replicas of an application by creating ReplicaSets and offer additional management capabilities (like rolling updates and rollbacks) on top of those ReplicaSets. They are generally suitable for stateless applications. |
| Designed for extensibility | A core feature of Kubernetes that allows adding new features and functionalities to your cluster without needing to modify the Kubernetes source code itself, often through custom resources and controllers. |
| Docker Swarm | A container orchestration tool designed specifically to work with Docker Engine and other Docker tools, making it a popular choice for teams already working in Docker environments. It automates the deployment of containerized applications. |
| Ecosystem | In the context of Kubernetes, this refers to the vast and rapidly growing composition of services, support, and tools that are widely available and integrate with Kubernetes. |
| etcd | A highly available, distributed key-value store that serves as Kubernetes’ backing store for all cluster data. It is the single source of truth for the desired state and current state of a Kubernetes cluster. |
| Eviction | The process of terminating one or more Pods on Nodes, often initiated by the Kubelet when a Node experiences resource pressure or when a higher-priority Pod needs to be scheduled. |
| Imperative commands | Commands that explicitly create, update, or delete live objects directly on the command line (e.g., kubectl run, kubectl delete pod). |
| Imperative Management | A management approach where you define explicit steps and actions (create, delete, update) to get to a desired state. |
| Ingress | An API object that manages external access to the services in a cluster, typically HTTP and HTTPS traffic. It provides routing rules, SSL/TLS termination, and name-based virtual hosting. (Requires an Ingress Controller to function.) |
| IPv4/IPv6 dual stack | A Kubernetes networking capability that assigns both IPv4 and IPv6 addresses to Pods and Services, enabling dual-protocol communication. |
| Job | A Kubernetes object that creates one or more Pods and ensures that a specified number of them successfully complete their tasks. Jobs are designed for finite or batch tasks and will retry Pods until completion. |
| Kubectl | Also known as kubectl, it is the command-line tool for communicating with a Kubernetes cluster’s control plane, using the Kubernetes API. |
| Kubelet | The primary “node agent” that runs on each Node. The Kubelet takes a set of PodSpecs (a YAML or JSON object that describes a Pod) provided primarily through the API Server and ensures that the containers described in those PodSpecs are running and healthy. The Kubelet does not manage containers that were not created by Kubernetes. |
| Kubernetes | The de facto open-source platform standard for container orchestration. Developed by Google and maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes automates container management tasks, like deployment, storage provisioning, load balancing and scaling, service discovery, and fixing failed containers. Its open-source toolset and wide array of functionalities are very attractive to leading cloud providers. |
| Kubernetes API | The application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster in etcd. All internal and external interactions with the cluster go through the API. |
| Kubernetes API Server | The central component of the Kubernetes Control Plane. It validates and configures data for API objects (Pods, Services, Deployments, etc.), services REST operations, and provides the frontend to the cluster’s shared state through which all other components interact. |
| Kubernetes Controller Manager | A Control Plane component that runs all the core controller processes (e.g., Replication Controller, Endpoints Controller, Namespace Controller, Service Accounts Controller) that monitor the cluster state and ensure that the actual state of a cluster matches the desired state. |
| Kubernetes Proxy (kube-proxy) | A network proxy that runs on each Node in a cluster. This proxy maintains network rules on nodes, enabling network communication to Pods running on those nodes. It facilitates access to workloads running on the cluster. |
| kube-scheduler | A Control Plane component that watches for newly created Pods with no assigned Node, and selects a suitable Node for them to run on based on various constraints and available resources. |
| Label Selector | A mechanism that allows users to filter a list of resources (like Pods) based on their assigned Labels. |
| Labels | Key-value pairs that are attached to Kubernetes objects (like Pods, Deployments, Services). They are used to tag objects with identifying attributes that are meaningful and relevant to users, and are crucial for organizing and selecting resources. |
| Load balancing | The process of distributing network traffic across multiple Pods (or other backend instances) to ensure better performance, high availability, and efficient resource utilization. |
| Marathon | An Apache Mesos framework. Apache Mesos is an open-source cluster manager that allows users to scale container infrastructure through the automation of most management and monitoring tasks. Marathon is a system for orchestrating long-running services and batch jobs on Mesos. |
| Namespace | An abstraction provided by Kubernetes to support isolation of groups of resources within a single cluster. Namespaces are used for logical separation, resource quotas, and access control. |
| Node | The worker machine in a Kubernetes cluster. User applications are run on Nodes. Nodes can be virtual or physical machines, and each Node is managed by the Control Plane and is able to run Pods. |
| Nomad (HashiCorp) | A free and open-source cluster management and scheduling tool that supports Docker and other applications on all major operating systems across all infrastructure, whether on-premises or in the cloud. It offers flexibility for managing various types and levels of workloads. |
| Object | An entity in the Kubernetes system. The Kubernetes API uses these entities (e.g., Pods, Deployments, Services) to represent the desired state and current state of your cluster. |
| Persistence | In Kubernetes, ensures that an object or its associated data exists in the system (e.g., on disk or in etcd) and survives Pod restarts or Node failures, until the object is explicitly modified or removed. |
| Pod | The smallest and simplest Kubernetes object. It represents a single instance of an application process running in a cluster. A Pod usually encapsulates a single container but can, in some cases, encapsulate multiple tightly coupled containers (sidecars) that share resources, network, and storage. |
| Preemption | A scheduling mechanism in Kubernetes where the scheduler helps a pending (un-scheduled) Pod find a suitable Node by evicting one or more lower-priority Pods already existing on that Node, if necessary. |
| Proxy | In computing, a server that acts as an intermediary for a remote service, forwarding requests and responses. In Kubernetes, kube-proxy and kubectl proxy are examples. |
| ReplicaSet | A Kubernetes object that aims to maintain a specified set of replica Pods running at any given time. It ensures the availability of a fixed number of identical Pods. (Often managed indirectly by Deployments.) |
| Self-healing | A core Kubernetes capability where it automatically detects and remedies issues, such as restarting failed containers, replacing unresponsive Pods, rescheduling Pods from failed Nodes, and killing containers that don’t respond to health checks. |
| Service | An abstract way to expose an application running on a set of Pods as a network service. It provides a stable IP address and DNS name, acting as a load balancer for traffic directed to the underlying Pods. |
| Service Discovery | The process by which applications running within Kubernetes can find and communicate with other services or Pods, typically using their stable IP addresses or a single DNS name provided by Services. |
| StatefulSet | A workload API object that manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods (e.g., stable network identities, stable persistent storage). It is used for stateful applications. |
| Storage | A data store that supports persistent and temporary storage for Pods, enabling applications to store and retrieve data beyond the lifecycle of individual Pods. |
| Storage Orchestration | A Kubernetes capability that automatically mounts your chosen storage system into Pods, whether it’s local storage, network-attached storage (like NFS, iSCSI), or cloud-provider specific storage solutions. |
| Workload | In Kubernetes, a workload refers to an application (or a component of an application) running on the cluster. Kubernetes provides various workload resources (like Deployments, StatefulSets, DaemonSets, Jobs) to manage different types of workloads. |
