Pods
In AKS you don’t deploy containers directly, as you would in a simpler container platform such as Azure Container Instances or Docker. Instead, you deploy pods, which are configured to host containers that share the same IP address, port space, and storage, allowing them to communicate easily. In essence, a pod is a Kubernetes abstraction representing a group of one or more application containers together with some shared resources.
Definition of Pod in AKS
Where a container is an application and its runtime dependencies packaged together as a single instance, a pod is the smallest deployable unit in Kubernetes and can contain one or more containers. In AKS, a pod represents a single instance of your application and by default has a 1:1 mapping to a container.
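This default 1:1 mapping can be sketched as a minimal pod manifest. The names and registry path below are placeholders, not values from a real cluster:

```yaml
# Illustrative single-container pod: one pod, one container (1:1 mapping).
apiVersion: v1
kind: Pod
metadata:
  name: webpage-frontend
  labels:
    app: webpage-frontend
spec:
  containers:
  - name: webpage-frontend
    # Hypothetical image path in an Azure Container Registry
    image: myregistry.azurecr.io/webpage-frontend:v1
    ports:
    - containerPort: 80
```

In practice you rarely create bare pods like this; a Deployment usually manages them for you, as shown later in this article.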
Pod vs Container
While a container encapsulates an application and its runtime dependencies, a pod can hold one or more such containers, serving as the smallest deployable units in Kubernetes.
Characteristics of Pods
- Ephemeral Nature: Pods can be created, terminated, and replaced as required, showcasing their transient existence.
- Unique IP Address: Each pod is allocated a unique IP address, shared by its containers, facilitating network segmentation and service discovery.
- Storage Volumes: Pods can specify storage volumes shared among their containers, allowing for data persistence and sharing.
- Communication within Pod: Containers within the same pod share the same network namespace, ensuring smooth inter-container communication through localhost.
- Interaction with kubectl: Pods can be created, viewed, and deleted directly using kubectl, the command-line interface for Kubernetes.
Advanced Configurations
In advanced setups you can schedule multiple containers to run in the same pod. Containers in the same pod are always scheduled together, run in the same context, and can easily communicate because they share the same network namespace. A common use case for multi-container pods is when one container relies on another for a specific feature, such as a main application container and a helper container. These related containers can be deployed within the same pod.
Multi-Container Pods: Pods can host multiple containers, which are always scheduled together and operate in the same context, ideal for related containers with interdependencies.
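A multi-container pod of this kind might look like the following sketch: a main application container plus a helper container sharing a volume. All names, images, and paths are illustrative assumptions:

```yaml
# Illustrative multi-container pod: main app plus a helper (sidecar) container
# sharing an emptyDir volume for log files.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper
spec:
  containers:
  - name: main-app
    image: myregistry.azurecr.io/main-app:v1   # hypothetical image
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app                  # main-app writes logs here
  - name: log-helper
    image: myregistry.azurecr.io/log-helper:v1 # hypothetical helper that ships the logs
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  volumes:
  - name: shared-logs
    emptyDir: {}                               # shared, pod-lifetime scratch storage
```

Because both containers share the pod’s network namespace, the helper could also reach the main app over localhost rather than a shared volume, depending on the feature it provides.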
Resource Management: Proper resource allocation is crucial to ensure efficient distribution of resources among pods and to prevent any single pod from monopolizing available resources.
Resource Limits: It is equally important to set resource limits to ensure fair resource allocation, prevent resource contention, and maintain the stability and performance of applications running in the cluster.
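Requests and limits are declared per container in the pod spec. The values below are illustrative starting points, not recommendations:

```yaml
# Illustrative container spec with resource requests (used for scheduling)
# and limits (enforced caps).
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: myregistry.azurecr.io/app:v1  # hypothetical image
    resources:
      requests:
        cpu: 100m       # guaranteed minimum considered by the scheduler
        memory: 128Mi
      limits:
        cpu: 250m       # throttled above this
        memory: 256Mi   # exceeding this can get the container OOM-killed
```

The scheduler places the pod based on its requests, while the limits cap what the running containers can consume.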
The output of a kubectl command listing the pods in a cluster is shown below. The webpage-frontend is a single container hosted in an Azure Container Registry; its deployment file specified 2 replicas, which created two pod instances.
PS C:\> kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
webpage-frontend-d67b95c64-cj5gs   1/1     Running   0          22h
webpage-frontend-d67b95c64-hdjhw   1/1     Running   0          22h
Best Practice: Single Process per Container
When working with Azure Kubernetes Service (AKS), follow the “single process per container” principle: each container should ideally run a single application process. If multiple related application processes need to be co-scheduled, decouple them into separate containers within the same pod rather than running them as separate processes in a single container.
Here are the key benefits:
- Isolation and Modularity: Separate containers provide better isolation and modularity, preventing issues in one process from affecting others.
- Scalability and Resource Management: Separate containers allow independent scaling based on specific resource requirements, improving resource allocation efficiency.
- Ease of Deployment and Management: Managing separate containers is easier than managing multiple processes within a single container.
- Service Discovery and Load Balancing: Separate containers simplify service discovery and load balancing mechanisms.
- Flexibility and Compatibility: Decoupling processes into separate containers provides flexibility in choosing container images, configurations, and environments.
To implement this practice, analyze your application architecture, identify related processes, and design container images, deployment configurations, and inter-container communication mechanisms accordingly. Leverage Kubernetes concepts like Deployments, Services, and Pod-to-Pod communication for seamless interaction between containers.
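As a sketch of how these pieces fit together, a hypothetical Deployment could manage two replicas of a single-container pod, with a Service in front of it for discovery and load balancing. All names, images, and ports are illustrative:

```yaml
# Illustrative Deployment: two replicas of a single-container pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpage-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpage-frontend
  template:
    metadata:
      labels:
        app: webpage-frontend
    spec:
      containers:
      - name: webpage-frontend
        image: myregistry.azurecr.io/webpage-frontend:v1  # hypothetical image
        ports:
        - containerPort: 80
---
# Service providing a stable address and load balancing across the replicas.
apiVersion: v1
kind: Service
metadata:
  name: webpage-frontend
spec:
  selector:
    app: webpage-frontend
  ports:
  - port: 80
    targetPort: 80
```

The Deployment keeps the desired number of pod replicas running, while the Service gives other workloads a single stable name to reach them by, regardless of which pods come and go.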