Kubernetes Core Objects Complete Tutorial: Master Pod, Deployment & Service

To learn Kubernetes, the most important thing is understanding "objects."
Pod, Deployment, Service... these terms are easy to confuse at first. But once you understand their respective roles, Kubernetes becomes much clearer.
This article will walk you through the most important core objects in Kubernetes.
For a basic introduction to Kubernetes, see Kubernetes Complete Guide. For architecture explanation, see Kubernetes Architecture Complete Guide.
Pod Complete Analysis
Pod is the most basic deployment unit in Kubernetes.
What is a Pod
One-sentence explanation: A Pod is a combination of one or more containers, the smallest unit Kubernetes schedules.
Key understanding:
- You don't directly deploy containers—you deploy Pods
- Containers in a Pod share network and storage
- Pods are ephemeral and can be deleted and recreated at any time
| Feature | Description |
|---|---|
| Smallest unit | Kubernetes doesn't manage individual containers, it manages Pods |
| Shared network | Containers in same Pod communicate via localhost |
| Shared storage | Can mount the same Volume |
| Ephemeral | When a Pod dies, it's gone |
Single-Container vs Multi-Container Pod
Single-container Pod (most common):
Only one container in a Pod. This is the case 90%+ of the time.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
```
Multi-container Pod:
Used only when multiple containers need to work closely together.
| Pattern | Description | Example |
|---|---|---|
| Sidecar | Assists main container | Log collection, proxy |
| Ambassador | Proxy external connections | Database proxy |
| Adapter | Standardize output | Metrics conversion |
Sidecar example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-collector
    image: fluentd:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
```
Main container writes logs, Sidecar container collects logs. Both containers share the same Volume.
Pod Lifecycle
Pods have clear lifecycle states:
| State | Description |
|---|---|
| Pending | Accepted, but containers not ready |
| Running | At least one container is running |
| Succeeded | All containers completed successfully |
| Failed | All containers have terminated, and at least one terminated in failure |
| Unknown | Cannot get Pod status |
Lifecycle flow:
```
Create Pod
  ↓
Pending   (scheduling, pulling images)
  ↓
Running   (containers executing)
  ↓
Succeeded / Failed / Deleted
```
Important concept:
Pods don't "repair" themselves. When a Pod dies, it stays dead; a replacement Pod is only created if a higher-level controller such as a Deployment manages it.
Complete Pod YAML Example
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
    environment: production
spec:
  containers:
  - name: app
    image: my-app:1.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: url
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
  restartPolicy: Always
```
Important field descriptions:
| Field | Purpose |
|---|---|
| labels | Labels used for selection and grouping |
| resources | Resource requests and limits |
| env | Environment variables |
| livenessProbe | Liveness check; if it fails, the container is restarted |
| readinessProbe | Readiness check; until it passes, the Pod receives no Service traffic |
| restartPolicy | Restart policy |
Deployment Deep Dive
In practice, you almost never create Pods directly. You create Deployments.
Why Deployment
Problems with creating Pods directly:
| Problem | Description |
|---|---|
| No auto-restart | Pod dies, it's gone |
| No scaling mechanism | Can't easily increase/decrease count |
| Updates are troublesome | Must manually delete old, create new |
| No version control | Can't rollback |
Deployment solves these problems:
| Feature | Description |
|---|---|
| Auto-maintain count | One dies, one is created |
| Declarative scaling | Just change a number |
| Rolling updates | Zero-downtime updates |
| Version history | Can quickly rollback |
Role of ReplicaSet
Deployment doesn't directly manage Pods—it manages ReplicaSets, and ReplicaSets manage Pods.
```
Deployment
  ↓ manages
ReplicaSet
  ↓ manages
Pod, Pod, Pod
```
Why this design?
During an update, the Deployment creates a new ReplicaSet and gradually scales it up while scaling the old one down. The old ReplicaSet is kept (at zero replicas) so a rollback is quick.
Deployment YAML Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```
Key fields:
| Field | Description |
|---|---|
| replicas | How many Pods to maintain |
| selector | Which labels select the Pods |
| template | Pod template |
| strategy | Update strategy |
Rolling Update Mechanism
When you update a Deployment (e.g., change the image):
```shell
kubectl set image deployment/my-app app=my-app:2.0
```
Rolling update process:
```
Initial state: 3 v1 Pods

Step 1: Create 1 v2 Pod
  v1: ●●●   v2: ●
Step 2: After v2 is Ready, delete 1 v1
  v1: ●●    v2: ●
Step 3: Create 2nd v2
  v1: ●●    v2: ●●
Step 4: Delete 1 v1
  v1: ●     v2: ●●
Step 5: Create 3rd v2
  v1: ●     v2: ●●●
Step 6: Delete last v1
  v1:       v2: ●●●
Complete!
```
Strategy parameters:
| Parameter | Description | Example Value |
|---|---|---|
| maxSurge | Max extra Pods allowed above replicas | 1 or 25% |
| maxUnavailable | Max Pods allowed to be unavailable | 0 or 25% |
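RollingUpdate is the default strategy; Kubernetes also supports Recreate, which deletes all old Pods before creating new ones. That means brief downtime, but the two versions never run side by side. A minimal sketch:

```yaml
spec:
  strategy:
    type: Recreate   # delete all old Pods first, then create the new ones
```

Recreate suits applications that cannot tolerate two versions at once, e.g. a single-writer database volume.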
Rollback Operations
Update has problems? Rollback quickly.
```shell
# View update history
kubectl rollout history deployment/my-app

# Roll back to the previous version
kubectl rollout undo deployment/my-app

# Roll back to a specific revision
kubectl rollout undo deployment/my-app --to-revision=2

# Watch rollout status
kubectl rollout status deployment/my-app
```
Service Network Access
We have Pods, but how does the outside world access them? That's Service's job.
Necessity of Service
Problem: Pod IPs change.
- Pod IP changes after restart
- Pod IP changes when rescheduled to another Node
- New Pods have new IPs when scaling
Service provides stable access points:
| Feature | Description |
|---|---|
| Fixed IP | Service's ClusterIP doesn't change |
| DNS name | Access by name, e.g. my-app-service (full form: my-app-service.<namespace>.svc.cluster.local) |
| Load balancing | Auto-distribute traffic to backend Pods |
ClusterIP, NodePort, LoadBalancer
Kubernetes provides three Service types:
| Type | Access Method | Use Case |
|---|---|---|
| ClusterIP | Cluster internal IP | Internal services (default) |
| NodePort | Node IP + Port | Testing, development |
| LoadBalancer | External load balancer | Production external services |
ClusterIP example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```
Only accessible from inside the cluster: http://my-app-service:80
NodePort example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
```
Accessible from outside the cluster at http://<node-ip>:30080 (nodePort values must fall within the default range 30000-32767).
LoadBalancer example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```
The cloud provider automatically creates a load balancer and assigns an external IP.
Service and Endpoints
How does Service know which Pods to send traffic to?
Through Label Selector:
```yaml
# Service
spec:
  selector:
    app: my-app   # Select Pods with this label
```
Endpoints auto-maintained:
Kubernetes automatically creates and maintains Endpoint objects, recording Pod IPs that match the selector.
```shell
# View Endpoints
kubectl get endpoints my-app-service
```
Output:
```
NAME             ENDPOINTS                                     AGE
my-app-service   10.1.0.5:8080,10.1.0.6:8080,10.1.0.7:8080     1h
```
Traffic path:
```
Client → Service IP → kube-proxy → Pod IP
```
ConfigMap and Secret
Applications need configuration. Hardcoding config in containers isn't a good idea.
Externalizing Configuration
Why externalize configuration?
| Problem | Description |
|---|---|
| Environment differences | Dev, test, prod have different settings |
| Security | Passwords shouldn't be in code |
| Flexibility | Change config without rebuilding image |
Kubernetes solution:
| Object | Purpose |
|---|---|
| ConfigMap | General settings (non-sensitive) |
| Secret | Sensitive information (passwords, keys) |
ConfigMap Usage
Create ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "db.example.com"
  DATABASE_PORT: "5432"
  LOG_LEVEL: "info"
  config.json: |
    {
      "feature_flags": {
        "new_ui": true
      }
    }
```
Usage Method 1: Environment Variables
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: DATABASE_HOST
    # Or load all keys at once
    envFrom:
    - configMapRef:
        name: app-config
```
Usage Method 2: Mount as Files
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
```
ConfigMap contents become files under /etc/config/.
Secret Usage
Create Secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=     # base64-encoded "admin"
  password: cGFzc3dvcmQ= # base64-encoded "password"
```
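The base64 values in the manifest above can be produced and checked with standard shell tools; note that base64 is a reversible encoding, not encryption:

```shell
# Encode values for the Secret manifest
# (-n keeps a trailing newline out of the encoded value)
echo -n 'admin' | base64       # YWRtaW4=
echo -n 'password' | base64    # cGFzc3dvcmQ=

# Decode to verify: anyone who can read the Secret can do this
echo 'YWRtaW4=' | base64 --decode    # admin
```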
Create with kubectl (more convenient):
```shell
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=password
```
Usage:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```
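As with a ConfigMap, a Secret can also be mounted as files rather than environment variables; each key becomes a file under the mount path. A sketch reusing the db-credentials Secret above (the mount path is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: creds
      mountPath: /etc/creds   # keys appear as /etc/creds/username, /etc/creds/password
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials
```

Unlike environment variables, file mounts are updated automatically (after a delay) when the Secret changes.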
Secret Security
Important reminders:
| Fact | Description |
|---|---|
| Base64 is not encryption | Anyone can decode |
| etcd not encrypted by default | Secrets are stored only base64-encoded, effectively plaintext |
| Extra measures needed | Enable etcd encryption, use external tools |
Ways to enhance security:
| Method | Description |
|---|---|
| etcd encryption | Enable encryption at rest |
| RBAC | Restrict who can read Secrets |
| External solutions | HashiCorp Vault, AWS Secrets Manager |
| Sealed Secrets | Safe to store in Git |
Other Important Objects
Besides the core objects above, there are several commonly used objects to know.
Namespace
Purpose: Logically isolate resources
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
```
Use cases:
| Scenario | Example |
|---|---|
| Environment isolation | dev, staging, production |
| Team isolation | team-a, team-b |
| Project isolation | project-x, project-y |
Common commands:
```shell
# List all namespaces
kubectl get namespaces

# Operate in a specific namespace
kubectl get pods -n production

# Set the default namespace for the current context
kubectl config set-context --current --namespace=production
```
PersistentVolume and PVC
Pods are ephemeral, but some data needs persistent storage.
PersistentVolume (PV): Storage space provided by cluster administrator
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/my-pv
```
PersistentVolumeClaim (PVC): User request for storage
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
```
Use in Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
```
Ingress
Purpose: HTTP/HTTPS routing, more flexible than LoadBalancer
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```
Advantages:
| Feature | Description |
|---|---|
| Domain routing | Different domains to different services |
| Path routing | /api and /web to different services |
| TLS termination | Unified HTTPS handling |
| Cost | Multiple services share one LB |
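The TLS termination row above is configured with a tls section in the Ingress. A sketch, assuming the certificate and key were stored beforehand in a kubernetes.io/tls Secret named api-tls (the Secret name is an assumption for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-tls
spec:
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls   # kubernetes.io/tls Secret holding the cert and key
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```

The Ingress controller terminates HTTPS and forwards plain HTTP to the backend Service.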
For more detailed network explanation, see Kubernetes Network Architecture Guide.
DaemonSet, StatefulSet, Job
DaemonSet: Run one Pod on every Node
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd:latest
```
Use cases: Log collection, monitoring agents, network plugins
StatefulSet: Stateful applications
| Feature | Description |
|---|---|
| Stable network ID | pod-0, pod-1, pod-2 |
| Stable storage | Each Pod has its own PVC |
| Ordered deployment | Start and stop in order |
Use cases: Databases, distributed systems (Kafka, Elasticsearch)
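A minimal StatefulSet sketch (image and names are illustrative, not from this article), showing the two fields that distinguish it from a Deployment: serviceName, the headless Service that gives Pods stable DNS identities, and volumeClaimTemplates, which gives each Pod its own PVC:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless   # headless Service providing db-0, db-1, ... DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each Pod gets its own PVC: data-db-0, data-db-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```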
Job / CronJob: One-time or scheduled tasks
```yaml
# Job: run once
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-job
spec:
  template:
    spec:
      containers:
      - name: backup
        image: backup-tool:1.0
      restartPolicy: Never
---
# CronJob: run on a schedule
apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-backup
spec:
  schedule: "0 2 * * *"  # 2 AM daily
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup-tool:1.0
          restartPolicy: Never
```
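Two optional Job fields worth knowing that the example above omits (the values here are illustrative): backoffLimit caps how many times a failing Pod is retried, and activeDeadlineSeconds bounds the Job's total runtime:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-job
spec:
  backoffLimit: 3             # retry a failing Pod at most 3 times
  activeDeadlineSeconds: 600  # terminate the Job if it runs longer than 10 minutes
  template:
    spec:
      containers:
      - name: backup
        image: backup-tool:1.0
      restartPolicy: Never
```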
FAQ: Common Questions
Q1: What's the difference between Pod and container?
Container is the actual running program. Pod is Kubernetes' scheduling unit, containing one or more containers.
Think of Pod as a "shell" for containers, providing shared network and storage.
Q2: When to use multi-container Pods?
Only when containers need to work closely together.
Criteria:
- Must these containers run together?
- Do they need to share filesystem?
- Is their lifecycle the same?
If all answers are "yes," use multi-container Pod. Otherwise use multiple single-container Pods.
Q3: What's the difference between Deployment and ReplicaSet?
ReplicaSet only maintains Pod count. Deployment adds rolling updates, rollback, and other features on top of ReplicaSet.
In practice, you'll almost always use Deployment directly.
Q4: ClusterIP, NodePort, LoadBalancer—how to choose?
| Scenario | Choice |
|---|---|
| Internal service | ClusterIP |
| Development testing | NodePort |
| Production external | LoadBalancer or Ingress |
Q5: If ConfigMap changes, does Pod auto-update?
Depends on usage method:
| Method | Auto-update |
|---|---|
| Environment variables | ❌ Pod must be restarted |
| Volume mount | ✅ Updates automatically, with a delay (not when mounted via subPath) |
If you use environment variables, restart the Pods after changing the ConfigMap, e.g. with kubectl rollout restart deployment/my-app.
Next Steps
After learning these core objects, you can:
| Goal | Action |
|---|---|
| Hands-on practice | Read Kubernetes Getting Started Tutorial |
| Deep dive networking | Read Kubernetes Network Architecture Guide |
| Understand architecture | Read Kubernetes Architecture Complete Guide |
| Choose tools | Read Kubernetes Tool Ecosystem Guide |
Further Reading
- Kubernetes Complete Guide - K8s introduction overview
- Kubernetes Architecture Complete Guide - Control Plane and Nodes
- Kubernetes Getting Started Tutorial - Practical step-by-step guide
- Kubernetes Network Architecture Guide - Service, Ingress explained
- Kubernetes Tool Ecosystem Guide - Helm, monitoring, CI/CD