
Kubernetes Core Objects Complete Tutorial: Master Pod, Deployment & Service

13 min read
#Kubernetes #K8s #Pod #Deployment #Service #ConfigMap #Secret #CoreObjects


To learn Kubernetes, the most important thing is understanding "objects."

Pod, Deployment, Service... these terms are easy to confuse at first. But once you understand their respective roles, Kubernetes becomes much clearer.

This article will walk you through the most important core objects in Kubernetes.

For a basic introduction to Kubernetes, see Kubernetes Complete Guide. For architecture explanation, see Kubernetes Architecture Complete Guide.


Pod Complete Analysis

Pod is the most basic deployment unit in Kubernetes.

What is a Pod

One-sentence explanation: A Pod is a combination of one or more containers, the smallest unit Kubernetes schedules.

Key understanding:

  • You don't directly deploy containers—you deploy Pods
  • Containers in a Pod share network and storage
  • Pods are ephemeral and can be deleted and recreated at any time
| Feature | Description |
|---|---|
| Smallest unit | Kubernetes doesn't manage individual containers; it manages Pods |
| Shared network | Containers in the same Pod communicate via localhost |
| Shared storage | Containers can mount the same Volume |
| Ephemeral | When a Pod dies, it's gone |

Single-Container vs Multi-Container Pod

Single-container Pod (most common):

Only one container in a Pod. This is the case 90%+ of the time.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
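Assuming the manifest above is saved as nginx-pod.yaml (a hypothetical filename), the basic workflow for a bare Pod looks like this sketch:

```
kubectl apply -f nginx-pod.yaml     # create the Pod
kubectl get pods                    # check its status
kubectl describe pod nginx-pod      # inspect events and details
kubectl delete pod nginx-pod        # a bare Pod is gone for good -- nothing recreates it
```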

Multi-container Pod:

Used only when multiple containers need to work closely together.

| Pattern | Description | Example |
|---|---|---|
| Sidecar | Assists the main container | Log collection, proxy |
| Ambassador | Proxies external connections | Database proxy |
| Adapter | Standardizes output | Metrics conversion |

Sidecar example:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-collector
    image: fluentd:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}

The main container writes logs and the sidecar collects them; both containers share the same Volume.

Pod Lifecycle

Pods have clear lifecycle states:

| State | Description |
|---|---|
| Pending | Accepted by the cluster, but containers are not yet running (still scheduling or pulling images) |
| Running | At least one container is running |
| Succeeded | All containers completed successfully |
| Failed | At least one container terminated with a failure |
| Unknown | The Pod's status cannot be determined |

Lifecycle flow:

Create Pod
    ↓
Pending (scheduling, downloading images)
    ↓
Running (containers executing)
    ↓
Succeeded / Failed / Deleted

Important concept:

Pods don't "repair" themselves. If a Pod dies, it's dead. A new Pod will be created to replace it (if you're using Deployment).
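You can see this difference directly. In the sketch below, the pod and deployment names (including the generated suffix) are illustrative:

```
# A bare Pod: delete it and it stays gone
kubectl delete pod my-app

# A Deployment-managed Pod: delete it and a replacement appears
kubectl delete pod my-app-7c9d8f6b5-x2k4q
kubectl get pods    # a new Pod with a fresh generated name is already starting
```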

Complete Pod YAML Example

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
    environment: production
spec:
  containers:
  - name: app
    image: my-app:1.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: url
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
  restartPolicy: Always

Important field descriptions:

| Field | Purpose |
|---|---|
| labels | Labels for selection and classification |
| resources | Resource requests and limits |
| env | Environment variables |
| livenessProbe | Liveness check; the container is restarted if it fails |
| readinessProbe | Readiness check; the Pod only receives traffic once it passes |
| restartPolicy | Restart policy |

Deployment Deep Dive

In practice, you almost never create Pods directly. You create Deployments.

Why Deployment

Problems with creating Pods directly:

| Problem | Description |
|---|---|
| No auto-restart | If the Pod dies, it's gone |
| No scaling mechanism | Can't easily increase or decrease the count |
| Manual updates | Must delete old Pods and create new ones by hand |
| No version history | Can't roll back |

Deployment solves these problems:

| Feature | Description |
|---|---|
| Auto-maintained count | If one Pod dies, another is created |
| Declarative scaling | Just change a number |
| Rolling updates | Zero-downtime updates |
| Version history | Quick rollback |

Role of ReplicaSet

Deployment doesn't directly manage Pods—it manages ReplicaSets, and ReplicaSets manage Pods.

Deployment
    ↓ manages
ReplicaSet
    ↓ manages
Pod, Pod, Pod

Why this design?

During an update, the Deployment creates a new ReplicaSet and gradually scales it up while scaling the old one down. The old ReplicaSet is kept (at zero replicas) so you can roll back easily.
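You can observe this by listing ReplicaSets after an update. The output below is illustrative (names and ages are hypothetical):

```
kubectl get replicasets
# NAME                DESIRED   CURRENT   READY   AGE
# my-app-5f6d8b9c77   3         3         3       5m    <- new ReplicaSet
# my-app-7c9d8f6b5    0         0         0       1h    <- old, kept at 0 for rollback
```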

Deployment YAML Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

Key fields:

| Field | Description |
|---|---|
| replicas | How many Pods to maintain |
| selector | Which labels to select Pods by |
| template | The Pod template |
| strategy | Update strategy |

Rolling Update Mechanism

When you update a Deployment (e.g., change the image):

kubectl set image deployment/my-app app=my-app:2.0

Rolling update process:

Initial state: 3 v1 Pods

Step 1: Create 1 v2 Pod
v1: ●●●
v2: ●

Step 2: After v2 is Ready, delete 1 v1
v1: ●●
v2: ●

Step 3: Create 2nd v2
v1: ●●
v2: ●●

Step 4: Delete 1 v1
v1: ●
v2: ●●

Step 5: Create 3rd v2
v1: ●
v2: ●●●

Step 6: Delete last v1
v1:
v2: ●●●

Complete!

Strategy parameters:

| Parameter | Description | Example Value |
|---|---|---|
| maxSurge | Max extra Pods allowed during the update | 1 or 25% |
| maxUnavailable | Max unavailable Pods allowed during the update | 0 or 25% |
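RollingUpdate is the default. The other built-in strategy is Recreate, which terminates all old Pods before starting new ones: there is brief downtime, but two versions never run side by side (useful when the app can't tolerate mixed versions):

```yaml
spec:
  strategy:
    type: Recreate   # all old Pods stop before any new Pod starts
```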

Rollback Operations

If an update has problems, you can roll back quickly:

# View update history
kubectl rollout history deployment/my-app

# Rollback to previous version
kubectl rollout undo deployment/my-app

# Rollback to specific version
kubectl rollout undo deployment/my-app --to-revision=2

# Check rollback status
kubectl rollout status deployment/my-app



Service Network Access

We have Pods, but how does the outside world access them? That's Service's job.

Necessity of Service

Problem: Pod IPs change.

  • Pod IP changes after restart
  • Pod IP changes when rescheduled to another Node
  • New Pods have new IPs when scaling

Service provides stable access points:

| Feature | Description |
|---|---|
| Fixed IP | The Service's ClusterIP doesn't change |
| DNS name | Access by name, e.g. my-service |
| Load balancing | Traffic is automatically distributed across backend Pods |
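For example, from any Pod in the same namespace (pod and service names here are hypothetical), the DNS name resolves directly:

```
kubectl exec some-pod -- curl -s http://my-app-service
# The fully qualified form also works across namespaces:
# http://my-app-service.default.svc.cluster.local
```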

ClusterIP, NodePort, LoadBalancer

Kubernetes provides three Service types:

| Type | Access Method | Use Case |
|---|---|---|
| ClusterIP | Cluster-internal IP | Internal services (default) |
| NodePort | Node IP + port | Testing, development |
| LoadBalancer | External load balancer | Production external services |

ClusterIP example:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

Only accessible from inside the cluster: http://my-app-service:80

NodePort example:

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080

Accessible from outside the cluster: http://<node-ip>:30080

LoadBalancer example:

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

The cloud provider automatically creates a load balancer and assigns it an external IP.

Service and Endpoints

How does Service know which Pods to send traffic to?

Through Label Selector:

# Service
spec:
  selector:
    app: my-app  # Select Pods with this label

Endpoints auto-maintained:

Kubernetes automatically creates and maintains an Endpoints object that records the IPs of the Pods matching the selector.

# View Endpoints
kubectl get endpoints my-app-service

Output:

NAME             ENDPOINTS                                   AGE
my-app-service   10.1.0.5:8080,10.1.0.6:8080,10.1.0.7:8080  1h

Traffic path:

Client → Service IP → kube-proxy → Pod IP

ConfigMap and Secret

Applications need configuration. Hardcoding config in containers isn't a good idea.

Externalizing Configuration

Why externalize configuration?

| Problem | Description |
|---|---|
| Environment differences | Dev, test, and prod have different settings |
| Security | Passwords shouldn't live in code |
| Flexibility | Change config without rebuilding the image |

Kubernetes solution:

| Object | Purpose |
|---|---|
| ConfigMap | General settings (non-sensitive) |
| Secret | Sensitive information (passwords, keys) |

ConfigMap Usage

Create ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "db.example.com"
  DATABASE_PORT: "5432"
  LOG_LEVEL: "info"
  config.json: |
    {
      "feature_flags": {
        "new_ui": true
      }
    }

Usage Method 1: Environment Variables

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: DATABASE_HOST
    # Or load all at once
    envFrom:
    - configMapRef:
        name: app-config

Usage Method 2: Mount as Files

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config

ConfigMap contents become files under /etc/config/.
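By default every key becomes a file. To mount only specific keys, the volume definition can list items; a sketch based on the app-config ConfigMap above:

```yaml
  volumes:
  - name: config-volume
    configMap:
      name: app-config
      items:
      - key: config.json
        path: config.json   # only /etc/config/config.json is created
```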

Secret Usage

Create Secret:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=      # base64 encoded "admin"
  password: cGFzc3dvcmQ=  # base64 encoded "password"

Create with kubectl (more convenient):

kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=password

Usage:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password

Secret Security

Important reminders:

| Fact | Description |
|---|---|
| Base64 is not encryption | Anyone can decode it |
| etcd is not encrypted by default | Secrets are stored in plaintext |
| Extra measures needed | Enable etcd encryption at rest, or use external tools |
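You can verify this yourself in any shell; base64 is trivially reversible:

```shell
# Encoding is not encryption: anyone can decode the values in a Secret
echo -n 'admin' | base64          # prints YWRtaW4=
echo 'cGFzc3dvcmQ=' | base64 -d   # prints password
```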

Ways to enhance security:

| Method | Description |
|---|---|
| etcd encryption | Enable encryption at rest |
| RBAC | Restrict who can read Secrets |
| External solutions | HashiCorp Vault, AWS Secrets Manager |
| Sealed Secrets | Encrypted Secrets that are safe to store in Git |



Other Important Objects

Besides the core objects above, there are several commonly used objects to know.

Namespace

Purpose: Logically isolate resources

apiVersion: v1
kind: Namespace
metadata:
  name: production

Use cases:

| Scenario | Example |
|---|---|
| Environment isolation | dev, staging, production |
| Team isolation | team-a, team-b |
| Project isolation | project-x, project-y |

Common commands:

# List all namespaces
kubectl get namespaces

# Operate in specific namespace
kubectl get pods -n production

# Set default namespace
kubectl config set-context --current --namespace=production

PersistentVolume and PVC

Pods are ephemeral, but some data needs persistent storage.

PersistentVolume (PV): Storage space provided by cluster administrator

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /data/my-pv

PersistentVolumeClaim (PVC): User request for storage

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard

Use in Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc

Ingress

Purpose: HTTP/HTTPS routing, more flexible than LoadBalancer

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

Advantages:

| Feature | Description |
|---|---|
| Domain routing | Different domains route to different services |
| Path routing | /api and /web route to different services |
| TLS termination | Unified HTTPS handling |
| Cost | Multiple services share one load balancer |
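TLS termination is configured with a tls section referencing a Secret that holds the certificate; the secret name below is hypothetical:

```yaml
spec:
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls-cert   # a kubernetes.io/tls Secret containing cert and key
```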

For more detailed network explanation, see Kubernetes Network Architecture Guide.

DaemonSet, StatefulSet, Job

DaemonSet: Run one Pod on every Node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd:latest

Use cases: Log collection, monitoring agents, network plugins

StatefulSet: Stateful applications

| Feature | Description |
|---|---|
| Stable network identity | pod-0, pod-1, pod-2 |
| Stable storage | Each Pod gets its own PVC |
| Ordered deployment | Pods start and stop in order |

Use cases: Databases, distributed systems (Kafka, Elasticsearch)
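A minimal StatefulSet looks much like a Deployment, plus a serviceName (a headless Service that gives Pods stable DNS names) and, usually, volumeClaimTemplates. Names and the image below are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service providing stable per-Pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # each Pod (db-0, db-1, db-2) gets its own PVC
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```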

Job / CronJob: One-time or scheduled tasks

# Job: Run once
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-job
spec:
  template:
    spec:
      containers:
      - name: backup
        image: backup-tool:1.0
      restartPolicy: Never

# CronJob: Run on schedule
apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-backup
spec:
  schedule: "0 2 * * *"  # 2 AM daily
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup-tool:1.0
          restartPolicy: Never

FAQ: Common Questions

Q1: What's the difference between Pod and container?

Container is the actual running program. Pod is Kubernetes' scheduling unit, containing one or more containers.

Think of Pod as a "shell" for containers, providing shared network and storage.

Q2: When to use multi-container Pods?

Only when containers need to work closely together.

Criteria:

  • Must these containers run together?
  • Do they need to share filesystem?
  • Is their lifecycle the same?

If all answers are "yes," use multi-container Pod. Otherwise use multiple single-container Pods.

Q3: What's the difference between Deployment and ReplicaSet?

ReplicaSet only maintains Pod count. Deployment adds rolling updates, rollback, and other features on top of ReplicaSet.

In practice, you'll almost always use Deployment directly.

Q4: ClusterIP, NodePort, LoadBalancer—how to choose?

| Scenario | Choice |
|---|---|
| Internal service | ClusterIP |
| Development and testing | NodePort |
| Production external traffic | LoadBalancer or Ingress |

Q5: If ConfigMap changes, does Pod auto-update?

Depends on usage method:

| Method | Auto-update |
|---|---|
| Environment variables | ❌ Pod must be restarted |
| Volume mount | ✅ Updates automatically (after a delay; subPath mounts do not update) |

If you use environment variables, the Pod must be restarted after the ConfigMap changes.
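A common way to trigger that restart (deployment name hypothetical) is a rolling restart, which replaces all Pods with zero downtime:

```
kubectl rollout restart deployment/my-app
```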


Next Steps

After learning these core objects, you can:

| Goal | Action |
|---|---|
| Hands-on practice | Read Kubernetes Getting Started Tutorial |
| Deep dive into networking | Read Kubernetes Network Architecture Guide |
| Understand the architecture | Read Kubernetes Architecture Complete Guide |
| Choose tools | Read Kubernetes Tool Ecosystem Guide |


