
Kubernetes Architecture Complete Guide: Control Plane, Node & Components Explained

14 min read
#Kubernetes#K8s#Architecture#Control Plane#Worker Node#API Server#etcd#kubelet


To master Kubernetes, the first step is understanding its architecture.

Many people get stuck here: seeing a bunch of component names—API Server, etcd, kubelet—without knowing what each does or how they work together.

This article will walk you through every important Kubernetes component in the clearest way possible.

For a basic introduction to Kubernetes, see Kubernetes Complete Guide.


Architecture Overview: Control Plane vs Worker Node

A Kubernetes cluster consists of two types of roles:

| Role | Function | Analogy |
| --- | --- | --- |
| Control Plane | Decision center | Company headquarters |
| Worker Node | Execute work | Branch offices |

Control Plane

The Control Plane is the brain of Kubernetes.

It's responsible for:

  • Receiving user commands
  • Storing the state of the entire cluster
  • Deciding where Pods should run
  • Monitoring and maintaining the desired state

Components included:

  • API Server
  • etcd
  • Scheduler
  • Controller Manager

Worker Node

Worker Nodes are where containers actually run.

They're responsible for:

  • Running assigned Pods
  • Reporting Pod status to the Control Plane
  • Handling network traffic

Components included:

  • kubelet
  • kube-proxy
  • Container Runtime

Architecture Diagram

┌─────────────────────────────────────────────────────────────┐
│                      Control Plane                          │
│  ┌──────────────┐  ┌────────┐  ┌───────────┐  ┌──────────┐ │
│  │  API Server  │  │  etcd  │  │ Scheduler │  │Controller│ │
│  │              │  │        │  │           │  │ Manager  │ │
│  └──────────────┘  └────────┘  └───────────┘  └──────────┘ │
└─────────────────────────────────────────────────────────────┘
                              │
                              │ API Communication
                              ▼
┌────────────────────────────────────────────────────────────┐
│                        Worker Nodes                        │
│  ┌─────────────────────────┐  ┌─────────────────────────┐  │
│  │        Node 1           │  │        Node 2           │  │
│  │ ┌───────┐ ┌──────────┐  │  │ ┌───────┐ ┌──────────┐  │  │
│  │ │kubelet│ │kube-proxy│  │  │ │kubelet│ │kube-proxy│  │  │
│  │ └───────┘ └──────────┘  │  │ └───────┘ └──────────┘  │  │
│  │ ┌─────────────────┐     │  │ ┌─────────────────┐     │  │
│  │ │Container Runtime│     │  │ │Container Runtime│     │  │
│  │ └─────────────────┘     │  │ └─────────────────┘     │  │
│  │ ┌─────┐ ┌─────┐         │  │ ┌─────┐ ┌─────┐         │  │
│  │ │ Pod │ │ Pod │         │  │ │ Pod │ │ Pod │         │  │
│  │ └─────┘ └─────┘         │  │ └─────┘ └─────┘         │  │
│  └─────────────────────────┘  └─────────────────────────┘  │
└────────────────────────────────────────────────────────────┘

Key understanding:

All communication between components goes through the API Server. No component directly communicates with another (except API Server accessing etcd).


Control Plane Four Core Components Explained

The Control Plane has four core components, each with clear responsibilities.

API Server (kube-apiserver)

One-sentence explanation: The front door of Kubernetes—all operations must go through it.

Responsibilities:

| Function | Description |
| --- | --- |
| Request entry point | All kubectl commands are sent here |
| Request validation | Checks whether requests are valid |
| Authorization check | Confirms the user has permission to execute |
| Data access | The only component that can access etcd |

How it works:

User → kubectl → API Server → etcd
                       ↓
              Other components Watch for changes

Important characteristics:

  • RESTful API: All operations are standard HTTP requests
  • Watch mechanism: Other components can subscribe to resource changes
  • Horizontal scaling: Can run multiple API Servers for load balancing

Practical operations:

# Directly call API Server
kubectl get --raw /api/v1/namespaces/default/pods

# View API Server endpoint
kubectl cluster-info

etcd

One-sentence explanation: Kubernetes' database, storing the entire cluster state.

Responsibilities:

| Function | Description |
| --- | --- |
| State storage | All resource configurations and states |
| Distributed | Multiple nodes ensure high availability |
| Consistency | Uses the Raft protocol to guarantee data consistency |

What it stores:

  • Definitions of all resources like Pods, Deployments, Services
  • ConfigMap and Secret contents
  • Cluster configuration information

What it doesn't store:

  • Container logs
  • Application data
  • Monitoring metrics

Why it's important:

If etcd is unavailable, the Control Plane can no longer read or write cluster state; existing Pods keep running, but nothing can be created, changed, or repaired.

Backup recommendation:

# Backup etcd
ETCDCTL_API=3 etcdctl snapshot save backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

Scheduler (kube-scheduler)

One-sentence explanation: Decides which Node a Pod should run on.

Responsibilities:

| Function | Description |
| --- | --- |
| Monitor unscheduled Pods | Watches for Pods without an assigned Node |
| Evaluate Nodes | Filters and scores Nodes based on conditions |
| Bind Pods | Assigns each Pod to the most suitable Node |

Scheduling flow:

New Pod created (no Node specified)
        ↓
    Filtering phase
    - Enough resources?
    - Any taints?
    - Affinity rules?
        ↓
    Scoring phase
    - Resource utilization
    - Pod distribution
    - Custom priorities
        ↓
    Select highest-scoring Node
        ↓
    Bind Pod to Node

Filtering conditions (Predicates):

| Condition | Description |
| --- | --- |
| PodFitsResources | Is CPU/memory sufficient? |
| PodFitsHostPorts | Are there host port conflicts? |
| PodMatchNodeSelector | Does the Node match the Pod's NodeSelector? |
| PodToleratesNodeTaints | Can the Pod tolerate the Node's taints? |

Scoring factors (Priorities):

| Factor | Description |
| --- | --- |
| LeastRequestedPriority | Prefer Nodes with lower resource usage |
| BalancedResourceAllocation | Keep CPU and memory usage balanced |
| SelectorSpreadPriority | Spread Pods from the same Service across Nodes |
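The filter-then-score flow above can be sketched in a few lines of Python. This is a toy model, not the real kube-scheduler: the node data, the resource check, and the scoring formula (a crude stand-in for LeastRequestedPriority) are all simplified assumptions.

```python
# Toy model of the scheduler's two phases: filter, then score.
def schedule(pod, nodes):
    # Filtering phase: keep only nodes with enough free CPU/memory
    # and whose taints the pod tolerates.
    feasible = [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"]
        and n["free_mem"] >= pod["mem"]
        and set(n["taints"]) <= set(pod["tolerations"])
    ]
    if not feasible:
        return None  # no Node fits: the Pod stays Pending
    # Scoring phase: prefer the node with the most free resources.
    return max(feasible, key=lambda n: n["free_cpu"] + n["free_mem"])["name"]

nodes = [
    {"name": "node-1", "free_cpu": 2,  "free_mem": 4,  "taints": []},
    {"name": "node-2", "free_cpu": 8,  "free_mem": 16, "taints": []},
    {"name": "node-3", "free_cpu": 16, "free_mem": 64, "taints": ["gpu-only"]},
]
pod = {"cpu": 1, "mem": 2, "tolerations": []}
print(schedule(pod, nodes))  # node-3 is filtered out by its taint → node-2
```

The real scheduler runs many such predicates and priorities in sequence and weights the scores, but the shape of the decision is the same.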

Controller Manager (kube-controller-manager)

One-sentence explanation: Ensures the cluster's actual state matches the desired state.

Responsibilities:

| Function | Description |
| --- | --- |
| Monitor state | Continuously Watch resource changes |
| Compare differences | Actual state vs desired state |
| Execute adjustments | Drive the actual state toward the desired state |

Controllers included:

| Controller | Responsibility |
| --- | --- |
| Deployment Controller | Manage Deployments and ReplicaSets |
| ReplicaSet Controller | Maintain the Pod replica count |
| Node Controller | Monitor Node status |
| Service Controller | Manage cloud load balancers |
| Endpoint Controller | Maintain Service-to-Pod mappings |
| Namespace Controller | Handle Namespace deletion |

Control loop concept:

while true:
    current_state = get from API Server
    desired_state = get from resource definition
    if current_state != desired_state:
        execute adjustment action
    sleep(short interval)

Example: Deployment Controller

You say "I want 3 Pods":

  1. Controller sees currently 0 Pods
  2. Creates a new Pod
  3. Sees currently 1 Pod
  4. Creates another
  5. Sees currently 2 Pods
  6. Creates another
  7. Sees currently 3 Pods, matches desired state
  8. Continues monitoring, creates new Pods if any die
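The numbered steps above are exactly a reconciliation loop. A minimal runnable sketch in plain Python, with an in-memory `cluster` dict standing in for state that would really live in etcd behind the API Server:

```python
# Minimal reconciliation loop: drive the actual Pod count toward the desired count.
def reconcile(cluster, desired_replicas):
    """One pass of the control loop: compare actual vs desired, adjust one step."""
    actual = len(cluster["pods"])
    if actual < desired_replicas:
        cluster["pods"].append(f"pod-{actual + 1}")   # create a missing Pod
    elif actual > desired_replicas:
        cluster["pods"].pop()                          # remove an extra Pod
    return len(cluster["pods"]) == desired_replicas    # True once converged

cluster = {"pods": []}
passes = 0
while not reconcile(cluster, desired_replicas=3):      # "I want 3 Pods"
    passes += 1
print(cluster["pods"])  # ['pod-1', 'pod-2', 'pod-3']
```

If a Pod "dies" (is removed from the list), the next pass recreates it — the loop never stops comparing, which is what keeps the cluster self-healing.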



Worker Node Component Analysis

Each Worker Node runs three core components.

kubelet

One-sentence explanation: The agent on the Node, responsible for managing all Pods on that Node.

Responsibilities:

| Function | Description |
| --- | --- |
| Receive commands | Get Pod specs from the API Server |
| Start containers | Execute through the Container Runtime |
| Health checks | Run Liveness and Readiness Probes |
| Report status | Periodically report to the API Server |

Operation flow:

API Server
    ↓ Watch PodSpec
kubelet
    ↓ Call CRI
Container Runtime
    ↓ Create container
Container running

Health check types:

| Type | Purpose | On Failure |
| --- | --- | --- |
| Liveness Probe | Is the container alive? | Restart the container |
| Readiness Probe | Is the container ready? | Remove from Service endpoints |
| Startup Probe | Is startup complete? | Delay other checks |
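The table's failure behaviors can be expressed as a small decision function. This is a simplified sketch of the idea; the real kubelet also involves failure thresholds, timing, and the Pod's restart policy.

```python
# Simplified sketch of what kubelet does when each probe type fails.
def on_probe_failure(probe_type, startup_complete=True):
    if probe_type == "startup" or not startup_complete:
        # Until the startup probe succeeds, liveness/readiness checks are held off.
        return "keep waiting; liveness/readiness checks are deferred"
    if probe_type == "liveness":
        return "restart container"
    if probe_type == "readiness":
        return "remove Pod from Service endpoints; container keeps running"
    raise ValueError(f"unknown probe type: {probe_type}")

print(on_probe_failure("liveness"))   # restart container
print(on_probe_failure("readiness"))  # remove Pod from Service endpoints; container keeps running
```

The key asymmetry to remember: a failed liveness probe is destructive (restart), while a failed readiness probe only takes the Pod out of rotation.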

Important configuration:

# kubelet configuration example
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80

kube-proxy

One-sentence explanation: Handles network rules, implementing Service load balancing.

Responsibilities:

| Function | Description |
| --- | --- |
| Maintain network rules | iptables or IPVS rules |
| Implement Services | ClusterIP, NodePort, LoadBalancer |
| Load balancing | Distribute traffic to backend Pods |

Operating modes:

| Mode | Description | Performance |
| --- | --- | --- |
| iptables | Default mode | Medium |
| IPVS | For large-scale clusters | Better |
| userspace | Legacy mode | Poor |

iptables mode operation:

External traffic → Service IP → iptables rules → Pod IP

kube-proxy monitors Service and Endpoint changes, automatically updating iptables rules.
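Conceptually, the rules kube-proxy programs amount to "rewrite the Service IP to one randomly chosen backend Pod IP." A toy model in Python — the IPs here are made up, and real iptables achieves the random pick with probability-weighted DNAT rules rather than application code:

```python
import random

# Toy model of what kube-proxy's rules achieve for a ClusterIP Service:
# traffic to the Service IP is rewritten to one randomly chosen backend Pod.
endpoints = {
    "10.96.0.10": ["10.244.1.5", "10.244.2.7", "10.244.2.8"],  # Service IP -> Pod IPs
}

def route(dest_ip):
    backends = endpoints.get(dest_ip)
    if backends:
        return random.choice(backends)  # load-balance across ready Pods
    return dest_ip                      # not a Service IP: deliver unchanged

print(route("10.96.0.10"))  # one of the three Pod IPs
```

When a Pod fails its readiness probe, it disappears from the endpoint list, and kube-proxy rewrites the rules so no traffic reaches it — the two mechanisms work together.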

Container Runtime

One-sentence explanation: The program that actually runs containers.

Common choices:

| Runtime | Description | Use Case |
| --- | --- | --- |
| containerd | Open-sourced by Docker | Most common, default choice |
| CRI-O | Developed by Red Hat | OpenShift default |
| Docker | No longer directly supported | Via cri-dockerd |

Why doesn't Kubernetes directly support Docker anymore?

Kubernetes 1.24 removed dockershim. But this doesn't mean you can't use Docker images.

Key understanding:

  • Docker images (OCI format): ✅ Fully supported
  • Docker as Runtime: ❌ Requires additional cri-dockerd

In most cases, just use containerd directly.


How Components Communicate

Understanding how components communicate is key to deeply understanding Kubernetes.

Communication Principles

Core principle: All communication goes through the API Server.

| Communication Direction | Description |
| --- | --- |
| User → API Server | kubectl or API calls |
| API Server → etcd | Read/write cluster state |
| Controller → API Server | Watch resource changes |
| Scheduler → API Server | Get unscheduled Pods |
| kubelet → API Server | Report Node and Pod status |

Why this design?

  • Unified access point for easier authentication and authorization
  • All operations are recorded
  • Easy to scale horizontally

Request Flow Analysis

Example: Creating a Deployment

kubectl apply -f deployment.yaml

Complete flow:

1. kubectl sends POST request to API Server
   POST /apis/apps/v1/namespaces/default/deployments

2. API Server validates and processes
   - Validate YAML format
   - Check user permissions
   - Write to etcd

3. Deployment Controller receives notification
   - Watch mechanism detects new Deployment
   - Creates corresponding ReplicaSet

4. ReplicaSet Controller receives notification
   - Detects new ReplicaSet
   - Creates specified number of Pods

5. Scheduler receives notification
   - Detects unscheduled Pods
   - Selects suitable Node
   - Updates Pod's NodeName

6. kubelet receives notification
   - Detects assigned Pod
   - Calls Container Runtime to start container
   - Reports Pod status

Watch Mechanism

Watch is one of Kubernetes' core mechanisms.

How it works:

Controller                    API Server
    │                              │
    │── Watch /api/v1/pods ───────▶│
    │                              │
    │◀─── Initial data list ───────│
    │                              │
    │◀─── Change event (add) ──────│
    │◀─── Change event (modify) ───│
    │◀─── Change event (delete) ───│
    │                              │

Advantages:

  • Real-time notifications, no polling needed
  • Reduces API Server load
  • Reduces network traffic
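In code, a client that follows this pattern first lists everything once, then applies change events to a local cache. A sketch with an in-memory event stream standing in for the API Server (real client libraries such as client-go wrap exactly this list-then-watch pattern):

```python
# List-then-watch: take one full snapshot, then keep a cache in sync via events.
def sync_cache(initial_list, events):
    cache = {pod["name"]: pod for pod in initial_list}  # initial LIST
    for event in events:                                 # subsequent WATCH stream
        pod = event["object"]
        if event["type"] in ("ADDED", "MODIFIED"):
            cache[pod["name"]] = pod
        elif event["type"] == "DELETED":
            cache.pop(pod["name"], None)
    return cache

initial = [{"name": "web-1", "phase": "Running"}]
events = [
    {"type": "ADDED",    "object": {"name": "web-2", "phase": "Pending"}},
    {"type": "MODIFIED", "object": {"name": "web-2", "phase": "Running"}},
    {"type": "DELETED",  "object": {"name": "web-1", "phase": "Running"}},
]
print(sorted(sync_cache(initial, events)))  # ['web-2']
```

The cache stays current without ever re-listing, which is why Watch scales so much better than polling.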

Reconciliation Loop

Each Controller runs a control loop:

┌────────────────────────────────────┐
│  ┌──────────┐      ┌──────────┐    │
│  │ Desired  │      │  Actual  │    │
│  │  State   │      │  State   │    │
│  │  (Spec)  │      │ (Status) │    │
│  └────┬─────┘      └────┬─────┘    │
│       │                 │          │
│       └────────┬────────┘          │
│                ▼                   │
│          ┌──────────┐              │
│          │ Compare  │              │
│          └────┬─────┘              │
│               ▼                    │
│     ┌──────────────────┐           │
│     │ Difference?      │           │
│     │ Execute adjust   │           │
│     └──────────────────┘           │
└────────────────────────────────────┘

This is the essence of "declarative":

You say "what you want," and Kubernetes figures out how to achieve it.




High Availability Architecture Design

Production environments require high availability architecture.

Single Point of Failure Risks

| Component | What happens if it fails |
| --- | --- |
| API Server | Can't issue commands, but existing Pods keep running |
| etcd | Cluster completely non-functional |
| Scheduler | New Pods can't be scheduled |
| Controller Manager | No automatic self-healing |
| kubelet | Pods on that Node can't be managed |

Multi-Master Architecture

Production environments should have at least 3 Control Plane nodes.

Architecture diagram:

              Load Balancer
                   │
       ┌───────────┼───────────┐
       ▼           ▼           ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Master 1 │ │ Master 2 │ │ Master 3 │
│          │ │          │ │          │
│API Server│ │API Server│ │API Server│
│Scheduler │ │Scheduler │ │Scheduler │
│Controller│ │Controller│ │Controller│
│  etcd    │ │  etcd    │ │  etcd    │
└──────────┘ └──────────┘ └──────────┘

How it works:

| Component | HA Mechanism |
| --- | --- |
| API Server | Multiple instances behind a load balancer |
| etcd | Raft consensus; requires a majority of nodes |
| Scheduler | Leader election; only one instance is active |
| Controller Manager | Leader election; only one instance is active |

etcd Cluster

etcd uses the Raft consensus protocol, requiring an odd number of nodes.

| Node Count | Fault Tolerance | Recommendation |
| --- | --- | --- |
| 1 | 0 | Test environment |
| 3 | 1 | Minimum for production |
| 5 | 2 | Large production |
| 7 | 3 | Very large scale |

Why odd numbers?

Raft requires a majority (quorum) to agree on every write. With 3 nodes the quorum is 2, so 1 failure is tolerated; with 4 nodes the quorum is 3, so still only 1 failure is tolerated. An even node count adds cost without adding fault tolerance.
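The quorum arithmetic generalizes: a cluster of n nodes needs n//2 + 1 members for a majority, so it tolerates (n - 1)//2 failures. A quick check in Python reproduces the table above:

```python
# Raft quorum math: majority = n//2 + 1, so fault tolerance = n - majority.
def quorum(n):
    return n // 2 + 1

def fault_tolerance(n):
    return n - quorum(n)   # equivalently (n - 1) // 2

for n in (1, 3, 4, 5, 7):
    print(f"{n} nodes: quorum {quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
# 3 and 4 nodes both tolerate only 1 failure: the even node buys nothing.
```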

Production Environment Recommendations

Control Plane:

| Item | Recommendation |
| --- | --- |
| Node count | At least 3 |
| Resources | 4 CPU, 16 GB RAM or more |
| Disk | SSD, especially for etcd |
| Network | Low latency, stable |

etcd:

| Item | Recommendation |
| --- | --- |
| Separate deployment | Consider for large-scale clusters |
| Backup | Regular automatic backups |
| Monitoring | Monitor latency and disk usage |

Worker Node:

| Item | Recommendation |
| --- | --- |
| Count | Based on load |
| Distribution | Spread across availability zones |
| Resource reservation | Reserve resources for system components |

For more cloud service architecture choices, see Kubernetes Cloud Services Complete Comparison.


FAQ: Common Questions

Q1: Can Control Plane run Pods?

Not by default, but can be configured.

Control Plane nodes have Taints by default that prevent regular Pods from being scheduled there.

# View Taints
kubectl describe node master-node | grep Taints

# Remove Taint (not recommended for production)
kubectl taint nodes master-node node-role.kubernetes.io/control-plane:NoSchedule-

It's recommended to let Control Plane focus on management in production.

Q2: Should etcd data be backed up?

Absolutely.

etcd stores the entire cluster state; nothing can be recovered without it.

Backup strategy:

  • Regular automatic backup (at least once daily)
  • Backup to remote storage
  • Periodically test restoration

Q3: Why do Scheduler and Controller Manager need Leader Election?

To avoid conflicts.

Imagine if 3 Schedulers were running simultaneously—the same Pod might be assigned to different Nodes, causing chaos.

Leader Election ensures only one instance is making decisions at a time; others are standby.
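The idea can be sketched as a lease: whoever holds an unexpired lease is the active instance; everyone else stands by and takes over only after the lease expires. This is a toy model — real Kubernetes components coordinate through a Lease object in the API, with renewal deadlines and retry timing.

```python
import time

# Toy lease-based leader election: the lease names the current leader and
# expires unless renewed; standbys can only take over after expiry.
lease = {"holder": None, "expires": 0.0}
LEASE_SECONDS = 2.0

def try_acquire(candidate, now):
    if lease["holder"] is None or now >= lease["expires"]:
        lease["holder"] = candidate             # lease free or expired: take it
    if lease["holder"] == candidate:
        lease["expires"] = now + LEASE_SECONDS  # the leader renews on each pass
        return True                             # I am the active instance
    return False                                # standby: do nothing

t = time.monotonic()
print(try_acquire("scheduler-1", t))      # True: first to acquire
print(try_acquire("scheduler-2", t + 1))  # False: lease still held
print(try_acquire("scheduler-2", t + 5))  # True: lease expired, takeover
```

Because the leader keeps renewing, failover only happens when it actually stops — exactly the behavior you want from a hot standby.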

Q4: What happens if kubelet fails?

Pods on that Node become orphaned.

  • Pods will continue running (unless the container itself fails)
  • But can't be restarted, updated, or health-checked
  • Control Plane will mark the Node as NotReady
  • After a period, Pods will be rescheduled to other Nodes

Q5: How to check component status?

# Check component status (componentstatuses is deprecated since v1.19;
# checking the kube-system Pods below is the modern alternative)
kubectl get componentstatuses

# Check Node status
kubectl get nodes

# Check system Pods
kubectl get pods -n kube-system

# Check etcd status (if you have permission; the etcd Pod name and any
# required TLS flags vary by cluster setup)
kubectl exec -n kube-system etcd-master -- etcdctl endpoint health

Next Steps

After understanding Kubernetes architecture, you can:

| Goal | Action |
| --- | --- |
| Learn core objects | Read Kubernetes Core Objects Tutorial |
| Understand network architecture | Read Kubernetes Network Architecture Guide |
| Choose cloud services | Read Kubernetes Cloud Services Comparison |
| Hands-on practice | Read Kubernetes Getting Started Tutorial |


