Kubernetes Network Architecture Complete Guide: CNI, Service & Ingress Explained

Kubernetes networking is one of the most complex parts of the system.
Pod IP, Service IP, ClusterIP, NodePort, Ingress: these concepts are easy to confuse. But once you understand the network model, everything falls into place.
This article breaks down the Kubernetes network architecture in full.
For a basic introduction to Kubernetes, see Kubernetes Complete Guide.
Kubernetes Network Model
Kubernetes network design follows several basic principles.
Basic Principles
Kubernetes networking has four core principles:
| Principle | Description |
|---|---|
| Pod to Pod | Any Pod can directly communicate with any other Pod without NAT |
| Node to Pod | Nodes can directly communicate with Pods without NAT |
| Pod sees its IP | The IP a Pod sees for itself is the same as what others see |
| Flat network | All Pods are in the same flat network space |
Why this design?
Simplify network configuration. Traditional container networking requires handling port mapping, which is complex. Kubernetes gives each Pod its own IP, like an independent machine.
IP Address Types
There are several types of IPs in Kubernetes:
| IP Type | Description | Example |
|---|---|---|
| Node IP | Node's real IP | 192.168.1.10 |
| Pod IP | Pod's IP, changes frequently | 10.244.1.5 |
| Cluster IP | Service's virtual IP | 10.96.0.100 |
| External IP | External-facing IP | 35.200.x.x |
IP range configuration:
Node network: 192.168.0.0/16 (company network)
Pod network: 10.244.0.0/16 (Kubernetes internal)
Service network: 10.96.0.0/12 (virtual IP)
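These ranges are typically fixed at cluster creation time. As an illustration only (the exact mechanism depends on how your cluster is installed), with kubeadm the Pod and Service subnets above would be declared in the `ClusterConfiguration`:

```yaml
# Hypothetical kubeadm ClusterConfiguration fragment showing
# where the Pod and Service CIDRs from the table are set
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16      # Pod network; must match the CNI plugin's config
  serviceSubnet: 10.96.0.0/12   # Service (ClusterIP) network
```

The Pod subnet must agree with what the CNI plugin expects; a mismatch is a common cause of cross-Node connectivity failures.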
Network Traffic Paths
Pod to Pod (same Node):
Pod A → veth → cbr0 (bridge) → veth → Pod B
Pod to Pod (cross Node):
Pod A → veth → cbr0 → Node A network → Node B network → cbr0 → veth → Pod B
External to Pod:
External traffic → Load Balancer → NodePort → Service → Pod
CNI: Container Network Interface
Kubernetes doesn't handle networking itself—it uses CNI plugins.
What is CNI
CNI (Container Network Interface) is the standard interface for container networking.
Kubernetes calls CNI plugins to:
- Create Pod networks
- Assign IP addresses
- Configure routing rules
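Concretely, the kubelet reads a CNI configuration file from `/etc/cni/net.d/` on each Node and invokes the named plugin for every Pod. A minimal illustrative config using the standard `bridge` plugin with `host-local` IPAM might look like this (values are examples, not from any specific distribution):

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

Each Node gets its own slice of the Pod CIDR (here `10.244.1.0/24`), which is how every Pod receives a unique cluster-wide IP.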
Common CNI Plugins
| Plugin | Features | Use Case |
|---|---|---|
| Calico | Full-featured, supports Network Policy | Production first choice |
| Flannel | Simple, lightweight | Learning |
| Cilium | eBPF-based, high performance | Large scale, high performance |
| Weave | Simple, supports encryption | Small clusters |
| AWS VPC CNI | AWS native integration | EKS |
| Azure CNI | Azure native integration | AKS |
| GKE CNI | Google native integration | GKE |
Calico Explained
Calico is one of the most widely used CNI plugins.
Features:
| Feature | Description |
|---|---|
| BGP routing | Uses standard routing protocol |
| Network Policy | Full support |
| Performance | Near-native network performance |
| Scalability | Suitable for large clusters |
Install Calico:
```bash
# Install the Calico operator
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml
# Install Calico itself
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml
```
Verify installation:
```bash
kubectl get pods -n calico-system
```
Cilium Explained
Cilium is a next-generation CNI based on eBPF technology.
Features:
| Feature | Description |
|---|---|
| eBPF | Kernel-level processing, excellent performance |
| Observability | Built-in Hubble monitoring |
| Service Mesh | Can replace sidecars |
| Security | Layer 7 network policies |
Best for:
- Large clusters (>1000 nodes)
- High performance requirements
- Advanced observability needs
Service: Service Discovery and Load Balancing
Pod IPs change; Service provides stable access points.
Purpose of Service
Problem: Pod IPs are unstable
| Situation | Does Pod IP change? |
|---|---|
| Pod restart | Yes |
| Pod rescheduled | Yes |
| Scale up/down | New Pod, new IP |
Service solution:
| Function | Description |
|---|---|
| Stable endpoint | Service IP doesn't change |
| DNS name | Access by name, like my-service |
| Load balancing | Auto-distribute to backend Pods |
| Service discovery | Auto-track Pod changes |
ClusterIP
Default type, only accessible from within the cluster.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP      # Default, can be omitted
  selector:
    app: my-app
  ports:
    - port: 80         # Service port
      targetPort: 8080 # Pod port
```
Access method:
```bash
# From a Pod inside the cluster
curl http://my-service:80
curl http://my-service.default.svc.cluster.local:80
```
DNS format: `<service-name>.<namespace>.svc.cluster.local`
NodePort
Opens a port on every Node.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # Range: 30000-32767
```
Access method:
```bash
# From outside the cluster
curl http://<node-ip>:30080
```
Characteristics:
| Pros | Cons |
|---|---|
| Simple, no LB needed | Limited port range |
| Any Node is accessible | Need to know Node IP |
| Good for testing | Not suitable for production |
LoadBalancer
Uses cloud load balancer.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```
Cloud will automatically:
- Create load balancer
- Assign external IP
- Configure health checks
Access method:
```bash
# Get the external IP (EXTERNAL-IP column)
kubectl get svc my-service
curl http://<external-ip>:80
```
Downsides:
| Downside | Description |
|---|---|
| Cost | One LB per Service, expensive |
| Not available locally | Requires cloud environment |
ExternalName
Maps to external DNS name.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
```
Use cases:
- Access external services
- Abstraction layer during migration
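An ExternalName Service does no proxying: cluster DNS simply returns a CNAME record pointing at the external name. A rough sketch of what this looks like from inside a Pod (`external-db` as defined above; the `psql` client and database are hypothetical):

```bash
# Resolving the Service name returns a CNAME to db.example.com
nslookup external-db

# Applications use the stable in-cluster name; if the external
# database moves, only the Service definition changes
psql -h external-db -p 5432 mydb
```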
Headless Service
No load balancing needed, directly get Pod IPs.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  clusterIP: None  # Key setting
  selector:
    app: my-app
  ports:
    - port: 80
```
Use cases:
- StatefulSet (like databases)
- Need to connect directly to specific Pods
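The difference is visible in DNS: instead of a single virtual IP, a headless Service resolves to the individual Pod IPs. A sketch (the StatefulSet Pod name `web-0` is illustrative and assumes a StatefulSet whose `serviceName` is `my-headless`):

```bash
# Returns one A record per backing Pod, not a single ClusterIP
nslookup my-headless

# With a StatefulSet, each Pod additionally gets a stable per-Pod name:
#   web-0.my-headless.default.svc.cluster.local
```

This is why databases and other stateful workloads pair StatefulSets with headless Services: clients can address a specific replica deterministically.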
Ingress: HTTP Routing
A LoadBalancer Service provisions one load balancer per Service, which quickly gets expensive. Ingress solves this.
Purpose of Ingress
Ingress provides:
| Function | Description |
|---|---|
| Path routing | /api → Service A, /web → Service B |
| Domain routing | api.example.com → Service A |
| TLS termination | Unified HTTPS handling |
| Cost savings | Multiple Services share one LB |
Ingress Controller
Ingress itself is just rule definition; Ingress Controller implements it.
| Controller | Features |
|---|---|
| NGINX Ingress | Most common, full-featured |
| Traefik | Auto service discovery |
| HAProxy | High performance |
| AWS ALB | AWS native |
| GKE Ingress | GCP native |
Install NGINX Ingress:
```bash
# Using Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx
```
Ingress Configuration Examples
Basic path routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```
Multi-domain routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```
TLS Configuration
Create TLS Secret:
```bash
kubectl create secret tls my-tls-secret \
  --cert=path/to/cert.pem \
  --key=path/to/key.pem
```
Use in Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: my-tls-secret
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```
cert-manager Automatic Certificates
cert-manager can automatically obtain certificates from Let's Encrypt.
Install:
```bash
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --set installCRDs=true
```
Configure Issuer:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
```
Auto-obtain certificate:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auto-tls-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls  # cert-manager will auto-create this Secret
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```
Network Policy: Network Security
By default, all Pods can communicate with each other. Network Policy can restrict this.
Why Network Policy
Default behavior: Any Pod can connect to any Pod
Problems:
- Database shouldn't be directly accessible from frontend
- Different namespaces should be isolated
- Only specific services should connect externally
Basic Concepts
Network Policy controls:
| Direction | Description |
|---|---|
| Ingress | Traffic entering a Pod |
| Egress | Traffic leaving a Pod |
Selectors:
| Selector | Description |
|---|---|
| podSelector | Select Pods to apply policy to |
| namespaceSelector | Select source/destination namespace |
| ipBlock | Select IP range |
Example: Restrict Ingress Traffic
Only allow specific Pods to access database:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 5432
```
This policy:
- Applies to Pods with `app: database`
- Only allows connections from Pods with `app: backend`
- Only opens port 5432
Example: Restrict Egress Traffic
Pod can only connect to specific external IPs:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-policy
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
      ports:
        - protocol: TCP
          port: 443
```
Default Deny All
Security best practice: Default deny, explicitly allow.
```yaml
# Default deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}  # Select all Pods
  policyTypes:
    - Ingress
  # No ingress rules = deny all
```

```yaml
# Default deny all egress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  # No egress rules = deny all
```
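The two directions can also be denied in a single policy, a common variant of the pattern above:

```yaml
# Deny all ingress and egress for every Pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

After applying this, every allowed flow must be opened with an explicit policy.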
Important Notes
| Note | Description |
|---|---|
| CNI support | Not all CNIs support Network Policy |
| DNS access | Remember to allow DNS (kube-dns) |
| Testing | Verify in test environment first |
CNIs that support Network Policy:
- Calico ✅
- Cilium ✅
- Weave ✅
- Flannel ❌ (needs extra configuration)
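A practical consequence of the DNS note above: once egress is denied by default, Pods can no longer resolve names. A sketch of an allow-DNS policy (this assumes the standard `k8s-app: kube-dns` label on CoreDNS and the automatic `kubernetes.io/metadata.name` namespace label available since Kubernetes 1.21):

```yaml
# Allow all Pods to reach cluster DNS in kube-system on port 53
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```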
Network Troubleshooting
Network problems are hardest to debug. Here are common methods.
Common Issues
| Issue | Possible Cause |
|---|---|
| Pod can't reach Service | Wrong Service selector, empty Endpoints |
| External can't reach in | Wrong Ingress config, firewall |
| DNS resolution fails | CoreDNS issues, Network Policy blocking |
| Cross-Node connectivity fails | CNI issues, network config |
Debugging Commands
Check Service and Endpoints:
```bash
# View the Service
kubectl get svc my-service -o wide
# View Endpoints (should list Pod IPs)
kubectl get endpoints my-service
# If Endpoints is empty, check the selector
kubectl describe svc my-service
```
Test connectivity:
```bash
# Create a debug Pod
kubectl run debug --image=nicolaka/netshoot -it --rm -- bash
# Inside the Pod:
curl http://my-service:80
nslookup my-service
ping <pod-ip>
```
Check DNS:
```bash
# View CoreDNS Pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
# Test DNS resolution
kubectl run test --image=busybox -it --rm -- nslookup kubernetes
```
Check Network Policy:
```bash
# List Network Policies
kubectl get networkpolicy
# View details
kubectl describe networkpolicy <policy-name>
```
Common Debugging Tools
| Tool | Purpose |
|---|---|
| netshoot | Network debugging Swiss army knife |
| tcpdump | Packet capture |
| traceroute | Route tracing |
| curl/wget | HTTP testing |
| nslookup/dig | DNS testing |
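When a Pod's own image lacks these tools, an ephemeral debug container can borrow its network namespace. A sketch (`my-pod` is a placeholder; ephemeral containers require a reasonably recent Kubernetes version):

```bash
# Attach a netshoot container sharing my-pod's namespaces
kubectl debug -it my-pod --image=nicolaka/netshoot --target=my-pod

# Then capture the Pod's own traffic from inside:
tcpdump -i eth0 port 8080
```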
FAQ: Common Questions
Q1: Pod IPs keep changing. What should I do?
Use a Service.
A Service provides a stable IP and DNS name. Applications should connect to the Service, not to individual Pod IPs.
Q3: How do I access a ClusterIP Service from outside the cluster?
You can't directly. A ClusterIP is only reachable from inside the cluster.
External access methods:
- NodePort
- LoadBalancer
- Ingress
- kubectl port-forward (for debugging)
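For ad-hoc debugging, `kubectl port-forward` is the quickest of these, since it needs no cluster-side configuration (Service and port names here match the earlier examples):

```bash
# Forward local port 8080 to port 80 of the Service
kubectl port-forward svc/my-service 8080:80

# In another terminal:
curl http://localhost:8080
```

The tunnel lives only as long as the `kubectl` process, so it is suitable for debugging, not for serving traffic.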
Q3: Why isn't Ingress working?
Common reasons:
| Reason | Check Method |
|---|---|
| No Ingress Controller installed | `kubectl get pods -A` and look for controller Pods |
| Wrong ingressClassName | Check the Ingress YAML |
| Service doesn't exist | `kubectl get svc` |
| DNS not pointing at the entry point | Check DNS records |
Q4: Can multiple Services share one LoadBalancer?
Use Ingress.
Ingress lets multiple Services share one entry point.
Q5: Does Network Policy affect performance?
Negligible.
Modern CNIs (like Calico, Cilium) handle Network Policy very efficiently. Unless rule count is extremely large, don't worry.
Next Steps
After understanding Kubernetes networking, you can:
| Goal | Action |
|---|---|
| Understand architecture | Read Kubernetes Architecture Complete Guide |
| Learn objects | Read Kubernetes Core Objects Tutorial |
| Hands-on practice | Read Kubernetes Getting Started Tutorial |
| Choose cloud | Read Kubernetes Cloud Services Comparison |
Further Reading
- Kubernetes Complete Guide - K8s introduction overview
- Kubernetes Architecture Complete Guide - Control Plane and Nodes
- Kubernetes Core Objects Tutorial - Pod, Service explained
- Kubernetes Tool Ecosystem Guide - Service mesh and tools
- Kubernetes Cloud Services Comparison - Network differences across clouds