OpenShift Installation Tutorial: From Local to Production Environment Complete Guide [2026]
![OpenShift Installation Tutorial: From Local to Production Environment Complete Guide [2026]](/images/blog/openshift/openshift-installation-hero.webp)
OpenShift installation is the first hurdle for many people. One look at the official documentation's prerequisites (DNS, load balancers, certificates) and it's tempting to give up before you even start.
But it's not as hard as it looks. This tutorial takes you from the simplest local setup all the way to a production deployment. If you're new to OpenShift, we recommend reading the OpenShift Complete Guide first.
Installation Methods Overview
IPI vs UPI
OpenShift has two main installation methods:
| Method | Full Name | Description | Use Case |
|---|---|---|---|
| IPI | Installer-Provisioned Infrastructure | Installer automatically creates infrastructure | Cloud environments |
| UPI | User-Provisioned Infrastructure | User prepares infrastructure first | Bare metal, VMware, special requirements |
IPI Characteristics:
- Simplest, almost one-click installation
- Installer automatically creates VMs, network, load balancers
- Suitable for AWS, Azure, GCP and other major clouds
UPI Characteristics:
- Need to prepare all infrastructure first
- Complete control over every aspect
- Suitable for bare metal, VMware, environments with special network requirements
Installation Options Overview
| Option | Complexity | Purpose | Cost |
|---|---|---|---|
| OpenShift Local | Low | Local development testing | Free |
| Developer Sandbox | Low | Cloud learning environment | Free |
| IPI (Cloud) | Medium | Cloud production environment | Cloud fees + Subscription |
| UPI (Bare Metal) | High | Self-built data center | Hardware + Subscription |
| ROSA/ARO | Low | Managed services | Managed fees |
Pre-Installation Planning
Regardless of installation method, think through these first:
Cluster Size:
- Development/Test: 3 Master + 2 Worker
- Small Production: 3 Master + 3 Worker
- Medium/Large Production: 3 Master + 6+ Worker
Network Planning:
- Pod CIDR: Default 10.128.0.0/14
- Service CIDR: Default 172.30.0.0/16
- Must not overlap with your existing network ranges
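As a sanity check on those defaults: the Pod CIDR together with the per-node hostPrefix (set later in install-config.yaml) determines how many nodes and how many pods per node the cluster can address. A quick sketch of the arithmetic:

```shell
# Capacity implied by the default Pod CIDR (10.128.0.0/14) and the
# default hostPrefix of 23 (each node is handed a /23 slice).
cluster_prefix=14
host_prefix=23
max_nodes=$(( 2 ** (host_prefix - cluster_prefix) ))   # number of /23 slices in a /14
pods_per_node=$(( 2 ** (32 - host_prefix) - 2 ))       # usable addresses per /23
echo "max nodes: ${max_nodes}, pods per node: ${pods_per_node}"
```

With the defaults this prints 512 nodes and 510 pods per node, comfortably above the 250-pods-per-node default limit, which is why the defaults rarely need changing unless they collide with existing networks.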
DNS Planning:
- API endpoint: api.cluster.example.com
- Ingress wildcard: *.apps.cluster.example.com
Environment Requirements
Hardware Requirements
Control Plane Nodes (per node):
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 8 vCPU |
| Memory | 16 GB | 32 GB |
| Storage | 100 GB SSD | 200 GB SSD |
Worker Nodes (per node):
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 2 vCPU | 4+ vCPU |
| Memory | 8 GB | 16+ GB |
| Storage | 100 GB | Per application needs |
Bootstrap Node (temporary during install):
- Same spec as Control Plane
- Can be deleted after installation completes
Network Requirements
Required Ports:
| Port | Purpose | Source |
|---|---|---|
| 6443 | Kubernetes API | All nodes, management |
| 22623 | Machine Config Server | Nodes |
| 443 | HTTPS Ingress | External traffic |
| 80 | HTTP Ingress | External traffic |
Inter-node Communication:
- Between Control Plane nodes: etcd (ports 2379-2380)
- Between all nodes: OVN-Kubernetes (Geneve 6081/UDP, host services 9000-9999)
DNS Configuration
DNS is key to a successful installation; a large share of failed installs trace back to DNS mistakes.
Required DNS Records:
# API endpoint (points to load balancer or Masters)
api.cluster.example.com. A 192.168.1.10
# API internal (points to load balancer or Masters)
api-int.cluster.example.com. A 192.168.1.10
# Ingress wildcard (points to Router)
*.apps.cluster.example.com. A 192.168.1.20
# etcd records (one A record per Master; these and the SRV records below are only required on OpenShift 4.4 and earlier, newer releases discover etcd internally)
etcd-0.cluster.example.com. A 192.168.1.11
etcd-1.cluster.example.com. A 192.168.1.12
etcd-2.cluster.example.com. A 192.168.1.13
# etcd SRV records
_etcd-server-ssl._tcp.cluster.example.com. SRV 0 10 2380 etcd-0.cluster.example.com.
_etcd-server-ssl._tcp.cluster.example.com. SRV 0 10 2380 etcd-1.cluster.example.com.
_etcd-server-ssl._tcp.cluster.example.com. SRV 0 10 2380 etcd-2.cluster.example.com.
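Before starting the install, resolve each record from a machine on the same network as the nodes. As a small sketch, this loop prints the dig commands to run; cluster.example.com matches the examples above, and test.apps is an arbitrary name used only to exercise the *.apps wildcard:

```shell
# Emit a dig command for each DNS record the installer depends on.
# Substitute your own cluster domain for cluster.example.com.
cluster=cluster.example.com
cmds=$(for rec in api api-int test.apps etcd-0 etcd-1 etcd-2; do
  echo "dig +short ${rec}.${cluster} A"
done)
echo "$cmds"
```

If your version still needs the etcd SRV records, check those too with `dig +short _etcd-server-ssl._tcp.cluster.example.com SRV`. Every command should return the IP (or SRV target) you configured; an empty answer means the record is missing from the DNS server the nodes actually use.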
Load Balancer
You need two load balancers (or a single one serving both sets of ports):
API Load Balancer:
| Frontend | Backend | Description |
|---|---|---|
| 6443 | Master:6443 | Kubernetes API |
| 22623 | Master:22623 | Machine Config |
Ingress Load Balancer:
| Frontend | Backend | Description |
|---|---|---|
| 443 | Worker:443 | HTTPS traffic |
| 80 | Worker:80 | HTTP traffic |
OpenShift Local Installation
OpenShift Local (formerly CodeReady Containers, CRC) is the fastest way to get started with OpenShift.
System Requirements
| Resource | Requirement |
|---|---|
| CPU | 4+ cores |
| Memory | 9+ GB (16 GB recommended) |
| Storage | 35+ GB |
| OS | macOS, Windows, Linux |
Download and Install
Step 1: Get Pull Secret
- Go to console.redhat.com
- Sign in with Red Hat account (free registration)
- Download Pull Secret file
Step 2: Download OpenShift Local
Download the version for your operating system from the same page.
Step 3: Install and Configure
# Linux (x86_64); macOS and Windows ship their own archives and installers
tar -xvf crc-linux-amd64.tar.xz
sudo mv crc-linux-*-amd64/crc /usr/local/bin/
# Setup (downloads VM image, about 4GB)
crc setup
# Start
crc start
During startup you'll be asked for the Pull Secret; paste in the contents of the file you downloaded.
Access Cluster
After startup completes:
# View login credentials
crc console --credentials
# Example output:
# To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
# To login as an admin, run 'oc login -u kubeadmin -p xxxxx-xxxxx-xxxxx-xxxxx https://api.crc.testing:6443'.
# Open Web Console
crc console
Common Commands
# Check status
crc status
# Stop
crc stop
# Delete (start fresh)
crc delete
# Adjust resources
crc config set memory 16384
crc config set cpus 6
OpenShift Local Limitations
- Single node, cannot test multi-node features
- Doesn't support full OperatorHub content
- Not suitable for heavy workloads
- Each startup takes a few minutes
IPI Installation (Cloud)
IPI is the recommended installation method for cloud environments. We'll use AWS as the example.
AWS Prerequisites
1. IAM Permissions
The installer needs a long list of IAM permissions. For a test run you can use an account with AdministratorAccess; for production, scope a dedicated policy to just what the installer needs.
2. Install and Configure AWS CLI
aws configure
# Enter Access Key, Secret Key, Region
3. Download Installer
# Download from mirror.openshift.com
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz
tar -xvf openshift-install-linux.tar.gz
sudo mv openshift-install /usr/local/bin/
Create install-config.yaml
# Interactive config file creation
openshift-install create install-config --dir=my-cluster
The installer will prompt for:
- SSH Public Key (for debug node access)
- Platform (choose AWS)
- Region
- Base Domain (e.g., example.com)
- Cluster Name (e.g., my-cluster)
- Pull Secret
Generated install-config.yaml example:
apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
platform:
  aws:
    region: ap-northeast-1
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: m5.xlarge
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m5.large
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
pullSecret: '{"auths":...}'
sshKey: 'ssh-rsa AAAA...'
Execute Installation
# Backup install-config.yaml (installation will delete it)
cp my-cluster/install-config.yaml my-cluster/install-config.yaml.bak
# Start installation (about 30-45 minutes)
openshift-install create cluster --dir=my-cluster --log-level=info
Installation process:
- Create VPC, Subnet, Security Group
- Create Route53 DNS records
- Create EC2 instances (Bootstrap → Master → Worker)
- Wait for Bootstrap to complete
- Wait for Operators to be ready
Verify Installation
# Set kubeconfig
export KUBECONFIG=my-cluster/auth/kubeconfig
# Verify nodes
oc get nodes
# Verify Operators
oc get clusteroperators
# Get Console URL
oc whoami --show-console
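A healthy cluster shows every operator with AVAILABLE=True and DEGRADED=False. As a sketch of how to spot trouble quickly, here's an awk filter applied to sample output; the rows below are made up for illustration, and in practice you would pipe the real `oc get clusteroperators` output into the same filter:

```shell
# Filter operators reporting DEGRADED=True from (sample) clusteroperator output.
sample='NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.14.0    True        False         False      10m
ingress          4.14.0    True        False         True       5m
kube-apiserver   4.14.0    True        False         False      12m'
# NR > 1 skips the header row; column 5 is DEGRADED.
degraded=$(echo "$sample" | awk 'NR > 1 && $5 == "True" { print $1 }')
echo "Degraded operators: ${degraded:-none}"
```

Against a real cluster: `oc get clusteroperators --no-headers | awk '$5 == "True" { print $1 }'` gives the same result without the sample data.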
UPI Installation (Bare Metal/VMware)
UPI installation requires more preparation work but gives complete control over every aspect.
Infrastructure Preparation
1. Prepare VMs or Bare Metal
| Role | Count | Spec |
|---|---|---|
| Bootstrap | 1 | 4 CPU, 16GB RAM, 100GB |
| Master | 3 | 4 CPU, 16GB RAM, 100GB |
| Worker | 2+ | 2 CPU, 8GB RAM, 100GB |
2. Configure DNS
Set up all DNS records per the "Environment Requirements" section above.
3. Configure Load Balancer
Can use HAProxy:
# /etc/haproxy/haproxy.cfg
frontend api
    bind *:6443
    mode tcp
    default_backend api-backend

backend api-backend
    mode tcp
    balance roundrobin
    server bootstrap 192.168.1.10:6443 check
    server master-0 192.168.1.11:6443 check
    server master-1 192.168.1.12:6443 check
    server master-2 192.168.1.13:6443 check

frontend machine-config
    bind *:22623
    mode tcp
    default_backend machine-config-backend

backend machine-config-backend
    mode tcp
    balance roundrobin
    server bootstrap 192.168.1.10:22623 check
    server master-0 192.168.1.11:22623 check
    server master-1 192.168.1.12:22623 check
    server master-2 192.168.1.13:22623 check
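The configuration above covers only the API (6443) and Machine Config (22623) ports; Ingress traffic still needs its own load balancing to the Workers. A minimal sketch, assuming Workers at 192.168.1.21 and 192.168.1.22 (substitute your own addresses):

```
frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-backend

backend ingress-https-backend
    mode tcp
    balance source
    server worker-0 192.168.1.21:443 check
    server worker-1 192.168.1.22:443 check

frontend ingress-http
    bind *:80
    mode tcp
    default_backend ingress-http-backend

backend ingress-http-backend
    mode tcp
    balance source
    server worker-0 192.168.1.21:80 check
    server worker-1 192.168.1.22:80 check
```

TLS is terminated by the OpenShift Router, not the load balancer, which is why these sections use mode tcp; balance source keeps a given client on one Router, though roundrobin also works.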
Generate Ignition Config Files
Ignition is the first-boot configuration format used by RHCOS; it tells each node how to configure itself when it boots.
# Create install-config.yaml
openshift-install create install-config --dir=my-cluster
# Generate manifests
openshift-install create manifests --dir=my-cluster
# Generate Ignition files
openshift-install create ignition-configs --dir=my-cluster
This produces three files:
- bootstrap.ign: for the Bootstrap node
- master.ign: for the Master nodes
- worker.ign: for the Worker nodes
Note that the certificates embedded in these files expire after 24 hours, so boot the nodes soon after generating them.
Set Up Web Server
Nodes need to fetch Ignition files via HTTP:
# Simply use Python
cd my-cluster
python3 -m http.server 8080
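Ignition files are plain JSON, and a truncated or corrupted download is a common cause of bootstrap failure, so it's worth validating them before serving. A sketch using python3's built-in json module; the sample file created here is a minimal stand-in, not a real Ignition config, and in practice you would point the loop at my-cluster/*.ign:

```shell
# Create a minimal stand-in .ign file and verify it parses as JSON.
printf '{"ignition":{"version":"3.2.0"}}' > /tmp/sample.ign
result=ok
for f in /tmp/sample.ign; do
  python3 -m json.tool "$f" > /dev/null 2>&1 || { echo "INVALID: $f"; result=bad; }
done
echo "ignition check: $result"
```

After the web server is up, also fetch each file once with curl from another machine to confirm nodes will actually be able to reach it.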
Bootstrap Process
1. Start Bootstrap Node
Boot with RHCOS ISO, add to kernel parameters:
coreos.inst.install_dev=/dev/sda
coreos.inst.ignition_url=http://192.168.1.1:8080/bootstrap.ign
2. Start Master Nodes
Same method, but point to master.ign.
3. Wait for Bootstrap to Complete
openshift-install wait-for bootstrap-complete --dir=my-cluster --log-level=info
Once you see "Bootstrap Complete," remove the Bootstrap node from the load balancer; the machine itself can then be deleted.
4. Start Worker Nodes
Boot pointing to worker.ign.
Approve CSRs
Worker nodes need certificate request approval when joining:
# View pending CSRs
oc get csr
# Approve all
oc get csr -o name | xargs oc adm certificate approve
Each Worker submits two CSRs as it joins (first node-bootstrapper, then node); both must be approved.
Stuck on UPI installation? Infrastructure configuration has many nuances. Book architecture consultation, let experts help solve your problems.
Post-Installation Configuration
Web Console Access
After installation completes, access via browser:
# Get Console URL
oc whoami --show-console
# Get kubeadmin password
cat my-cluster/auth/kubeadmin-password
Login with kubeadmin account.
Create Proper Admin Account
kubeadmin is a temporary account. Recommend creating a proper administrator:
# Create HTPasswd file
htpasswd -c -B -b users.htpasswd admin MySecurePassword
# Create Secret
oc create secret generic htpass-secret \
  --from-file=htpasswd=users.htpasswd \
  -n openshift-config
# Configure OAuth
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF
# Give admin cluster-admin permissions
oc adm policy add-cluster-role-to-user cluster-admin admin
# Test login
oc login -u admin -p MySecurePassword
Delete kubeadmin
After confirming new admin can login, delete kubeadmin:
oc delete secrets kubeadmin -n kube-system
Node Labels
Add labels to nodes for later scheduling:
# View nodes
oc get nodes
# Add labels
oc label node worker-0 node-role.kubernetes.io/app=
oc label node worker-1 node-role.kubernetes.io/infra=
# View labels
oc get nodes --show-labels
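These labels pay off when scheduling workloads. For example, a Deployment can be pinned to the app nodes with a nodeSelector; the manifest below is illustrative, with placeholder names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Schedule only onto nodes labeled node-role.kubernetes.io/app=
      nodeSelector:
        node-role.kubernetes.io/app: ""
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest
```

The same idea keeps infra components (router, monitoring) on the infra-labeled nodes, which matters for subscription accounting as well as isolation.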
Worker Node Scaling
Add Workers (IPI)
Clusters installed with IPI scale through MachineSets:
# View MachineSets
oc get machinesets -n openshift-machine-api
# Adjust replica count
oc scale machineset my-cluster-worker-ap-northeast-1a \
  --replicas=4 \
  -n openshift-machine-api
Add Workers (UPI)
UPI requires manual node addition:
- Prepare a new VM or bare-metal machine
- Boot it with worker.ign
- Approve its CSRs
# Approve new node's CSR
oc get csr | grep Pending
oc adm certificate approve <csr-name>
Auto Scaling
Configure Cluster Autoscaler for automatic node scaling:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  scaleDown:
    enabled: true
    delayAfterAdd: 10m
  resourceLimits:
    maxNodesTotal: 20
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-ap-northeast-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 2
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: my-cluster-worker-ap-northeast-1a
Common Installation Problems
6443 Connection Failure
Symptom: Cannot connect to API Server (port 6443)
Troubleshoot:
- Check DNS: nslookup api.cluster.example.com
- Check the load balancer: confirm 6443 is being forwarded
- Check the firewall: confirm 6443 is open
# Test connection
curl -k https://api.cluster.example.com:6443/version
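To separate DNS problems from firewall problems, a plain TCP connect test helps before reaching for curl. A sketch using bash's built-in /dev/tcp; the helper below is not part of any OpenShift tooling, and port 9 is used in the example only because it is normally closed:

```shell
# Check raw TCP reachability of host:port via bash's /dev/tcp device.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}
# Example against a port that is normally closed:
check_port 127.0.0.1 9
```

If `check_port api.cluster.example.com 6443` reports open but curl still fails, the problem is above TCP (certificates, the API server itself); if it reports closed, look at DNS, the load balancer, or the firewall.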
Bootstrap Failure
Symptom: Stuck at "Waiting for bootstrap to complete"
Troubleshoot:
# SSH into Bootstrap node
ssh core@bootstrap-ip
# View bootkube logs
journalctl -b -f -u bootkube.service
# View container status
sudo crictl ps -a
Common causes:
- DNS configuration errors (most common)
- Ignition file download failure
- Firewall blocking communication
Certificate Problems
Symptom: Nodes can't join, CSR doesn't appear
Troubleshoot:
# Check time sync
timedatectl
# Check machine-config-server
curl -k https://api-int.cluster.example.com:22623/config/worker
Ensure:
- All node times are synchronized (NTP)
- api-int DNS resolves correctly
- Port 22623 is accessible
Unhealthy Operators
Symptom: oc get clusteroperators shows Degraded
Troubleshoot:
# View specific Operator status
oc describe clusteroperator authentication
# View related Pods
oc get pods -n openshift-authentication
# View Pod logs
oc logs -n openshift-authentication deployment/oauth-openshift
FAQ
Q1: What's the difference between OpenShift Local and production version?
OpenShift Local is a single-node compact version with complete features but limited resources. Main differences: (1) Only one node, cannot test multi-node features; (2) Some Operators not supported; (3) Performance limited by local resources; (4) Not suitable for production workloads. Suitable for learning and development testing.
Q2: Should I choose IPI or UPI?
For cloud environments (AWS, Azure, GCP), prioritize IPI—much simpler. UPI suits: (1) Bare metal deployment; (2) VMware environments; (3) Special network requirements (like can't use cloud DNS); (4) Need complete infrastructure control. If no special requirements, IPI saves time and effort.
Q3: How long does installation take?
- OpenShift Local: First-time setup about 15-20 minutes
- IPI (Cloud): About 30-45 minutes
- UPI (Bare Metal): 2-4 hours (including infrastructure preparation)
Actual time depends on network speed, hardware performance, and whether everything works on the first try.
Q4: Do I have to configure DNS myself?
IPI in cloud environments automatically creates DNS (using Route53, Azure DNS, etc.). UPI must configure DNS yourself—this is the most error-prone step. Recommend using dedicated DNS server (like BIND) or cloud DNS service, not /etc/hosts.
Q5: Can I install on a single node?
Yes. OpenShift supports Single Node OpenShift (SNO), suitable for edge environments or resource-limited scenarios. But SNO has no high availability, not recommended for production core systems.
OpenShift Installation is Just the First Step
The architecture design that follows matters even more. Network planning, storage configuration, security settings: every decision affects the platform's later stability and scalability.
Book architecture consultation, ensure your container platform starts in the right direction.