
OpenShift Installation Tutorial: From Local to Production Environment Complete Guide [2026]

12 min read
#OpenShift #Installation Tutorial #IPI #UPI #OpenShift Local


OpenShift installation is the first hurdle for many people. Looking at the official documentation's prerequisites list—DNS, load balancers, certificates—you want to give up before even starting.

But it's actually not that hard. This tutorial takes you from the simplest local environment all the way to production environment deployment. If you're not familiar with OpenShift, we recommend first reading OpenShift Complete Guide.


Installation Methods Overview

IPI vs UPI

OpenShift has two main installation methods:

| Method | Full Name | Description | Use Case |
| --- | --- | --- | --- |
| IPI | Installer-Provisioned Infrastructure | Installer automatically creates infrastructure | Cloud environments |
| UPI | User-Provisioned Infrastructure | User prepares infrastructure first | Bare metal, VMware, special requirements |

IPI Characteristics:

  • Simplest, almost one-click installation
  • Installer automatically creates VMs, network, load balancers
  • Suitable for AWS, Azure, GCP and other major clouds

UPI Characteristics:

  • Need to prepare all infrastructure first
  • Complete control over every aspect
  • Suitable for bare metal, VMware, environments with special network requirements

Installation Options Overview

| Option | Complexity | Purpose | Cost |
| --- | --- | --- | --- |
| OpenShift Local | Low | Local development testing | Free |
| Developer Sandbox | Low | Cloud learning environment | Free |
| IPI (Cloud) | Medium | Cloud production environment | Cloud fees + Subscription |
| UPI (Bare Metal) | High | Self-built data center | Hardware + Subscription |
| ROSA/ARO | Low | Managed services | Managed fees |

Pre-Installation Planning

Regardless of installation method, think through these first:

Cluster Size:

  • Development/Test: 3 Master + 2 Worker
  • Small Production: 3 Master + 3 Worker
  • Medium/Large Production: 3 Master + 6+ Worker

Network Planning:

  • Pod CIDR: Default 10.128.0.0/14
  • Service CIDR: Default 172.30.0.0/16
  • These must not overlap with your existing network ranges
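A quick way to sanity-check the CIDR plan is to compare the defaults against your existing ranges before you commit to them. The helper below is a minimal sketch (pure bash, IPv4 only); the 10.0.0.0/16 range in the example is a hypothetical existing corporate network, not something from this tutorial's setup.

```shell
#!/usr/bin/env bash
# Minimal IPv4 CIDR overlap check (sketch; IPv4 only).

ip2int() {            # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

cidrs_overlap() {     # usage: cidrs_overlap 10.128.0.0/14 10.0.0.0/16
  local a_ip=${1%/*} a_len=${1#*/} b_ip=${2%/*} b_len=${2#*/}
  local min_len=$(( a_len < b_len ? a_len : b_len ))
  local mask=$(( (0xFFFFFFFF << (32 - min_len)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$a_ip") & mask )) -eq $(( $(ip2int "$b_ip") & mask )) ]
}

# Default Pod CIDR vs. a hypothetical existing 10.0.0.0/16 network:
if cidrs_overlap 10.128.0.0/14 10.0.0.0/16; then
  echo "conflict"
else
  echo "no conflict"   # 10.0.x.x lies outside 10.128.0.0-10.131.255.255
fi
```

Run the same check for both the Pod CIDR and the Service CIDR against every route your nodes can reach.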

DNS Planning:

  • API endpoint: api.cluster.example.com
  • Ingress wildcard: *.apps.cluster.example.com

Environment Requirements

Hardware Requirements

Control Plane Nodes (per node):

| Resource | Minimum | Recommended |
| --- | --- | --- |
| CPU | 4 vCPU | 8 vCPU |
| Memory | 16 GB | 32 GB |
| Storage | 100 GB SSD | 200 GB SSD |

Worker Nodes (per node):

| Resource | Minimum | Recommended |
| --- | --- | --- |
| CPU | 2 vCPU | 4+ vCPU |
| Memory | 8 GB | 16+ GB |
| Storage | 100 GB | Per application needs |

Bootstrap Node (temporary during install):

  • Same spec as Control Plane
  • Can be deleted after installation completes

Network Requirements

Required Ports:

| Port | Purpose | Source |
| --- | --- | --- |
| 6443 | Kubernetes API | All nodes, management |
| 22623 | Machine Config Server | Nodes |
| 443 | HTTPS Ingress | External traffic |
| 80 | HTTP Ingress | External traffic |

Inter-node Communication:

  • Between Control Plane: etcd communication (2379-2380)
  • All nodes: OVN-Kubernetes (6081, 9000-9999)
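You can probe the externally reachable ports from a client machine before and during the install. A minimal sketch using bash's built-in /dev/tcp (no nc needed); the hostnames are placeholders for your own DNS names, and 22623 should normally be reachable only from inside the cluster network, so "closed" from outside is expected there:

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's /dev/tcp redirection.
check_port() {   # usage: check_port <host> <port>
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open    $1:$2"
  else
    echo "closed  $1:$2"
  fi
}

# Placeholder hostnames -- substitute your cluster's DNS names.
check_port api.cluster.example.com 6443          # Kubernetes API
check_port console.apps.cluster.example.com 443  # HTTPS Ingress
check_port console.apps.cluster.example.com 80   # HTTP Ingress
```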

DNS Configuration

DNS is key to a successful installation—many failed installs get stuck here.

Required DNS Records:

# API endpoint (points to load balancer or Masters)
api.cluster.example.com.      A    192.168.1.10

# API internal (points to load balancer or Masters)
api-int.cluster.example.com.  A    192.168.1.10

# Ingress wildcard (points to Router)
*.apps.cluster.example.com.   A    192.168.1.20

# etcd (one per Master; required only by OpenShift 4.4 and earlier)
etcd-0.cluster.example.com.   A    192.168.1.11
etcd-1.cluster.example.com.   A    192.168.1.12
etcd-2.cluster.example.com.   A    192.168.1.13

# etcd SRV records (likewise only for 4.4 and earlier)
_etcd-server-ssl._tcp.cluster.example.com. SRV 0 10 2380 etcd-0.cluster.example.com.
_etcd-server-ssl._tcp.cluster.example.com. SRV 0 10 2380 etcd-1.cluster.example.com.
_etcd-server-ssl._tcp.cluster.example.com. SRV 0 10 2380 etcd-2.cluster.example.com.
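Before booting any node, verify these records actually resolve. A small sketch using getent (A/AAAA lookups only; cluster.example.com is a placeholder for your base domain):

```shell
#!/usr/bin/env bash
# Confirm the required A records resolve before installing.
# cluster.example.com is a placeholder -- substitute your base domain.
CLUSTER="cluster.example.com"

check_record() {      # prints OK/MISSING for one hostname
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "OK      $1"
  else
    echo "MISSING $1"
  fi
}

check_record "api.${CLUSTER}"
check_record "api-int.${CLUSTER}"
check_record "anything.apps.${CLUSTER}"   # wildcard: any name under *.apps must resolve
# If your version still needs etcd records, check those too:
for i in 0 1 2; do check_record "etcd-${i}.${CLUSTER}"; done
# getent does not cover SRV records; check them with e.g.
#   dig +short SRV "_etcd-server-ssl._tcp.${CLUSTER}"
```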

Load Balancer

You need two load balancers (or one that handles multiple ports):

API Load Balancer:

| Frontend | Backend | Description |
| --- | --- | --- |
| 6443 | Master:6443 | Kubernetes API |
| 22623 | Master:22623 | Machine Config |

Ingress Load Balancer:

| Frontend | Backend | Description |
| --- | --- | --- |
| 443 | Worker:443 | HTTPS traffic |
| 80 | Worker:80 | HTTP traffic |

OpenShift Local Installation

OpenShift Local (formerly CodeReady Containers, CRC) is the fastest way to get started with OpenShift.

System Requirements

| Resource | Requirement |
| --- | --- |
| CPU | 4+ cores |
| Memory | 9+ GB (16 GB recommended) |
| Storage | 35+ GB |
| OS | macOS, Windows, Linux |

Download and Install

Step 1: Get Pull Secret

  1. Go to console.redhat.com
  2. Sign in with Red Hat account (free registration)
  3. Download Pull Secret file

Step 2: Download OpenShift Local

Download the version for your operating system from the same page.

Step 3: Install and Configure

# macOS / Linux
tar -xvf crc-linux-amd64.tar.xz
sudo mv crc-linux-*-amd64/crc /usr/local/bin/

# Setup (downloads VM image, about 4GB)
crc setup

# Start
crc start

Setup will prompt for your Pull Secret; paste the contents of the file you just downloaded.

Access Cluster

After startup completes:

# View login credentials
crc console --credentials

# Example output:
# To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
# To login as an admin, run 'oc login -u kubeadmin -p xxxxx-xxxxx-xxxxx-xxxxx https://api.crc.testing:6443'.

# Open Web Console
crc console

Common Commands

# Check status
crc status

# Stop
crc stop

# Delete (start fresh)
crc delete

# Adjust resources
crc config set memory 16384
crc config set cpus 6

OpenShift Local Limitations

  • Single node, cannot test multi-node features
  • Doesn't support full OperatorHub content
  • Not suitable for heavy workloads
  • Each startup takes a few minutes

IPI Installation (Cloud)

IPI is the recommended installation method for cloud environments. This section walks through the process using AWS as the example.

AWS Prerequisites

1. IAM Permissions

The installer needs many IAM permissions. For simplicity, you can use Administrator permissions for testing.

2. Install AWS CLI

aws configure
# Enter Access Key, Secret Key, Region

3. Download Installer

# Download from mirror.openshift.com
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz
tar -xvf openshift-install-linux.tar.gz
sudo mv openshift-install /usr/local/bin/

Create install-config.yaml

# Interactive config file creation
openshift-install create install-config --dir=my-cluster

The installer will prompt for:

  • SSH Public Key (for debug node access)
  • Platform (choose AWS)
  • Region
  • Base Domain (e.g., example.com)
  • Cluster Name (e.g., my-cluster)
  • Pull Secret

Generated install-config.yaml example:

apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
platform:
  aws:
    region: ap-northeast-1
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: m5.xlarge
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m5.large
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
pullSecret: '{"auths":...}'
sshKey: 'ssh-rsa AAAA...'

Execute Installation

# Backup install-config.yaml (installation will delete it)
cp my-cluster/install-config.yaml my-cluster/install-config.yaml.bak

# Start installation (about 30-45 minutes)
openshift-install create cluster --dir=my-cluster --log-level=info

Installation process:

  1. Create VPC, Subnet, Security Group
  2. Create Route53 DNS records
  3. Create EC2 instances (Bootstrap → Master → Worker)
  4. Wait for Bootstrap to complete
  5. Wait for Operators to be ready

Verify Installation

# Set kubeconfig
export KUBECONFIG=my-cluster/auth/kubeconfig

# Verify nodes
oc get nodes

# Verify Operators
oc get clusteroperators

# Get Console URL
oc whoami --show-console
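For throwaway test clusters, the same installer can also destroy everything it created. A small helper wrapping the destroy command (the installer reads its saved state from the --dir directory, so keep my-cluster/ around until teardown finishes):

```shell
# Helper around the installer's destroy command (sketch).
# The installer reads its saved state from --dir, so do not delete
# the my-cluster directory until teardown has completed.
teardown() {
  openshift-install destroy cluster --dir=my-cluster --log-level=info
}
# Run `teardown` when you are finished with the test cluster.
```

Destroying the cluster removes the EC2 instances, VPC, and Route53 records the installer created, which stops the cloud charges.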

UPI Installation (Bare Metal/VMware)

UPI installation requires more preparation work but gives complete control over every aspect.

Infrastructure Preparation

1. Prepare VMs or Bare Metal

| Role | Count | Spec |
| --- | --- | --- |
| Bootstrap | 1 | 4 CPU, 16GB RAM, 100GB |
| Master | 3 | 4 CPU, 16GB RAM, 100GB |
| Worker | 2+ | 2 CPU, 8GB RAM, 100GB |

2. Configure DNS

Set up all DNS records per the "Environment Requirements" section above.

3. Configure Load Balancer

You can use HAProxy:

# /etc/haproxy/haproxy.cfg
frontend api
    bind *:6443
    default_backend api-backend

backend api-backend
    balance roundrobin
    server bootstrap 192.168.1.10:6443 check
    server master-0 192.168.1.11:6443 check
    server master-1 192.168.1.12:6443 check
    server master-2 192.168.1.13:6443 check

frontend machine-config
    bind *:22623
    default_backend machine-config-backend

backend machine-config-backend
    balance roundrobin
    server bootstrap 192.168.1.10:22623 check
    server master-0 192.168.1.11:22623 check
    server master-1 192.168.1.12:22623 check
    server master-2 192.168.1.13:22623 check
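The config above covers only the API side. The Ingress load balancer from the earlier table (443/80 to Workers) can be sketched in the same style; the worker IPs 192.168.1.21 and 192.168.1.22 below are placeholders for your actual worker addresses:

```
frontend ingress-https
    bind *:443
    default_backend ingress-https-backend

backend ingress-https-backend
    balance roundrobin
    server worker-0 192.168.1.21:443 check
    server worker-1 192.168.1.22:443 check

frontend ingress-http
    bind *:80
    default_backend ingress-http-backend

backend ingress-http-backend
    balance roundrobin
    server worker-0 192.168.1.21:80 check
    server worker-1 192.168.1.22:80 check
```

As with the API frontends, you will typically want `mode tcp` in a defaults section so TLS passes through to the Routers untouched.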

Generate Ignition Config Files

Ignition is CoreOS's first-boot provisioning format; it tells each node how to configure itself when it boots.

# Create install-config.yaml
openshift-install create install-config --dir=my-cluster

# Generate manifests
openshift-install create manifests --dir=my-cluster

# Generate Ignition files
openshift-install create ignition-configs --dir=my-cluster

This produces:

  • bootstrap.ign: For Bootstrap node
  • master.ign: For Master nodes
  • worker.ign: For Worker nodes

Set Up Web Server

Nodes need to fetch Ignition files via HTTP:

# Simply use Python
cd my-cluster
python3 -m http.server 8080

Bootstrap Process

1. Start Bootstrap Node

Boot from the RHCOS ISO and append these kernel parameters:

coreos.inst.install_dev=/dev/sda
coreos.inst.ignition_url=http://192.168.1.1:8080/bootstrap.ign

2. Start Master Nodes

Same method, but point to master.ign.

3. Wait for Bootstrap to Complete

openshift-install wait-for bootstrap-complete --dir=my-cluster --log-level=info

When you see "Bootstrap Complete," you can remove the Bootstrap node from the load balancer.

4. Start Worker Nodes

Boot pointing to worker.ign.

Approve CSRs

Worker nodes' certificate signing requests (CSRs) must be approved when they join the cluster:

# View pending CSRs
oc get csr

# Approve all
oc get csr -o name | xargs oc adm certificate approve

Each worker node submits CSRs twice (first node-bootstrapper, then node); both rounds need approval.
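Approving CSRs by hand gets tedious on larger rollouts. The go-template filter from the official docs can be wrapped in a small helper; a sketch, assuming a logged-in `oc` on the installer host:

```shell
# Approve every CSR that is still Pending (i.e. has no status yet).
approve_pending_csrs() {
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
}

# Workers request certificates in two rounds, so repeat until
# `oc get nodes` shows every worker Ready, e.g.:
#   while true; do approve_pending_csrs; sleep 15; done   # Ctrl-C when done
```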

Stuck on a UPI installation? Infrastructure configuration has many nuances. Book an architecture consultation and let experts help solve the problem.


Post-Installation Configuration

Web Console Access

After installation completes, access via browser:

# Get Console URL
oc whoami --show-console

# Get kubeadmin password
cat my-cluster/auth/kubeadmin-password

Log in with the kubeadmin account.

Create Proper Admin Account

kubeadmin is a temporary account. We recommend creating a permanent administrator:

# Create HTPasswd file
htpasswd -c -B -b users.htpasswd admin MySecurePassword

# Create Secret
oc create secret generic htpass-secret \
  --from-file=htpasswd=users.htpasswd \
  -n openshift-config

# Configure OAuth
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF

# Give admin cluster-admin permissions
oc adm policy add-cluster-role-to-user cluster-admin admin

# Test login
oc login -u admin -p MySecurePassword

Delete kubeadmin

After confirming the new admin can log in, delete kubeadmin:

oc delete secrets kubeadmin -n kube-system

Node Labels

Add labels to nodes for later scheduling:

# View nodes
oc get nodes

# Add labels
oc label node worker-0 node-role.kubernetes.io/app=
oc label node worker-1 node-role.kubernetes.io/infra=

# View labels
oc get nodes --show-labels

Worker Node Scaling

Add Workers (IPI)

IPI clusters scale by adjusting MachineSets:

# View MachineSets
oc get machinesets -n openshift-machine-api

# Adjust replica count
oc scale machineset my-cluster-worker-ap-northeast-1a \
  --replicas=4 \
  -n openshift-machine-api

Add Workers (UPI)

UPI requires manual node addition:

  1. Prepare new VM/bare metal
  2. Boot with worker.ign
  3. Approve the CSR:

# Approve new node's CSR
oc get csr | grep Pending
oc adm certificate approve <csr-name>

Auto Scaling

Configure Cluster Autoscaler for automatic node scaling:

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  scaleDown:
    enabled: true
    delayAfterAdd: 10m
  resourceLimits:
    maxNodesTotal: 20
---
apiVersion: autoscaling.openshift.io/v1
kind: MachineAutoscaler
metadata:
  name: worker-ap-northeast-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 2
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: my-cluster-worker-ap-northeast-1a

Common Installation Problems

6443 Connection Failure

Symptom: Cannot connect to API Server (port 6443)

Troubleshoot:

  1. Check DNS: nslookup api.cluster.example.com
  2. Check load balancer: confirm 6443 is forwarding
  3. Check firewall: confirm 6443 is open

# Test connection
curl -k https://api.cluster.example.com:6443/version

Bootstrap Failure

Symptom: Stuck at "Waiting for bootstrap to complete"

Troubleshoot:

# SSH into Bootstrap node
ssh core@bootstrap-ip

# View bootkube logs
journalctl -b -f -u bootkube.service

# View container status
sudo crictl ps -a

Common causes:

  • DNS configuration errors (most common)
  • Ignition file download failure
  • Firewall blocking communication

Certificate Problems

Symptom: Nodes can't join, CSR doesn't appear

Troubleshoot:

# Check time sync
timedatectl

# Check machine-config-server
curl -k https://api-int.cluster.example.com:22623/config/worker

Ensure:

  • All node times are synchronized (NTP)
  • api-int DNS resolves correctly
  • Port 22623 is accessible

Unhealthy Operators

Symptom: oc get clusteroperators shows Degraded

Troubleshoot:

# View specific Operator status
oc describe clusteroperator authentication

# View related Pods
oc get pods -n openshift-authentication

# View Pod logs
oc logs -n openshift-authentication deployment/oauth-openshift

FAQ

Q1: What's the difference between OpenShift Local and production version?

OpenShift Local is a single-node compact version with complete features but limited resources. Main differences: (1) Only one node, cannot test multi-node features; (2) Some Operators not supported; (3) Performance limited by local resources; (4) Not suitable for production workloads. Suitable for learning and development testing.

Q2: Should I choose IPI or UPI?

For cloud environments (AWS, Azure, GCP), prioritize IPI—much simpler. UPI suits: (1) Bare metal deployment; (2) VMware environments; (3) Special network requirements (like can't use cloud DNS); (4) Need complete infrastructure control. If no special requirements, IPI saves time and effort.

Q3: How long does installation take?

  • OpenShift Local: First-time setup about 15-20 minutes
  • IPI (Cloud): About 30-45 minutes
  • UPI (Bare Metal): 2-4 hours (including infrastructure preparation)

Actual time depends on network speed, hardware performance, and whether everything succeeds on the first try.

Q4: Do I have to configure DNS myself?

IPI in cloud environments automatically creates DNS (using Route53, Azure DNS, etc.). UPI must configure DNS yourself—this is the most error-prone step. Recommend using dedicated DNS server (like BIND) or cloud DNS service, not /etc/hosts.

Q5: Can I install on a single node?

Yes. OpenShift supports Single Node OpenShift (SNO), suitable for edge environments or resource-limited scenarios. But SNO has no high availability, not recommended for production core systems.


OpenShift Installation is Just the First Step

The architecture design that follows matters even more. Network planning, storage configuration, security settings—every decision affects later stability and scalability.

Book architecture consultation, ensure your container platform starts in the right direction.

