OpenShift Advanced Features: ACM, ACS, LDAP, Authentication Configuration Complete Guide [2026]
![OpenShift Advanced Features: ACM, ACS, LDAP, Authentication Configuration Complete Guide [2026]](/images/blog/openshift/openshift-advanced-features-hero.webp)
You've built your basic OpenShift cluster. What comes next?
Enterprise environments need more: multi-cluster management, advanced security, authentication integration, and auto-scaling. These advanced features are what elevate OpenShift from "functional" to "enterprise-grade."
This article covers the most commonly used enterprise advanced feature configurations to help you build a truly enterprise-grade container platform. If you're not familiar with OpenShift basics, we recommend first reading the OpenShift Complete Guide.
Multi-Cluster Management (ACM)
What is ACM
Advanced Cluster Management (ACM) is Red Hat's multi-cluster management solution. When you have 2 or more OpenShift clusters, ACM helps you manage them centrally.
Problems ACM Solves:
- Multiple clusters managed separately with inconsistent configurations
- Deploying applications across multiple clusters is cumbersome
- Cannot centrally monitor all cluster status
- Security policies are difficult to enforce across clusters
Hub and Managed Cluster Architecture
ACM uses a Hub-Spoke architecture:
                ┌──────────────────┐
                │   Hub Cluster    │
                │ (ACM installed)  │
                └────────┬─────────┘
                         │
     ┌───────────────────┼───────────────────┐
     │                   │                   │
     ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Managed Cluster │ │ Managed Cluster │ │ Managed Cluster │
│  (Production)   │ │    (Testing)    │ │  (Development)  │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Hub Cluster:
- Installs ACM Operator
- Manages all Managed Clusters
- Executes policies and application deployments
Managed Cluster:
- Clusters managed by Hub
- Can be OpenShift or Kubernetes
- Automatically installs klusterlet agent
Installing ACM
Step 1: Install ACM Operator
# Create Namespace
oc create namespace open-cluster-management
# Install ACM from OperatorHub
# Or use CLI
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: advanced-cluster-management
namespace: open-cluster-management
spec:
channel: release-2.10
installPlanApproval: Automatic
name: advanced-cluster-management
source: redhat-operators
sourceNamespace: openshift-marketplace
EOF
Step 2: Create MultiClusterHub
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
name: multiclusterhub
namespace: open-cluster-management
spec: {}
Importing Clusters
Import from Web Console:
- In the ACM Console, go to Infrastructure → Clusters
- Click Import cluster
- Enter cluster name and labels
- Copy the generated YAML
- Execute on target cluster
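After copying the generated YAML, the apply-and-verify flow typically looks like this (the manifest file names below follow the console's download defaults and may differ in your version):

```shell
# On the target cluster, apply the import manifests generated by the Hub
oc apply -f klusterlet-crd.yaml
oc apply -f import.yaml

# Back on the Hub, verify the cluster registered successfully
oc get managedcluster production-cluster
# The conditions HubAcceptedManagedCluster and
# ManagedClusterConditionAvailable should both report True
```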
Command Line Import:
# Create ManagedCluster on Hub cluster
cat <<EOF | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
name: production-cluster
labels:
cloud: AWS
env: production
spec:
hubAcceptsClient: true
EOF
Policy Management
ACM's power lies in Policy-based Governance:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: policy-namespace
namespace: open-cluster-management
spec:
remediationAction: enforce # inform or enforce
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: policy-namespace-prod
spec:
remediationAction: enforce
severity: high
object-templates:
- complianceType: musthave
objectDefinition:
apiVersion: v1
kind: Namespace
metadata:
name: production
A PlacementRule determines which clusters the policy applies to:
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: placement-production
namespace: open-cluster-management
spec:
clusterSelector:
matchLabels:
env: production
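Note that a Policy only takes effect once a PlacementBinding connects it to a PlacementRule. A minimal binding tying together the two resources above could look like:

```shell
cat <<EOF | oc apply -f -
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-namespace
  namespace: open-cluster-management
placementRef:
  name: placement-production
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
- name: policy-namespace
  kind: Policy
  apiGroup: policy.open-cluster-management.io
EOF
```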
Application Deployment
Deploy applications across clusters:
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
name: my-app
namespace: open-cluster-management
spec:
componentKinds:
- group: apps.open-cluster-management.io
kind: Subscription
selector:
matchLabels:
app: my-app
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
name: my-app-subscription
namespace: open-cluster-management
labels:
app: my-app
spec:
channel: my-channel/my-app-channel
placement:
placementRef:
name: placement-all-clusters
kind: PlacementRule
Advanced Security (ACS)
What is ACS
Advanced Cluster Security (ACS) is Red Hat's Kubernetes-native security platform. It provides complete security protection from build to runtime.
ACS Core Features:
| Feature | Description |
|---|---|
| Vulnerability Management | Scans Container Images for CVEs |
| Configuration Management | Checks K8s configurations against best practices |
| Runtime Protection | Detects anomalous behavior and threats |
| Network Segmentation | Visualizes network traffic, detects abnormal connections |
| Compliance Checking | CIS, NIST, PCI-DSS and other standards |
Installing ACS
Step 1: Install ACS Operator
# Create Namespace
oc create namespace stackrox
# Install Operator
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: rhacs-operator
namespace: openshift-operators
spec:
channel: stable
installPlanApproval: Automatic
name: rhacs-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
EOF
Step 2: Deploy Central
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
name: stackrox-central-services
namespace: stackrox
spec:
central:
exposure:
loadBalancer:
enabled: false
nodePort:
enabled: false
route:
enabled: true
persistence:
persistentVolumeClaim:
claimName: stackrox-db
egress:
connectivityPolicy: Online
scanner:
analyzer:
scaling:
autoScaling: Enabled
maxReplicas: 5
minReplicas: 2
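Once the Central CR is reconciled, you can retrieve the generated admin password and console URL. The Secret and Route names below are the operator's defaults; verify them in your namespace:

```shell
# Wait for Central to become available
oc -n stackrox rollout status deploy/central

# The Operator stores the generated admin password in a Secret
oc -n stackrox get secret central-htpasswd \
  -o go-template='{{index .data "password" | base64decode}}'

# Console URL (exposed via the Route enabled above)
oc -n stackrox get route central -o jsonpath='{.spec.host}'
```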
Step 3: Deploy SecuredCluster
Install on each cluster to be protected:
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
name: stackrox-secured-cluster-services
namespace: stackrox
spec:
clusterName: production-cluster
centralEndpoint: central.stackrox.svc:443
admissionControl:
listenOnUpdates: true
bypass: BreakGlassAnnotation
contactImageScanners: ScanIfMissing
timeoutSeconds: 20
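Before the SecuredCluster's sensors can connect, Central must issue an init bundle that is applied on the secured cluster. A typical flow (the bundle name is an example, and `roxctl` must be authenticated to Central, e.g. via `ROX_API_TOKEN`):

```shell
# Generate an init bundle on Central
roxctl -e "central.stackrox.svc:443" \
  central init-bundles generate production-cluster \
  --output-secrets cluster-init-bundle.yaml

# Apply it on the cluster to be secured, before creating SecuredCluster
oc -n stackrox apply -f cluster-init-bundle.yaml
```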
Vulnerability Scanning
ACS automatically scans all deployed Images:
# View vulnerability report
roxctl --insecure-skip-tls-verify \
-e "central.stackrox.svc:443" \
image scan --image nginx:latest
# Check if Image complies with policies
roxctl --insecure-skip-tls-verify \
-e "central.stackrox.svc:443" \
image check --image nginx:latest
Integrate with CI/CD Pipeline:
# Tekton Task example
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: acs-image-check
spec:
params:
- name: image
type: string
steps:
- name: image-check
image: registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8
script: |
roxctl image check \
--insecure-skip-tls-verify \
-e "$ACS_CENTRAL_ENDPOINT" \
--image $(params.image)
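What makes `roxctl image check` useful in a pipeline is its exit code: it exits non-zero when the image violates a policy with build-stage enforcement, which fails the TaskRun. A sketch of gating a deploy on it (`ACS_CENTRAL_ENDPOINT` and `IMAGE` are assumed to be injected from a Secret or pipeline parameter):

```shell
# Abort the pipeline on ACS policy violations
if ! roxctl image check \
    --insecure-skip-tls-verify \
    -e "$ACS_CENTRAL_ENDPOINT" \
    --image "$IMAGE"; then
  echo "Image failed ACS policy check, aborting deploy" >&2
  exit 1
fi
```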
Runtime Protection
Detects anomalous behavior inside containers:
Common Detection Rules:
- Shell execution inside container
- Modifying /etc/passwd
- Reading /etc/shadow
- Downloading and executing files
- Lateral movement attempts
Custom Policies:
# Configure in ACS Console
# System Policies → Create Policy
Name: Block Crypto Mining
Category: Anomalous Activity
Severity: Critical
Enforcement: Enforce
Conditions:
- Process Name contains "xmrig"
- OR Process Name contains "minerd"
Response: Kill Pod (runtime enforcement action)
Compliance Checking
ACS has built-in multiple compliance standards:
| Standard | Use Case |
|---|---|
| CIS Kubernetes Benchmark | General security baseline |
| NIST 800-190 | US government container guidelines |
| PCI DSS | Financial industry |
| HIPAA | Healthcare industry |
ACS security configuration requires considering false positive rates and performance impact. Book an architecture consultation and let us help you design your security strategy.
Authentication Configuration
OAuth Configuration
OpenShift uses OAuth Server to handle authentication. Configuration is in the OAuth CR:
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: my-idp
type: LDAP # or HTPasswd, OIDC, GitHub, etc.
mappingMethod: claim
# ... IDP specific configuration
Supported Identity Providers:
| Type | Description | Use Case |
|---|---|---|
| HTPasswd | File-based passwords | Test environments, small scale |
| LDAP | Directory service | Enterprise AD/LDAP |
| OIDC | OpenID Connect | Okta, Azure AD, Keycloak |
| GitHub | GitHub OAuth | Developer environments |
| GitLab | GitLab OAuth | GitLab integrated environments |
LDAP Integration
The most common enterprise integration method:
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: corporate-ldap
type: LDAP
mappingMethod: claim
ldap:
attributes:
id:
- dn
email:
- mail
name:
- cn
preferredUsername:
- uid
bindDN: "cn=openshift,ou=service,dc=company,dc=com"
bindPassword:
name: ldap-secret # Secret name
ca:
name: ldap-ca # CA certificate ConfigMap
insecure: false
url: "ldaps://ldap.company.com:636/ou=users,dc=company,dc=com?uid"
Create LDAP Secret:
oc create secret generic ldap-secret \
--from-literal=bindPassword='your-password' \
-n openshift-config
Create CA ConfigMap:
oc create configmap ldap-ca \
--from-file=ca.crt=/path/to/ca.crt \
-n openshift-config
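After the OAuth resource is applied, the authentication operator rolls the change out; you can then verify with a test login (the username and API URL below are placeholders):

```shell
# Watch the authentication operator roll out the new configuration
oc get clusteroperator authentication -w

# Test a login with an LDAP account
oc login -u jdoe https://api.cluster.example.com:6443

# Verify the Identity and User objects were created
oc get identity
oc get user jdoe
```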
Active Directory Integration
AD integration is a special case of LDAP:
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: active-directory
type: LDAP
mappingMethod: claim
ldap:
attributes:
id:
- sAMAccountName
email:
- mail
name:
- displayName
preferredUsername:
- sAMAccountName
bindDN: "CN=OpenShift Service,OU=Service Accounts,DC=corp,DC=company,DC=com"
bindPassword:
name: ad-bind-password
ca:
name: ad-ca
insecure: false
url: "ldaps://ad.corp.company.com:636/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=user)"
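Before applying the OAuth resource, you can reproduce the exact bind and search OpenShift will perform using `ldapsearch` (from the openldap-clients package; `jdoe` is a placeholder account):

```shell
# Bind as the service account and search with the same base DN,
# scope, and filter used in the OAuth url above
ldapsearch -H ldaps://ad.corp.company.com:636 \
  -D "CN=OpenShift Service,OU=Service Accounts,DC=corp,DC=company,DC=com" \
  -W \
  -b "DC=corp,DC=company,DC=com" \
  -s sub "(&(objectClass=user)(sAMAccountName=jdoe))" \
  sAMAccountName mail displayName
```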
OIDC Configuration
Integrate with Keycloak, Okta, Azure AD:
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: keycloak
type: OpenID
mappingMethod: claim
openID:
clientID: openshift
clientSecret:
name: keycloak-client-secret
claims:
preferredUsername:
- preferred_username
name:
- name
email:
- email
groups:
- groups
issuer: https://keycloak.company.com/realms/openshift
ca:
name: keycloak-ca
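The `clientSecret` above references a Secret in `openshift-config`; create it from the credential issued by your Keycloak client, and provide the CA as a ConfigMap if Keycloak uses a private certificate:

```shell
# Client secret issued by the Keycloak client named "openshift"
oc create secret generic keycloak-client-secret \
  --from-literal=clientSecret='<client-secret-from-keycloak>' \
  -n openshift-config

# CA certificate, only needed for a private/internal CA
oc create configmap keycloak-ca \
  --from-file=ca.crt=/path/to/keycloak-ca.crt \
  -n openshift-config
```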
HTPasswd Configuration
Suitable for test environments or small-scale deployments:
# Create htpasswd file
htpasswd -c -B -b users.htpasswd admin password123
htpasswd -B -b users.htpasswd developer dev123
# Create Secret
oc create secret generic htpass-secret \
--from-file=htpasswd=users.htpasswd \
-n openshift-config
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
name: cluster
spec:
identityProviders:
- name: htpasswd
type: HTPasswd
mappingMethod: claim
htpasswd:
fileData:
name: htpass-secret
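To add or change users later, extract the htpasswd data from the Secret, edit it, and write it back:

```shell
# Extract the current htpasswd file
oc get secret htpass-secret -n openshift-config \
  -o jsonpath='{.data.htpasswd}' | base64 -d > users.htpasswd

# Add or update a user
htpasswd -B -b users.htpasswd newuser newpass123

# Replace the Secret's data in place
oc set data secret/htpass-secret \
  --from-file=htpasswd=users.htpasswd \
  -n openshift-config
```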
Advanced RBAC Configuration
ClusterRole Design
Built-in ClusterRoles:
| Role | Permissions |
|---|---|
| cluster-admin | Full cluster permissions |
| admin | Full permissions within project |
| edit | Read/write within project (cannot modify RBAC) |
| view | Read-only within project |
Custom ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: namespace-admin
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["resourcequotas", "limitranges"]
verbs: ["*"]
- apiGroups: ["project.openshift.io"]
resources: ["projects"]
verbs: ["get", "list", "watch"]
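To put the custom ClusterRole to use, bind it and verify the resulting permissions (`alice` and `platform-team` are example subjects):

```shell
# Bind the custom role cluster-wide to a user or a group
oc adm policy add-cluster-role-to-user namespace-admin alice
oc adm policy add-cluster-role-to-group namespace-admin platform-team

# Verify the effective permissions
oc auth can-i create namespaces --as=alice   # expect: yes
oc auth can-i delete nodes --as=alice        # expect: no
```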
Project Permission Management
Assign Project Admin:
# Make john admin of my-project
oc adm policy add-role-to-user admin john -n my-project
# Give dev-team group edit permissions
oc adm policy add-role-to-group edit dev-team -n my-project
RoleBinding Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: admin-binding
namespace: my-project
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: john
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: dev-team
Group Sync
Automatically sync LDAP groups:
# ldap-sync-config.yaml
kind: LDAPSyncConfig
apiVersion: v1
url: ldaps://ldap.company.com:636
bindDN: cn=openshift,ou=service,dc=company,dc=com
bindPassword:
file: /etc/secrets/bindPassword
insecure: false
ca: /etc/ldap-ca/ca.crt
augmentedActiveDirectory:
groupsQuery:
baseDN: "ou=groups,dc=company,dc=com"
scope: sub
derefAliases: never
filter: (objectClass=group)
groupUIDAttribute: dn
groupNameAttributes:
- cn
usersQuery:
baseDN: "ou=users,dc=company,dc=com"
scope: sub
derefAliases: never
userNameAttributes:
- sAMAccountName
groupMembershipAttributes:
- memberOf
Execute Group Sync:
# Preview sync results
oc adm groups sync --sync-config=ldap-sync-config.yaml
# Actually execute sync
oc adm groups sync --sync-config=ldap-sync-config.yaml --confirm
# Set up periodic sync (CronJob)
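The periodic sync mentioned above can be sketched as a CronJob. This is an illustrative outline only: it assumes a ServiceAccount `ldap-group-syncer` with RBAC to manage Groups, plus a ConfigMap and Secrets providing the sync config, bind password, and CA at the paths referenced in `ldap-sync-config.yaml` (the namespace and resource names are examples):

```shell
cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ldap-group-sync
  namespace: openshift-authentication
spec:
  schedule: "*/30 * * * *"   # every 30 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ldap-group-syncer
          restartPolicy: Never
          containers:
          - name: sync
            image: registry.redhat.io/openshift4/ose-cli:latest
            command:
            - /bin/bash
            - -c
            - oc adm groups sync --sync-config=/etc/config/ldap-sync-config.yaml --confirm
            volumeMounts:
            - name: sync-config
              mountPath: /etc/config
            - name: bind-password
              mountPath: /etc/secrets
            - name: ldap-ca
              mountPath: /etc/ldap-ca
          volumes:
          - name: sync-config
            configMap:
              name: ldap-sync-config
          - name: bind-password
            secret:
              secretName: ldap-sync-bind-password
          - name: ldap-ca
            configMap:
              name: ldap-ca
EOF
```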
Principle of Least Privilege
Service Account Permission Restrictions:
# ServiceAccount that can only read ConfigMaps
apiVersion: v1
kind: ServiceAccount
metadata:
name: config-reader
namespace: my-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: configmap-reader
namespace: my-app
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: config-reader-binding
namespace: my-app
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: configmap-reader
subjects:
- kind: ServiceAccount
name: config-reader
namespace: my-app
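You can confirm the binding behaves as intended by impersonating the ServiceAccount with `oc auth can-i`:

```shell
# Allowed: reading ConfigMaps in my-app
oc auth can-i get configmaps -n my-app \
  --as=system:serviceaccount:my-app:config-reader   # expect: yes

# Denied: anything outside the granted rules
oc auth can-i get secrets -n my-app \
  --as=system:serviceaccount:my-app:config-reader   # expect: no
```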
Auto Scaling Configuration
Horizontal Pod Autoscaler (HPA)
Automatically adjust Pod count based on CPU/Memory usage:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-app-hpa
namespace: my-app
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleDown:
stabilizationWindowSeconds: 300 # 5 minute stabilization period
policies:
- type: Percent
value: 10
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
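Once applied, you can watch the HPA's view of current versus target utilization and its scaling decisions:

```shell
# TARGETS shows current/target utilization, REPLICAS the current count
oc get hpa my-app-hpa -n my-app -w

# Inspect scaling events and any metric-collection problems
oc describe hpa my-app-hpa -n my-app
```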
Vertical Pod Autoscaler (VPA)
Automatically adjust Pod CPU/Memory requests:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: my-app-vpa
namespace: my-app
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
updatePolicy:
updateMode: Auto # Off, Initial, Recreate, Auto
resourcePolicy:
containerPolicies:
- containerName: '*'
minAllowed:
cpu: 100m
memory: 128Mi
maxAllowed:
cpu: 4
memory: 8Gi
controlledResources:
- cpu
- memory
VPA Modes:
| Mode | Description |
|---|---|
| Off | Only provides recommendations, no auto-adjustment |
| Initial | Only sets at Pod creation, won't update existing Pods |
| Recreate | Deletes and recreates Pods to apply new settings |
| Auto | Assigns resources at creation and evicts running Pods to apply updates (currently equivalent to Recreate) |
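Regardless of mode, the recommendations VPA computes are exposed in the resource's status, which makes `Off` mode a safe way to gather sizing data before enabling automation (`jq` is assumed to be installed, purely for readability):

```shell
# View VPA's per-container target, lower bound, and upper bound
oc get vpa my-app-vpa -n my-app \
  -o jsonpath='{.status.recommendation.containerRecommendations}' | jq .
```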
Cluster Autoscaler
Automatically scale node count:
apiVersion: "autoscaling.openshift.io/v1"
kind: "ClusterAutoscaler"
metadata:
name: "default"
spec:
podPriorityThreshold: -10
resourceLimits:
maxNodesTotal: 100
cores:
min: 8
max: 400
memory:
min: 16
max: 1600
logVerbosity: 4
scaleDown:
enabled: true
delayAfterAdd: 10m
delayAfterDelete: 5m
delayAfterFailure: 3m
unneededTime: 5m
utilizationThreshold: "0.4"
Machine Autoscaler
Configure with MachineSet:
apiVersion: "autoscaling.openshift.io/v1beta1"
kind: "MachineAutoscaler"
metadata:
name: "worker-us-east-1a"
namespace: "openshift-machine-api"
spec:
minReplicas: 1
maxReplicas: 10
scaleTargetRef:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
name: cluster-name-worker-us-east-1a
Complete Auto Scaling Architecture:
          ┌─────────────────────────┐
          │   Cluster Autoscaler    │
          │  (Manages overall node  │
          │      count limits)      │
          └───────────┬─────────────┘
                      │
 ┌────────────────────┼────────────────────┐
 │                    │                    │
 ▼                    ▼                    ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│     Machine     │ │     Machine     │ │     Machine     │
│   Autoscaler    │ │   Autoscaler    │ │   Autoscaler    │
│    (Zone A)     │ │    (Zone B)     │ │    (Zone C)     │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
         │                   │                   │
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│   MachineSet    │ │   MachineSet    │ │   MachineSet    │
│   (worker-a)    │ │   (worker-b)    │ │   (worker-c)    │
└─────────────────┘ └─────────────────┘ └─────────────────┘
API Gateway (3scale)
What is 3scale
Red Hat 3scale API Management is an enterprise-grade API Gateway. It provides API management, traffic control, and a developer portal.
3scale Core Components:
| Component | Function |
|---|---|
| APIcast | API Gateway, handles traffic |
| Backend | Authorization and rate limiting logic |
| System | Management interface and configuration |
| Developer Portal | Developer self-service portal |
Installing 3scale
# Install 3scale Operator
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: 3scale-operator
namespace: openshift-operators
spec:
channel: threescale-2.14
installPlanApproval: Automatic
name: 3scale-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
EOF
Deploy APIManager:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
name: apimanager
namespace: 3scale
spec:
wildcardDomain: apps.cluster.example.com
resourceRequirementsEnabled: true
system:
fileStorage:
persistentVolumeClaim:
storageClassName: gp2
API Management
Create API Product:
- Go to 3scale Admin Portal
- Products → Create Product
- Configure Backend pointing to actual service
- Configure Application Plans (pricing plans)
- Publish API
Traffic Control Settings:
# Application Plan settings
- Rate limit: 1000 requests/hour
- Quota: 10000 requests/month
- Pricing: $0.01/request (over quota)
Developer Portal
3scale provides a self-service Developer Portal:
- API documentation (auto-generated)
- API Key application
- Usage dashboard
- Billing information
Service Mesh
OpenShift Service Mesh
OpenShift Service Mesh is based on Istio and provides service-to-service communication management:
Core Features:
- Traffic Management: Routing, load balancing, canary deployment
- Security: mTLS, authorization policies
- Observability: Distributed tracing, metrics collection
Installing Service Mesh
Step 1: Install Dependency Operators
# Install in order
# 1. OpenShift Elasticsearch Operator
# 2. Red Hat OpenShift distributed tracing platform (Jaeger)
# 3. Kiali Operator
# 4. Red Hat OpenShift Service Mesh Operator
Step 2: Create ServiceMeshControlPlane
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
name: basic
namespace: istio-system
spec:
version: v2.5
tracing:
type: Jaeger
sampling: 10000 # 100%
addons:
kiali:
enabled: true
grafana:
enabled: true
jaeger:
install:
storage:
type: Memory
Step 3: Create ServiceMeshMemberRoll
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
name: default
namespace: istio-system
spec:
members:
- my-app-namespace
- another-namespace
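One OpenShift-specific detail: unlike upstream Istio, Service Mesh does not inject sidecars namespace-wide. Each workload in a member namespace opts in with a pod template annotation (`my-app` is an example deployment):

```shell
# Opt the workload in to sidecar injection
oc patch deployment my-app -n my-app-namespace --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'

# Confirm the Envoy sidecar was added (pods show 2/2 containers ready)
oc get pods -n my-app-namespace
```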
Traffic Management
VirtualService Routing Configuration:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: reviews-route
namespace: my-app
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
weight: 90
- destination:
host: reviews
subset: v2
weight: 10
DestinationRule Load Balancing:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: reviews-destination
namespace: my-app
spec:
host: reviews
trafficPolicy:
connectionPool:
tcp:
maxConnections: 100
http:
http1MaxPendingRequests: 100
http2MaxRequests: 1000
loadBalancer:
simple: ROUND_ROBIN
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
Observability
- Kiali: Service topology visualization
- Jaeger: Distributed tracing
- Grafana: Metrics dashboards
# Get Kiali URL
oc get route kiali -n istio-system
# Get Jaeger URL
oc get route jaeger -n istio-system
FAQ
Q1: Can ACM and ACS be used together?
Yes, and it's recommended. ACM handles multi-cluster management, ACS handles security. ACM can uniformly deploy ACS SecuredCluster to all Managed Clusters, forming unified security monitoring. Install ACS Central on the Hub Cluster, then deploy SecuredCluster to all clusters via ACM Policy.
Q2: How to troubleshoot LDAP integration failures?
Common issues: (1) Cannot connect to LDAP Server, check firewall; (2) Incorrect bindDN or password; (3) CA certificate issues (LDAPS); (4) Search filter syntax errors. Troubleshooting command: oc logs -n openshift-authentication deployment/oauth-openshift. You can also use ldapsearch to test connectivity.
Q3: Can HPA and VPA be used simultaneously?
Yes, but be careful. We recommend VPA only manages memory while HPA manages CPU. Or use VPA in Off mode to only provide recommendations while HPA handles actual adjustments. Using both on the same resource (like CPU) may cause conflicts.
Q4: Does Service Mesh affect performance?
There's some impact. The delay added by Sidecar Proxy (Envoy) is typically 1-3ms. Acceptable for most applications, but evaluate for ultra-low latency requirements. You can adjust performance with the istio-proxy concurrency setting. You can also selectively exclude critical paths from the mesh.
Q5: How to choose between 3scale and Service Mesh?
Different positioning: 3scale is API Management, handling north-south traffic (external to internal), providing API billing, developer portal, etc. Service Mesh handles east-west traffic (between services), providing mTLS, observability. Many enterprises use both: 3scale at the edge, Service Mesh internally.
Advanced Features are Key to Enterprise Deployment
Multi-cluster management and advanced security let OpenShift deliver maximum value, but configuration complexity is also high.
Book an architecture consultation and let us help you plan a complete enterprise architecture.