OpenShift installation
OpenShift-specific deployment guidance for self-hosted DALP installations. Covers SCC requirements, Route configuration, and OpenShift Data Foundation integration for enterprise OpenShift environments.
Overview
DALP fully supports deployment on Red Hat OpenShift Container Platform (OCP) and OKD (the community distribution of Kubernetes that powers OpenShift). This guide covers OpenShift-specific configuration and considerations beyond the standard Kubernetes deployment.
All DALP components comply with the restricted-v2 Security Context Constraints (SCC), the strictest default policy in OpenShift. No elevated privileges or custom SCCs are required.
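Once pods are running, you can confirm which SCC actually admitted each one: OpenShift records it in the `openshift.io/scc` pod annotation. A quick check (the `dalp-production` namespace is the example project used later in this guide):

```shell
# Show the SCC that admitted each pod; expect "restricted-v2" for DALP workloads
oc get pods -n dalp-production \
  -o custom-columns='NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc'
```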
Platform requirements
| Requirement | Minimum | Recommended | Notes |
|---|---|---|---|
| OpenShift version | 4.14 | 4.16+ | OCP or OKD supported |
| Worker nodes | 3 | 6+ | Spread across failure domains |
| vCPU per worker | 8 | 16 | More for indexing workloads |
| Memory per worker | 32 GB | 64 GB | More for blockchain nodes |
| Storage | ODF or CSI | ODF | Must support RWX for some services |
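To check existing workers against the sizing table, you can list allocatable capacity per node. This sketch assumes the default worker role label:

```shell
# List worker nodes with allocatable CPU and memory
oc get nodes -l node-role.kubernetes.io/worker \
  -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory'
```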
Security Context Constraints
DALP images run as non-root with no privilege escalation. All pods satisfy the restricted-v2 SCC without modification.
Required security settings
All DALP workloads include these security context settings:
```yaml
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  seccompProfile:
    type: RuntimeDefault
```
User ID handling
OpenShift assigns arbitrary UIDs from each project's UID range. DALP charts set runAsUser: null to allow OpenShift's admission controller to inject the appropriate UID. Do not specify explicit UIDs in your values overrides.
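For example, if a values override touches the security context at all, leave the UID unset so the injected value wins. A minimal sketch (the `dapp` key mirrors the chart structure shown later in this guide):

```yaml
dapp:
  securityContext:
    runAsNonRoot: true
    runAsUser: null   # let OpenShift inject a UID from the project's range
```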
Networking with Routes
OpenShift uses Routes instead of Kubernetes Ingress for external access. DALP charts automatically detect OpenShift and create Routes when the route.openshift.io/v1 API is available.
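You can check whether the Route API is available, which is the same signal the charts key off:

```shell
# Lists Route kinds when the route.openshift.io API group is present;
# on vanilla Kubernetes the group does not exist
oc api-resources --api-group=route.openshift.io
```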
Route configuration
Enable Routes in your values file:
```yaml
# DALP dApp Route
dapp:
  openShiftRoute:
    enabled: true
    host: dalp.apps.example.com
    tls:
      termination: edge
      insecureEdgeTerminationPolicy: Redirect

# Blockscout Route
blockscout:
  openShiftRoute:
    enabled: true
    host: explorer.apps.example.com
    tls:
      termination: edge
      insecureEdgeTerminationPolicy: Redirect
```
TLS termination options
| Option | Use case | Notes |
|---|---|---|
| edge | Standard HTTPS termination | Router terminates TLS, backend uses HTTP |
| passthrough | End-to-end encryption | TLS passes to pod, requires cert in pod |
| reencrypt | Internal encryption with own certs | Router terminates and re-encrypts to pod |
Most deployments should use edge termination with OpenShift's wildcard certificate or a custom certificate.
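If a service must hold the TLS session end to end, the Route values would instead use passthrough termination. A sketch, following the same `openShiftRoute` structure as the edge examples above; the serving certificate must then be mounted in the pod:

```yaml
dapp:
  openShiftRoute:
    enabled: true
    host: dalp.apps.example.com
    tls:
      termination: passthrough   # router forwards TLS unterminated to the pod
```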
Storage configuration
OpenShift Data Foundation (ODF)
For production deployments, OpenShift Data Foundation provides:
- Ceph-based distributed storage
- RWX support for shared volumes
- Built-in replication and recovery
- S3-compatible object storage
Storage class selection
```yaml
global:
  storageClass: ocs-storagecluster-ceph-rbd # ODF block storage
```
For object storage (RustFS alternative):
```yaml
rustfs:
  enabled: false # Use ODF object storage instead
objectStorage:
  endpoint: s3://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
  bucket: dalp-assets
  existingSecret: dalp-s3-credentials
```
Operator integration
OpenShift operators simplify management of supporting infrastructure.
CloudNativePG
The CloudNativePG operator works on OpenShift without modification. Install via OperatorHub or Helm.
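For the Helm path, installation follows the operator's upstream chart (repository URL, chart name, and namespace below are the CloudNativePG project's defaults, not DALP-specific):

```shell
# Install the CloudNativePG operator from its upstream Helm chart
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm install cnpg cnpg/cloudnative-pg \
  -n cnpg-system --create-namespace
```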
Velero
For backup and disaster recovery, Velero integrates with OpenShift through the OADP (OpenShift API for Data Protection) operator available in OperatorHub.
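When installed declaratively rather than through the OperatorHub UI, the OADP subscription looks roughly like this; the channel and namespace reflect Red Hat's defaults and may differ per OCP version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
spec:
  channel: stable
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```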
NetworkPolicy considerations
OpenShift Network Policies work identically to Kubernetes, with one addition: the OpenShift Router requires explicit ingress rules.
DALP charts automatically add router access when running on OpenShift:
```yaml
# Automatically included when route.openshift.io/v1 API is detected
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: ingress
```
Installation steps
Step 1: Prepare the project
```shell
# Create project (namespace)
oc new-project dalp-production

# Verify SCC assignment
oc get scc restricted-v2 -o yaml
```
Step 2: Add Helm repository
```shell
helm repo add settlemint https://harbor.settlemint.com/chartrepo/dalp
helm repo update
```
Step 3: Configure values
Create an OpenShift-specific values file by merging the base values-openshift.yaml with your environment configuration:
```yaml
# values-production.yaml
global:
  platform: openshift
  storageClass: ocs-storagecluster-ceph-rbd

# Enable Routes for all services
dapp:
  openShiftRoute:
    enabled: true
    host: dalp.apps.example.com

blockscout:
  openShiftRoute:
    enabled: true
    host: explorer.apps.example.com

# Disable Traefik (OpenShift Router handles ingress)
traefik:
  enabled: false
```
Step 4: Install the chart
```shell
helm install dalp settlemint/dalp \
  -n dalp-production \
  -f values-openshift.yaml \
  -f values-production.yaml
```
Step 5: Verify deployment
```shell
# Check pod status
oc get pods -n dalp-production

# Verify Routes
oc get routes -n dalp-production

# Check Route TLS
oc get route dalp -n dalp-production -o jsonpath='{.spec.tls.termination}'
```
Troubleshooting
Pod fails with SCC denied
Verify your deployment does not specify explicit UIDs:
```shell
oc get deployment <name> -o yaml | grep -A5 securityContext
```
Remove any runAsUser with explicit values. Use null or omit the field entirely.
Route not accessible
Check that the Route hostname resolves correctly:
```shell
oc get route <name> -o jsonpath='{.spec.host}'
nslookup <hostname>
```
Verify the router pod is running:
```shell
oc get pods -n openshift-ingress
```
Storage provisioning fails
Confirm the storage class exists and that PVCs are binding:
```shell
oc get storageclass
oc get pvc -n dalp-production
```
For ODF, verify the storage cluster is healthy:
```shell
oc get storagecluster -n openshift-storage
```
See also
- Prerequisites for infrastructure requirements
- Installation process for deployment steps
- High availability for HA/DR configurations