Kasten Backup Summary Dashboard
A self-hosted web dashboard for monitoring Kasten K10 backup jobs across multiple Kubernetes clusters. Built with Veeam branding, deployed as a single container on OpenShift, querying Kubernetes CRDs directly — no Kasten API credentials required.
https://github.com/jdtate101/kasten-summary-service


Features
- Multi-cluster summary — pass/fail run counts per cluster at a glance
- Cluster info panels — K8s/OCP version, node count, CPU, memory, pods, PVCs, VMs (KubeVirt), storage CSIs
- Drill-down detail view — backup jobs rolled up by run action, expandable to show individual namespace actions
- Failed job error messages — root cause extracted from Kasten's nested error chain
- Restore actions — last 30 days of restore activity per cluster
- Namespace health strip — last snapshot time per namespace with stale/failed indicators
- Search & filter — filter by policy name, namespace, or run action name
- Health check endpoint — `/api/healthz` for Uptime Kuma, Prometheus, or any HTTP monitor
- Export to PDF — browser print stylesheet for clean PDF export
- Dynamic cluster config — add clusters via ConfigMap/Secret, no rebuild required
- Auto-refresh — summary page refreshes every 30 seconds
- Dark or light mode
- Policy compliance view — historical compliance over the last 7 days
- VM view — summary of VM details with in-depth breakout
- Policies audit view — list of policy and preset alignment
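The "failed job error messages" feature walks Kasten's nested error chain to surface the root cause. As an illustration only, that extraction might look like the following Python sketch — the `message`/`cause` field names are assumptions for the example, not the actual CRD schema:

```python
def root_cause(error):
    """Follow a nested error chain to its innermost message.

    BackupAction failures often wrap the real error several levels
    deep; this sketch (field names assumed) walks the 'cause' links
    and returns the deepest message it finds.
    """
    message = error.get("message", "")
    cause = error.get("cause")
    while isinstance(cause, dict):
        # Each level may refine the message; keep the deepest one.
        message = cause.get("message", message)
        cause = cause.get("cause")
    return message
```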
Architecture
```
┌─────────────────────────────────────────────────────┐
│                  Alpine Container                   │
│                                                     │
│  ┌──────────────────┐     ┌───────────────────────┐ │
│  │   Nginx :8080    │────▶│     FastAPI :8000     │ │
│  │  (static files)  │     │    (backend proxy)    │ │
│  └──────────────────┘     └───────────┬───────────┘ │
│                                       │             │
└───────────────────────────────────────┼─────────────┘
                                        │
                 ┌──────────────────────┼──────────────────────┐
                 ▼                      ▼                      ▼
       K8s API (in-cluster)      K8s API (RKE2)         K8s API (K3s)
           Kasten CRDs             Kasten CRDs            Kasten CRDs
```
Nginx serves the static frontend on port 8080 and proxies /api/* requests to the FastAPI backend running on localhost:8000. The backend queries each cluster's Kubernetes API directly using service account tokens, reading Kasten's custom resources (CRDs) rather than using the Kasten web API.
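The static/proxy split described above might look roughly like this `nginx.conf` fragment. This is a sketch, not the shipped config — the document root path and header handling are assumptions; the actual file lives in `nginx/nginx.conf`:

```nginx
server {
    listen 8080;

    # Static frontend (index.html, app.js, style.css, logos)
    location / {
        root /app/frontend;          # path assumed for illustration
        try_files $uri /index.html;  # SPA fallback to the shell page
    }

    # Forward API calls to the FastAPI backend on localhost
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```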
Project Structure
```
kasten-dashboard/
├── Dockerfile              # Alpine-based container image
├── start.sh                # Entrypoint: starts uvicorn then nginx
├── README.md
│
├── backend/
│   ├── kasten.py           # Core K8s/Kasten API client + data parsing
│   ├── main.py             # FastAPI app — routes and endpoints
│   └── requirements.txt    # Python dependencies
│
├── frontend/
│   ├── index.html          # SPA shell — summary, detail, health pages
│   ├── app.js              # All frontend logic (routing, rendering, filters)
│   ├── style.css           # Veeam-branded stylesheet
│   ├── favicon.png         # Browser tab icon
│   └── logos/
│       ├── veeam.png       # Header logo
│       ├── openshift.svg   # Cluster card logo
│       ├── rke2.svg        # Cluster card logo
│       ├── k3s.svg         # Cluster card logo
│       └── fallback.svg    # Fallback if logo missing
│
├── nginx/
│   └── nginx.conf          # Nginx config — serves frontend, proxies /api/
│
└── k8s/
    ├── configmap.yaml      # Cluster URLs and configuration (CLUSTER_N_* vars)
    ├── secret.yaml         # SA tokens for remote clusters (CLUSTER_N_TOKEN)
    ├── deployment.yaml     # Kubernetes Deployment
    ├── route.yaml          # OpenShift Route (TLS edge termination)
    └── clusterrole.yaml    # RBAC for reading Kasten CRDs
```
Key Files Explained
| File | Purpose |
|---|---|
| `backend/kasten.py` | Discovers clusters from env vars, queries the K8s API for Kasten CRDs (`backupactions`, `exportactions`, `restoreactions`, `policies`, `policypresets`), parses error chains, fetches cluster metadata |
| `backend/main.py` | FastAPI routes: `/api/clusters`, `/api/summary`, `/api/cluster/{name}`, `/api/cluster-info/{name}`, `/api/healthz`, `/api/debug/{name}` |
| `frontend/app.js` | Loads the cluster list from the API, renders summary cards and detail tables, rolls up jobs by run action, handles search/filter, the namespace health strip, and the health page |
| `k8s/configmap.yaml` | Defines clusters using `CLUSTER_N_*` env vars — the only file you need to edit to add a cluster |
| `k8s/secret.yaml` | SA tokens for remote clusters — never commit real tokens |
Prerequisites
- OpenShift cluster where the dashboard will be hosted
- Kasten K10 installed in the `kasten-io` namespace on each cluster
- Harbor (or any OCI registry) accessible from OpenShift
- `oc` and `kubectl` CLI tools
- `docker` for building the image
Deployment
1. Prepare Remote Clusters (K3s, RKE2, or any remote K8s)
Run these steps on each remote cluster that the dashboard will monitor.
Create the Service Account
```shell
kubectl -n kasten-io create serviceaccount dashboardbff-svc
```
Create the ClusterRole
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kasten-dashboard-reader
rules:
- apiGroups: ["actions.kio.kasten.io"]
  resources: ["backupactions", "exportactions", "restoreactions"]
  verbs: ["get", "list"]
- apiGroups: ["config.kio.kasten.io"]
  resources: ["policies", "policypresets", "profiles"]
  verbs: ["get", "list"]
EOF
```
Bind the ClusterRole to the Service Account
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kasten-dashboard-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kasten-dashboard-reader
subjects:
- kind: ServiceAccount
  name: dashboardbff-svc
  namespace: kasten-io
EOF
```
Create a Non-Expiring Token Secret
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: dashboardbff-svc-token
  namespace: kasten-io
  annotations:
    kubernetes.io/service-account.name: dashboardbff-svc
type: kubernetes.io/service-account-token
EOF
```
Retrieve the Token
```shell
kubectl -n kasten-io get secret dashboardbff-svc-token \
  -o jsonpath='{.data.token}' | base64 -d
```
Save this token — you'll need it in the Secret below.
Get the API Server URL
```shell
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```
2. Prepare the OpenShift Cluster (host cluster)
Create the ClusterRole
```shell
oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kasten-dashboard-reader
rules:
- apiGroups: ["actions.kio.kasten.io"]
  resources: ["backupactions", "exportactions", "restoreactions"]
  verbs: ["get", "list"]
- apiGroups: ["config.kio.kasten.io"]
  resources: ["policies", "policypresets", "profiles"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["nodes", "pods", "persistentvolumeclaims"]
  verbs: ["get", "list"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list"]
- apiGroups: ["config.openshift.io"]
  resources: ["clusterversions"]
  verbs: ["get", "list"]
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["get", "list"]
EOF
```
Bind to the Default Service Account
The pod uses the default SA in kasten-io (it receives the in-cluster token automatically):
```shell
oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kasten-dashboard-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kasten-dashboard-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: kasten-io
EOF
```
3. Build and Push the Image
```shell
# Clone the repo
git clone https://github.com/youruser/kasten-dashboard.git
cd kasten-dashboard

# Build
docker build -t harbor.your.domain/kasten-dashboard/kasten-dashboard:latest .

# Login to your registry (if required)
docker login harbor.your.domain

# Push
docker push harbor.your.domain/kasten-dashboard/kasten-dashboard:latest
```
If your registry uses a self-signed certificate:
```shell
# Get the full cert chain
openssl s_client -connect harbor.your.domain:443 -showcerts </dev/null 2>/dev/null \
  | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > harbor-chain.crt

sudo mkdir -p /etc/docker/certs.d/harbor.your.domain
sudo cp harbor-chain.crt /etc/docker/certs.d/harbor.your.domain/ca.crt
sudo systemctl restart docker
```
4. Configure the ConfigMap
Edit k8s/configmap.yaml with your cluster API server URLs:
```yaml
data:
  CLUSTER_1_NAME: "openshift"
  CLUSTER_1_LABEL: "OpenShift"
  CLUSTER_1_API_URL: "https://kubernetes.default.svc"
  CLUSTER_1_IN_CLUSTER: "true"
  CLUSTER_1_LOGO: "openshift.svg"

  CLUSTER_2_NAME: "rke2"
  CLUSTER_2_LABEL: "RKE2"
  CLUSTER_2_API_URL: "https://192.168.1.99:6443"
  CLUSTER_2_LOGO: "rke2.svg"

  CLUSTER_3_NAME: "k3s"
  CLUSTER_3_LABEL: "K3s"
  CLUSTER_3_API_URL: "https://192.168.1.105:6443"
  CLUSTER_3_LOGO: "k3s.svg"
```
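The backend's discovery of these numbered variables might look roughly like the Python sketch below. The function name and default values are assumptions for illustration — the real parsing lives in `backend/kasten.py`:

```python
import os

def discover_clusters(env=os.environ):
    """Collect CLUSTER_N_* variables into a list of cluster dicts.

    Walks N = 1, 2, 3, ... until a CLUSTER_N_NAME is missing, so
    clusters added to the ConfigMap show up without a rebuild.
    """
    clusters = []
    n = 1
    while f"CLUSTER_{n}_NAME" in env:
        clusters.append({
            "name": env[f"CLUSTER_{n}_NAME"],
            "label": env.get(f"CLUSTER_{n}_LABEL", env[f"CLUSTER_{n}_NAME"]),
            "api_url": env[f"CLUSTER_{n}_API_URL"],
            "in_cluster": env.get(f"CLUSTER_{n}_IN_CLUSTER", "false") == "true",
            "logo": env.get(f"CLUSTER_{n}_LOGO", "fallback.svg"),
        })
        n += 1
    return clusters
```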
5. Configure the Secret
Edit k8s/secret.yaml with the tokens retrieved in Step 1. The token key must match CLUSTER_N_TOKEN where N corresponds to the cluster number in the ConfigMap:
```yaml
stringData:
  CLUSTER_2_TOKEN: "eyJhbGci..."   # RKE2 token
  CLUSTER_3_TOKEN: "eyJhbGci..."   # K3s token
```

⚠️ Never commit real tokens to git. Use `kubectl create secret` directly or a secrets manager.

6. Update the Image Reference
Edit k8s/deployment.yaml and set the image to your registry path:
```yaml
image: harbor.your.domain/kasten-dashboard/kasten-dashboard:latest
```
7. Deploy to OpenShift
```shell
oc apply -f k8s/configmap.yaml
oc apply -f k8s/secret.yaml
oc apply -f k8s/deployment.yaml
oc apply -f k8s/route.yaml
```
If your registry uses a self-signed cert, add it to OpenShift's pull trust first:
```shell
oc create configmap harbor-ca \
  --from-file=harbor.your.domain=/path/to/selfsignCA.crt \
  -n openshift-config

oc patch image.config.openshift.io/cluster --type=merge \
  -p '{"spec":{"additionalTrustedCA":{"name":"harbor-ca"}}}'
```
Verify deployment
```shell
oc get pods -n kasten-io | grep dashboard
oc get route -n kasten-io kasten-dashboard
```
Adding a New Cluster
No rebuild is required. All changes are made to the ConfigMap and Secret.
Step 1 — Prepare the new cluster
Follow the steps in Section 1 above on the new cluster to create the service account, ClusterRole, ClusterRoleBinding, and token secret.
Step 2 — Add a logo (optional)
Drop an SVG logo into frontend/logos/ (e.g. harvester.svg), then rebuild and push the image — this is the one optional step that does require a rebuild. If no logo is provided, the fallback SVG is used.
Step 3 — Update the ConfigMap
```shell
oc edit configmap kasten-dashboard-config -n kasten-io
```
Add the new cluster entries — increment N to the next available number:
```yaml
CLUSTER_4_NAME: "harvester"
CLUSTER_4_LABEL: "Harvester"
CLUSTER_4_API_URL: "https://192.168.1.200:6443"
CLUSTER_4_LOGO: "harvester.svg"
```
Step 4 — Add the token to the Secret
```shell
oc edit secret kasten-dashboard-secrets -n kasten-io
```
Add the base64-encoded token (or use stringData with kubectl apply):
```yaml
stringData:
  CLUSTER_4_TOKEN: "eyJhbGci..."
```
Step 5 — Restart the pod
```shell
oc rollout restart deployment/kasten-dashboard -n kasten-io
```
The new cluster will appear in the dashboard automatically.
Health Check / Monitoring Integration
The /api/healthz endpoint is designed for external monitoring tools.
| HTTP Status | Meaning |
|---|---|
| `200 OK` | All clusters reachable and Kasten API responding |
| `207 Multi-Status` | At least one cluster reachable (degraded) |
| `503 Service Unavailable` | No clusters reachable |
Example response:
```json
{
  "status": "healthy",
  "timestamp": "2026-04-02T08:00:00Z",
  "clusters": {
    "openshift": {
      "reachable": true,
      "k8s_version": "v1.31.14",
      "response_ms": 12,
      "kasten_api": "ok"
    }
  }
}
```
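The status-code mapping in the table above can be sketched in a few lines of Python. This mirrors the documented behavior rather than the actual code in `backend/main.py`, so treat it as illustrative:

```python
def healthz_status(clusters):
    """Map per-cluster reachability to the /api/healthz HTTP status.

    200 when every configured cluster is reachable, 207 when only
    some are (degraded), 503 when none are.
    """
    reachable = sum(1 for c in clusters.values() if c.get("reachable"))
    if clusters and reachable == len(clusters):
        return 200
    if reachable > 0:
        return 207
    return 503
```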
Uptime Kuma setup:
- Monitor type: HTTP(s)
- URL: `https://kasten-dashboard-kasten-io.apps.your.domain/api/healthz`
- Expected status: 200
API Reference
| Endpoint | Description |
|---|---|
| `GET /api/clusters` | List of configured clusters (id, label, logo) |
| `GET /api/summary` | Pass/fail run counts for all clusters |
| `GET /api/cluster/{name}` | Full job and restore detail for a cluster |
| `GET /api/cluster-info/{name}` | Cluster metadata (nodes, pods, PVCs, VMs, etc.) |
| `GET /api/healthz` | Health check for monitoring tools |
| `GET /api/health` | Simple liveness probe |
| `GET /api/debug/{name}?path=...` | Raw K8s API proxy for troubleshooting |
Updating the Dashboard
```shell
# Make code changes, then:
docker build -t harbor.your.domain/kasten-dashboard/kasten-dashboard:latest .
docker push harbor.your.domain/kasten-dashboard/kasten-dashboard:latest
oc rollout restart deployment/kasten-dashboard -n kasten-io
```
Troubleshooting
Cluster shows "Unreachable"
- Check the API URL in the ConfigMap is correct and reachable from the pod
- Verify the token in the Secret is valid and not expired
- Test connectivity:

```shell
oc exec -n kasten-io deployment/kasten-dashboard -- curl -k https://<api-url>/version
```

Cluster shows 0 jobs
- Verify the ClusterRole is bound correctly: `kubectl get clusterrolebinding kasten-dashboard-reader`
- Check RBAC:

```shell
oc exec -n kasten-io deployment/kasten-dashboard -- curl -k \
  -H "Authorization: Bearer <token>" \
  https://<api>/apis/actions.kio.kasten.io/v1alpha1/backupactions
```
Image pull errors on OpenShift
- Add your registry's CA cert to OpenShift's additional trusted CAs (see Step 7)
- Verify the Harbor project is public or configure an image pull secret

Pod logs

```shell
oc logs -n kasten-io deployment/kasten-dashboard
```

Debug a specific cluster API call

```shell
curl -k "https://<dashboard-route>/api/debug/openshift?path=apis/actions.kio.kasten.io/v1alpha1/backupactions"
```
Tech Stack
| Component | Technology |
|---|---|
| Container base | Python 3.12 Alpine |
| Backend | FastAPI + uvicorn |
| HTTP client | httpx (async) |
| Frontend | Vanilla JS, HTML5, CSS3 |
| Web server | Nginx |
| Process management | Shell script (start.sh) |
License
MIT