Kasten K10 Guide for beginners - Part 1
To make it easier for customers to get up and running with a test cluster (i.e. a lab setup), I decided to script the install process and make it as hands-off as possible. I've hosted it all on GitHub so it can be called via a one-liner. The script does the following:
- Update Ubuntu
- Install k3s single node cluster
- Install ZFS
- Install the OpenEBS ZFS operator, a ZFS StorageClass and a K10-annotated VolumeSnapshotClass
- Install Helm
- Install Kasten K10
- Expose the K10 dashboard on an external LoadBalancer port and display the IP/port to access it
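For reference, the ZFS StorageClass and K10-annotated VolumeSnapshotClass that the script pulls down are along these lines (an illustrative sketch only: the kasten-zfs and kasten-pool names and the zfs.csi.openebs.io driver match what the script uses, but the zfs-sc.yaml and zfs-snapclass.yaml files in the repo are the authoritative versions):

```yaml
# Illustrative StorageClass for the OpenEBS ZFS-LocalPV driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kasten-zfs
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "kasten-pool"   # the zpool the script creates
  fstype: "zfs"
---
# Illustrative VolumeSnapshotClass; the annotation is what tells K10 to use it
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: zfs-snapclass       # hypothetical name; check the repo file for the real one
  annotations:
    k10.kasten.io/is-snapshot-class: "true"
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```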
To run this cluster you will need an Ubuntu 22.04 LTS VM, either on a local hypervisor or in the cloud, with at least two CPU cores, 4GB RAM, a 30GiB boot disk and a 100GiB second disk (for the ZFS pool). At the start, the script will ask for the device path of the second disk, which you can find with the fdisk utility on the VM:
fdisk -l
This will be something like /dev/sdb. Once you have it, run the script with the following command as root on the VM (via sudo su):
curl -s https://raw.githubusercontent.com/jdtate101/jdtate101/main/zfs-install-script.sh | bash
This pulls down the script and executes it, performing all the steps above. The script itself looks like this:
#!/bin/bash
R='\033[0;31m' #'0;31' is Red's ANSI color code
G='\033[0;32m' #'0;32' is Green's ANSI color code
W='\033[1;37m' #'1;37' is White's ANSI color code
# configure needrestart to restart services automatically during apt upgrades (avoids the interactive prompt)
sed -i 's/#$nrconf{restart} = '"'"'i'"'"';/$nrconf{restart} = '"'"'a'"'"';/g' /etc/needrestart/needrestart.conf
echo -e "$R ____ ___ ___ __ ____ _____ "
echo -e "$R| |/ _|____ _______/ |_ ____ ____ | | _/_ \ _ \ "
echo -e "$R| < \__ \ / ___/\ __\/ __ \ / \ ______ | |/ /| / /_\ \ "
echo -e "$R| | \ / __ \_\___ \ | | \ ___/| | \ /_____/ | < | \ \_/ \ "
echo -e "$R|____|__ (____ /____ > |__| \___ >___| / |__|_ \|___|\_____ / "
echo -e "$R \/ \/ \/ \/ \/ \/ \/ "
echo -e "$G Simple K10 node installer.....!"
echo ""
echo -e "$G This will install a single node k3s cluster with zfs-csi driver, zfs storageclass and k10 annotated volumesnapshotclass"
echo -e "$G It will then install k10 via HELM and automatically expose the k10 dashboard on the cluster load balancer"
echo ""
echo -e "$G Enter the drive path of the extra volume (e.g. /dev/sdb). If you do not know it, exit this script with Ctrl-C and run \"fdisk -l\" to find the drive path: "
echo -e "$W "
read DRIVE < /dev/tty
sleep 5
echo ""
echo -e "$G Patching Ubuntu"
echo ""
sleep 5
echo -e "$W "
pro config set apt_news=false
apt update && apt upgrade -y && apt dist-upgrade -y && apt autoremove -y
sleep 5
echo -e "$G Installing k3s single node cluster"
echo -e "$W "
sleep 5
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable local-storage" sh -s -
sleep 5
echo ""
echo -e "$G Installing ZFS and configuring pool"
echo -e "$W "
sleep 5
apt install zfsutils-linux -y
zpool create kasten-pool $DRIVE
sleep 5
echo ""
echo -e "$G Installing ZFS Operator, StorageClass & VolumeSnapshotClass"
echo -e "$W "
sleep 5
kubectl apply -f https://openebs.github.io/charts/zfs-operator.yaml
curl -s https://raw.githubusercontent.com/jdtate101/jdtate101/main/zfs-sc.yaml > zfs-sc.yaml
curl -s https://raw.githubusercontent.com/jdtate101/jdtate101/main/zfs-snapclass.yaml > zfs-snapclass.yaml
kubectl apply -f zfs-sc.yaml
kubectl apply -f zfs-snapclass.yaml
kubectl patch storageclass kasten-zfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
sleep 5
echo ""
echo -e "$G Installing Helm"
echo -e "$W "
sleep 5
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x ./get_helm.sh
./get_helm.sh
mkdir -p /root/.kube
cp /etc/rancher/k3s/k3s.yaml /root/.kube/config
helm repo add kasten https://charts.kasten.io
sleep 5
echo ""
echo -e "$G Installing Kasten K10"
echo -e "$W "
sleep 5
kubectl create ns kasten-io
helm install k10 kasten/k10 --namespace kasten-io
echo ""
echo -e "$R Please wait 60 seconds while the pods spin up..."
echo -e "$R After this period the external URL for K10 access will be displayed (DO NOT exit this script)"
sleep 60
echo -e "$W "
pod=$(kubectl get po -n kasten-io |grep gateway | awk '{print $1}' )
kubectl expose po $pod -n kasten-io --type=LoadBalancer --port=8000 --name=k10-dashboard
ip=$(curl -s ifconfig.io)
port=$(kubectl get svc -n kasten-io |grep k10-dashboard | cut -d':' -f2- | cut -f1 -d'/' )
echo ""
echo -e "$G K10 dashboard can be accessed on http://$ip:$port/k10/#/"
echo -e "$W "
echo -e "$R It may take a while for all pods to become active. You can check with $G < kubectl get po -n kasten-io > $R and wait for the gateway pod to show 1/1 before opening the URL"
echo -e "$W "
exit
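At the end of the script, the gateway pod name and the dashboard NodePort are scraped out of plain kubectl output with awk and cut. As a quick illustration of what those pipelines actually do, here is the same parsing run against sample output lines (the values below are made up for the example; a real pod suffix and NodePort will differ):

```shell
# Sample lines in the shape kubectl prints them (illustrative values)
po_line='gateway-5c8b9f6d4-x7x2p   1/1   Running   0   2m'
svc_line='k10-dashboard   LoadBalancer   10.43.12.34   172.16.0.10   8000:31234/TCP   2m'

# Same extraction as the script: the pod name is the first column
pod=$(echo "$po_line" | awk '{print $1}')

# PORT(S) is "8000:31234/TCP"; take everything after the first ':' then strip the '/TCP'
port=$(echo "$svc_line" | cut -d':' -f2- | cut -f1 -d'/')

echo "$pod"   # gateway-5c8b9f6d4-x7x2p
echo "$port"  # 31234
```

Column scraping like this is fragile if kubectl's output layout shifts between versions; `kubectl get svc k10-dashboard -n kasten-io -o jsonpath='{.spec.ports[0].nodePort}'` would be a more robust way to read the NodePort.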
Additionally, you can install a simple web application to test with by applying the following deployment manifest:
kubectl create ns nginx
kubectl apply -f https://raw.githubusercontent.com/jdtate101/jdtate101/main/hol-app-manifest.yaml -n nginx
This installs a custom image of base Ubuntu with nginx and SSH installed. Nginx listens on port 81 and SSH on port 2222. The SSH credentials are:
Username: veeam
Password: xCFYkQ5bXR
The password can be changed once the container is up, but the change will not persist across reboots, as it is set in the Dockerfile for the image. I have another post on how this image is built, which you can use as a basis for your own. The manifest contains:
apiVersion: v1
kind: Service
metadata:
  name: useless-webserver
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - name: web
      protocol: TCP
      port: 81
    - name: ssh
      protocol: TCP
      port: 2222
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: useless-pv-claim
  labels:
    app: nginx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: useless-webserver
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: useless-pvc
          persistentVolumeClaim:
            claimName: useless-pv-claim
      containers:
        - name: nginx
          image: jdtate101/kasten-app:latest
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          volumeMounts:
            - name: useless-pvc
              mountPath: /var/www/html
          lifecycle:
            postStart:
              exec:
                command: ["sh", "-c", "chown -R 999:999 /var/www/html"]
You can log in to the container via SSH (use kubectl get svc -A to find the ports) and upload HTML into the web root directory (/var/www/html). All HTML is stored on the persistent volume, so changes survive reboots and application moves to other Kubernetes clusters.
Note: the base html directory will initially be empty, so if you browse to the IP:port you will get a 404 error. You need to upload some content before you can view a webpage.
If you SSH in and change to the web root (/var/www/html), running the following will pull down a basic index.html file:
curl -s https://raw.githubusercontent.com/jdtate101/jdtate101/main/index.html > index.html
You should then see a basic example page. Upload your own content to test backups and restores with K10.
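If you prefer to author content locally, you can create a minimal page on your workstation and copy it up over the SSH service; a sketch (the IP and NodePort are placeholders you would substitute from kubectl get svc -A):

```shell
# Write a minimal test page locally
cat > index.html <<'EOF'
<html><body><h1>Kasten K10 test page</h1></body></html>
EOF

# Upload it to the web root over the ssh service
# (placeholders: replace <ssh-nodeport> and <external-ip> with your own values)
# scp -P <ssh-nodeport> index.html veeam@<external-ip>:/var/www/html/
```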
You can then log on to K10 and configure it as needed, using the link provided by the script. Remember to set up an external location for backup export and K10 DR.
Refer to the Kasten docs for more info: http://docs.kasten.io