git clone https://github.com/ntap-johanneswagner/demostuff
cd demostuff/01prework
bash prework.sh
This sets up three aliases: rke1 (switch kubeconfig to rke1), rke2 (switch kubeconfig to rke2), and active (show which kubeconfig is currently active).
The san-eco and san StorageClass manifests are in there, along with two PVC manifests:
firstpvc.yaml will create a PVC on san-eco.
secondpvc.yaml tries to create an RWX PVC on san-eco, which will fail. Change the StorageClass or the access mode to fix it - the right StorageClass is "sc-nas-svm1"; see the sketch below.
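If you choose to fix it by switching the StorageClass, a corrected secondpvc.yaml could look roughly like this sketch (the PVC name and size are assumptions; the important part is pairing ReadWriteMany with sc-nas-svm1):

# sketch only - name and size are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: secondpvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: sc-nas-svm1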
cd /home/user/demostuff/02scandpvc
kubectl apply -f rke1_sc_saneco.yaml
kubectl apply -f rke1_sc_san.yaml
kubectl create namespace pvctest
kubectl apply -f firstpvc.yaml -n pvctest
kubectl apply -f secondpvc.yaml -n pvctest
kubectl get pvc -n pvctest
kubectl describe pvc secondpvc -n pvctest
cd /home/user/demostuff/03pacman
This is a simple application with a database and a fun game; we will use it later to demonstrate ACC.
kubectl create namespace pacman
cat pacman-mongodb-pvc.yaml
kubectl apply -f pacman-mongodb-pvc.yaml -n pacman
You can verify that the PVC is there and bound:
kubectl get pvc -n pacman
Now as the PVC is there, have a look at the file for the database deployment and create it afterwards:
cat pacman-mongodb-deployment.yaml
kubectl apply -f pacman-mongodb-deployment.yaml -n pacman
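If you cannot open the file right now, a MongoDB deployment of this kind usually looks like the sketch below. Treat it as an outline only - the image tag, labels, PVC name and mount path are assumptions, not necessarily what the lab file contains:

# sketch - image, labels and claimName are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4              # assumed image tag
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db       # MongoDB data directory
      volumes:
        - name: mongo-data
          persistentVolumeClaim:
            claimName: mongo-storage    # assumed name of the PVC created above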
You should be able to see the container running:
kubectl get pods -n pacman
We have a running container and we have storage for MongoDB; now we need a service so the app can access the database. Have a look at the file and create the service afterwards:
cat pacman-mongodb-service.yaml
kubectl apply -f pacman-mongodb-service.yaml -n pacman
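Conceptually the service just selects the MongoDB pods and exposes port 27017. A minimal sketch (the selector label is an assumption; the LoadBalancer type matches the sample output further below):

# sketch - selector label is an assumption
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  type: LoadBalancer
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017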
You should now have one service in your namespace:
kubectl get svc -n pacman
Let's continue with the Pac-Man application. We will start with the deployment, as there is no need for storage (remember: we will store the data in the database). Have a look at the file for the deployment and create it afterwards:
cat pacman-app-deployment.yaml
kubectl apply -f pacman-app-deployment.yaml -n pacman
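The key point here is that the deployment needs no volumes section at all - the game state lives in MongoDB, which the app reaches via the mongo service. A rough sketch (image, labels, container port and environment variable name are assumptions):

# sketch - image, port and env variable are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pacman
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pacman
  template:
    metadata:
      labels:
        app: pacman
    spec:
      containers:
        - name: pacman
          image: example/pacman-nodejs-app:latest   # assumed image
          ports:
            - containerPort: 8080                   # assumed container port
          env:
            - name: MONGO_SERVICE_HOST              # assumed variable name
              value: mongo                          # the service created in the previous step
      # no volumes: all persistent data lives in MongoDB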
You should be able to see the containers running:
kubectl get pods -n pacman
We have running containers, we have storage for MongoDB, and we have a connection between MongoDB and the Pac-Man application. Guess what is missing: a service to access Pac-Man. Have a look at the file for the service and create it afterwards:
cat pacman-app-service.yaml
kubectl apply -f pacman-app-service.yaml -n pacman
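As with MongoDB, this is a plain Service, this time exposing the web frontend. A minimal sketch (selector label and target port are assumptions; port 80 matches the 80:30552/TCP mapping in the sample output below):

# sketch - selector and targetPort are assumptions
apiVersion: v1
kind: Service
metadata:
  name: pacman
spec:
  type: LoadBalancer
  selector:
    app: pacman
  ports:
    - port: 80
      targetPort: 8080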
Finally, let's check the services:
kubectl get svc -n pacman
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongo LoadBalancer 172.26.223.182 192.168.0.215 27017:30967/TCP 16s
pacman LoadBalancer 172.26.164.251 192.168.0.216 80:30552/TCP 14s
In my example, Pac-Man received the external IP 192.168.0.216. This IP address may vary in your environment. Take the IP address from your output, open the web browser on your jumphost and try to access Pac-Man.
Have some fun and set a high score; we will need it in a later lab.
cd /home/user/demostuff/04snapshot
CSI Snapshots have been promoted GA with Kubernetes 1.20.
While snapshots can be used for many use cases, we will explore 2 different ones, which share the same initial process:
- Restore the snapshot in the current application
- Create a new POD which uses a PVC created from the snapshot (cloning)
There is also a chapter that will show you the impact of deletion between PVC, Snapshots & Clones (spoiler alert: no impact).
We recommend checking that the CSI Snapshot feature is actually enabled on this platform.
This link is a good read if you want to know more details about installing the CSI Snapshotter. It is the responsibility of the Kubernetes distribution to provide the snapshot CRDs and controller. Unfortunately, some distributions do not include them, so verify (and deploy them yourself if needed).
In our lab the CRD & Snapshot-Controller to enable this feature have already been installed. Let's see what we find:
kubectl get crd | grep volumesnapshot
will show us the CRDs:
volumesnapshotclasses.snapshot.storage.k8s.io 2020-08-29T21:08:34Z
volumesnapshotcontents.snapshot.storage.k8s.io 2020-08-29T21:08:55Z
volumesnapshots.snapshot.storage.k8s.io 2020-08-29T21:09:13Z
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | grep snapshot-controller
will show us that the snapshot controller is running in our cluster:
k8s.gcr.io/sig-storage/snapshot-controller:v4.2.0,
k8s.gcr.io/sig-storage/snapshot-controller:v4.2.0,
Aside from the 3 CRDs & the Controller StatefulSet, the following objects have also been created during the installation of the CSI Snapshot feature:
- serviceaccount/snapshot-controller
- clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner
- clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role
- role.rbac.authorization.k8s.io/snapshot-controller-leaderelection
- rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection
Finally, you need a VolumeSnapshotClass object that connects the snapshot capability with the Trident CSI driver. In this lab there is already one:
kubectl get volumesnapshotclass
Note that the deletionPolicy parameter could also be set to Retain.
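For reference, a VolumeSnapshotClass for Trident typically looks like the sketch below (the object name is an assumption; csi.trident.netapp.io is the Trident CSI driver name):

# sketch - the class name is an assumption
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: csi.trident.netapp.io   # Trident CSI driver
deletionPolicy: Delete          # could also be Retain, as noted above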
The volume snapshot feature is now ready to be tested.
The following will walk you through the management of snapshots with a simple lightweight BusyBox container.
We've prepared all the necessary files for you to save a little time. Please prepare the environment with the following commands:
kubectl create namespace busybox
kubectl apply -n busybox -f busybox.yaml
kubectl get -n busybox all,pvc
The last command will show you our example environment. There should be one running pod and a PVC with 10Gi.
Before we create a snapshot, let's write some data into our volume.
kubectl exec -n busybox $(kubectl get pod -n busybox -o name) -- sh -c 'echo "There is persistent data in k8s" > /data/test.txt'
This creates the file test.txt and writes "There is persistent data in k8s" into it. You can verify the file contents:
kubectl exec -n busybox $(kubectl get pod -n busybox -o name) -- more /data/test.txt
Creating a snapshot of this volume is very simple:
kubectl apply -n busybox -f pvc-snapshot.yaml
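A VolumeSnapshot manifest of this kind is short. The sketch below assumes the snapshot name mydata-snapshot (it is referenced again later by pvc_from_snap.yaml); the class name and the name of the busybox PVC are assumptions:

# sketch - volumeSnapshotClassName and the source PVC name are assumptions
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mydata-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: mydata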
After it is created you can observe its details:
kubectl get volumesnapshot -n busybox
Your snapshot has been created!
To experiment with the snapshot, let's delete our test file...
kubectl exec -n busybox $(kubectl get pod -n busybox -o name) -- rm -f /data/test.txt
If you want to verify that the data is really gone, feel free to run the command from above that showed you the contents of the file:
kubectl exec -n busybox $(kubectl get pod -n busybox -o name) -- more /data/test.txt
One of the useful things K8s provides for snapshots is the ability to create a clone from them. If you take a look at the PVC manifest (pvc_from_snap.yaml), you will notice the reference to the snapshot:
dataSource:
  name: mydata-snapshot
  kind: VolumeSnapshot
  apiGroup: snapshot.storage.k8s.io
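Embedded in a full PVC manifest, the clone request could look roughly like this (access mode, size and StorageClass are assumptions; the dataSource block is the part quoted above, and mydata-from-snap is the claim name we patch into the deployment later):

# sketch - accessModes, size and storageClassName are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydata-from-snap
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: storage-class-nas
  dataSource:
    name: mydata-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io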
Let's see how that turns out:
kubectl apply -n busybox -f pvc_from_snap.yaml
This will create a new PVC which could be used instantly in an application. You can see it if you take a look at the PVCs in your namespace:
kubectl get pvc -n busybox
Recover the data of your application
When it comes to data recovery, there are many ways to do so. If you want to recover only a single file, you can temporarily attach a PVC clone based on the snapshot to your pod and copy individual files back. Some storage systems also provide convenient access to snapshots by presenting them as part of the filesystem (feel free to exec into the pod and look for the .snapshot folders on your PVC). However, if you want to recover everything, you can just update your application manifest to point to the clone, which is what we are going to try now:
kubectl patch -n busybox deploy busybox -p '{"spec":{"template":{"spec":{"volumes":[{"name":"volume","persistentVolumeClaim":{"claimName":"mydata-from-snap"}}]}}}}'
That will trigger the creation of a new POD with the updated configuration.
Now, if you look at the files this POD has access to (the PVC), you will see that the lost data (file: test.txt) is back!
kubectl exec -n busybox $(kubectl get pod -n busybox -o name) -- ls -l /data/
Or even better, let's have a look at the contents:
kubectl exec -n busybox $(kubectl get pod -n busybox -o name) -- more /data/test.txt
Tadaaa, you have restored your data!
Keep in mind that some applications may need some extra care once the data is restored (databases for instance). In a production setup you'll likely need a more full-blown backup/restore solution.
Now, a little clean up at the end:
kubectl delete ns busybox
#05 resizing
cd /home/user/demostuff/05resizing
Sometimes you need more space than you initially thought. Sure, you could create a new volume, copy the data over and work with the new, bigger PVC, but it is way easier to just expand the existing one.
First let's check the StorageClasses
kubectl get sc
Look at the column ALLOWVOLUMEEXPANSION. As we specified earlier, all StorageClasses are set to true, which means PVCs created with these StorageClasses can be expanded.
NFS Resizing was introduced in K8S 1.11, while iSCSI resizing was introduced in K8S 1.16 (CSI)
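If you are curious what makes a StorageClass expandable, it is the allowVolumeExpansion flag in the StorageClass object itself. A sketch of what such a class could look like (the name and backend parameters are assumptions; csi.trident.netapp.io is the Trident provisioner):

# sketch - name and parameters are assumptions
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-nas
provisioner: csi.trident.netapp.io
allowVolumeExpansion: true        # this is what the ALLOWVOLUMEEXPANSION column reflects
parameters:
  backendType: ontap-nas          # assumed Trident backend type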
Now let's create a PVC and a busybox POD using this PVC, in their own namespace called "resize".
kubectl create namespace resize
kubectl apply -n resize -f pvc.yaml
kubectl apply -n resize -f pod-busybox-nas.yaml
Wait until the pod is in a running state - you can check this with the following command:
kubectl get pod -n resize
Finally, you should be able to see that the 5Gi volume is indeed mounted into the POD:
kubectl -n resize exec busyboxfile -- df -h /data
Resizing a PVC can be done in different ways. We will edit the original YAML file of the PVC and apply it again.
Look for the storage parameter in the spec part of the definition and change the value (in this example, we will use 15Gi).
The following command will open the PVC definition:
vi pvc.yaml
change the size to 15Gi like in this example:
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
  storageClassName: storage-class-nas
  volumeMode: Filesystem
In vi, press "i" to insert text, press "ESC" to leave insert mode, then type :wq! to save and exit.
After this, just apply the pvc.yaml file again:
kubectl apply -n resize -f pvc.yaml
Everything happens dynamically without any interruption. The results can be observed with the following commands:
kubectl -n resize get pvc
kubectl -n resize exec busyboxfile -- df -h /data
This could also have been achieved by using the kubectl patch command. Try the following:
kubectl patch -n resize pvc pvc-to-resize-file -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
So increasing is easy, but what about decreasing? Try to set your volume to a smaller size, using the edit or patch mechanism from above.
Solution:
kubectl patch -n resize pvc pvc-to-resize-file -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
Even if it were technically possible to decrease the size of an NFS volume, K8s just doesn't allow it. So keep in mind: bigger ever, smaller never.
If you want to, clean up a little bit:
kubectl delete namespace resize