How to Configure NFS Repository for Kasten K10

One of the options that many companies use to host their backups is NFS. In this guide, we will review how to configure an NFS location profile for Kasten K10, following the good practices indicated in the Kasten documentation.

Documentation

First of all, we should always review the documentation of the technologies we will use to achieve our objective. In this case we will use Kasten K10 and the NFS Subdir External Provisioner; the official information can be found in the Kasten K10 documentation (https://docs.kasten.io/) and in the NFS Subdir External Provisioner repository (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner).

Requirements

To use NFS as a location profile in Kasten K10, as the documentation indicates, we will need to comply with the following:

  • An NFS service accessible from all nodes where K10 is installed
  • A folder shared via NFS that can be mounted on all nodes where K10 is installed
  • A PV defining the NFS shared folder
  • A PVC for K10 with its respective storageClassName

Once we comply with these requirements, we will have our NFS profile correctly configured to host our backups. Note that in this guide the PV will not be created by hand: the NFS Subdir External Provisioner creates it dynamically, so we only need to create the PVC ourselves.

NFS Folder Configuration

As with any NFS server, we need to create a folder (or use an existing one) to host the backups, always with its respective access configuration, either by authentication or by allowing access per host; on my QNAP, for example, this is configured in the shared folder's NFS access permissions.
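On a generic Linux NFS server, the equivalent is a one-line export. This is only a sketch under assumptions: the export path (/srv/nfs/k10) and the allowed subnet (192.168.1.0/24) are placeholders to adapt to your environment.

# /etc/exports: export the backup folder to the cluster subnet, read-write
/srv/nfs/k10    192.168.1.0/24(rw,sync,no_subtree_check)

After editing /etc/exports, apply the change with:

sudo exportfs -ra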

After having this configured, we will move on to the installation and configuration of the NFS Subdir External Provisioner.

Installation NFS Subdir External Provisioner

Again, following the provisioner's documentation, the first thing we must do is add the Helm repository by executing the following command:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

Then we install the chart with Helm, passing the connection information of the NFS server to use:
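The install command, as documented in the provisioner's README, takes the NFS server address and the exported path as values; replace the placeholders (x.x.x.x and /exported/path) with your own:

helm repo update
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path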

We validate the installation with:

kubectl get pods

Storage Class Configuration

To meet the requirements of Kasten K10, we must have a StorageClass. In fact, when installing and configuring with Helm, the StorageClass (named nfs-client by default) is created automatically:

kubectl get sc

Creating Persistent Disk using NFS

Now that we have everything in place, we must test the creation of a PVC using our new StorageClass. For this we will apply the following manifest (modify the size and name if necessary):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prueba-disco-nfs
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
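Save the manifest to a file and apply it; the file name here is just an example:

kubectl apply -f pvc-prueba-disco-nfs.yaml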

We validate the configuration in Kubernetes and also in our NFS shared folder:
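A quick way to check both sides: the claim should appear as Bound in Kubernetes, and the provisioner should have created a subdirectory on the share (by default named ${namespace}-${pvcName}-${pvName}). The mount point below is a placeholder for wherever you mount the export:

kubectl get pvc prueba-disco-nfs
ls /mnt/nfs-backups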

We will now delete this test disk to prepare the disk needed by Kasten K10:

kubectl delete pvc prueba-disco-nfs

Now we will create the disk Kasten K10 needs, in its own namespace, with the following file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: repo-nfs-respaldos
  namespace: kasten-io
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
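As before, save and apply the manifest (the file name is just an example):

kubectl apply -f pvc-repo-nfs-respaldos.yaml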

If we list the PVCs without specifying a namespace (that is, in the default namespace), we will see that none exist:

kubectl get pvc

Now, if we list the PVCs in the kasten-io namespace, we will see our new disk:

kubectl get pvc -n kasten-io

NFS Profile Configuration in Kasten K10

Now we will enter the Kasten K10 console on the cluster where we configured our NFS and create a new location profile of type NFS, entering the name of the PVC (repo-nfs-respaldos):

We validate the configuration:

We will now test a backup to this new NFS profile.

Test Run Backup to NFS

For this, we just need to create a backup policy that uses our new profile as the export location:
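The policy here is created through the dashboard, but as an equivalent sketch, K10 policies can also be declared as custom resources. In the example below, the policy name (pacman-backup), the profile name (nfs-profile), and the application namespace (pacman) are assumptions to adapt:

apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: pacman-backup
  namespace: kasten-io
spec:
  comment: Daily backup of pacman, exported to the NFS location profile
  frequency: '@daily'
  retention:
    daily: 7
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: '@daily'
        profile:
          name: nfs-profile
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: pacman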

Then we run it and wait for it to complete:

We validate the backup in our NFS:

And finally, a recovery test in another namespace, in this case nfspacman:

In the Kasten K10 dashboard we will see:

And in Kubernetes:
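A minimal check from the command line, assuming the application was restored into the nfspacman namespace as above:

kubectl get pods -n nfspacman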

And finally, the pacman application working as expected:
