So, you have successfully set up your home K3s/Kubernetes cluster, and it is time to deploy some useful applications on it. How about a file server for your home network? Chances are you already have a disk full of files that you want to make available through your K3s cluster.
In this scenario we will take a USB disk with its contents and attach it to one of the worker nodes (e.g. the node 'raspi3'). We will label this node (e.g. with 'hdd2') so the deployment can be configured to run only on the node with the external disk. Plug in your disk and mount it. First create a mount point on the Raspberry Pi running that node:

```shell
sudo mkdir /media/hdd2
```

Find the disk's UUID with:

```shell
sudo blkid
```

Then add an entry for the partition to this Raspi's /etc/fstab file:

```
UUID=<partition uuid> /media/hdd2 ext4 defaults,noatime,nodiratime 0 2
```
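Put together, the sequence might look like this sketch (the UUID below is a stand-in; use the value blkid reports for your partition):

```shell
# Stand-in UUID -- replace with the value reported by `sudo blkid`.
UUID="0aa7e314-1d92-4a4b-9371-3a4c96fa35f2"
FSTAB_LINE="UUID=${UUID} /media/hdd2 ext4 defaults,noatime,nodiratime 0 2"
echo "${FSTAB_LINE}"
# Append it to /etc/fstab (needs root) and mount everything listed there:
# echo "${FSTAB_LINE}" | sudo tee -a /etc/fstab
# sudo mount -a && df -h /media/hdd2
```

Using `sudo tee -a` (rather than `>>`) makes the append work even though the redirection would otherwise run without root privileges.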
Next, label the Kubernetes node:

```shell
kubectl label nodes raspi3 disk=hdd2
```
We need to define a username and password that clients will use when connecting to the samba/file server. Let's create a kustomization.yaml for the samba server and define a secretGenerator in it. For better security we want to keep the credentials separate from the rest of the configuration. First create two files called username and password, containing the username and the password, respectively. Replace yourusername and yourpassword with the credentials you want to use.
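One way to create the two files from the shell (yourusername and yourpassword are placeholders; `printf '%s'` avoids a trailing newline, which would otherwise end up inside the generated secret's values):

```shell
# Create the credential files without trailing newlines;
# echo would add one, and it would become part of the Secret.
printf '%s' 'yourusername' > username
printf '%s' 'yourpassword' > password
```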
Now, create the kustomization.yaml. First we define the secretGenerator, and then we list the files deployment.yaml and service.yaml (which we will create in a moment) as resources:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: smbcredentials
    files:
      - username
      - password
resources:
  - deployment.yaml
  - service.yaml
```
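A side note: by default kustomize appends a content-based hash to the generated secret's name (and rewrites references to it within the kustomization). If you ever need a stable name, e.g. to reference the secret from manifests maintained outside this kustomization, you can disable the suffix:

```yaml
generatorOptions:
  disableNameSuffixHash: true
```

For this tutorial the default behavior is fine, since the deployment referencing the secret is part of the same kustomization.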
The secret generator takes the username from the username file and the password from the password file. This comes in handy when we want to store our configuration in GitHub and share it with others: we can share kustomization.yaml, deployment.yaml and service.yaml with the world without exposing our credentials (just make sure the username and password files themselves are not committed, e.g. by adding them to .gitignore).
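To preview what the generator produces without touching the cluster, you can run `kubectl kustomize ./`. The generated Secret will look roughly like this (the hash suffix in the name and the exact values are illustrative; the data keys are the file names, base64-encoded values):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smbcredentials-7g2f9c8b5k   # hash suffix varies with the file contents
type: Opaque
data:
  username: eW91cnVzZXJuYW1l        # base64 of "yourusername"
  password: eW91cnBhc3N3b3Jk        # base64 of "yourpassword"
```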
Next, create the deployment.yaml:

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      volumes:
        - name: smb-volume
          hostPath:
            # directory location on host
            path: /media/hdd2
            type: Directory
      containers:
        - name: smb-server
          image: dperson/samba
          env:
            - name: PERMISSIONS
              value: ""
            - name: USER
              valueFrom:
                secretKeyRef:
                  name: smbcredentials
                  key: username
            - name: SHARE
              value: "share;/smbshare/;yes;no;no;all;none"
          volumeMounts:
            - mountPath: /smbshare
              name: smb-volume
          ports:
            - containerPort: 445
      nodeSelector:
        disk: hdd2
```

Note that the secretKeyRef key must be username, matching the file name the secretGenerator used.
With nodeSelector we make sure this deployment is only rolled out on the worker node with the label "disk=hdd2" (the one with the USB hard disk attached). And with the hostPath volume we access the local folder "/media/hdd2" where we mounted the disk.
Finally, create the service.yaml that we use to expose port 445 of the samba server via a load balancer:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: smb-server
spec:
  type: LoadBalancer
  ports:
    - port: 445
      protocol: TCP
      name: smb
  selector:
    app: smb-server
```
To deploy everything, cd into the directory with the yaml and credential files (kustomization.yaml, deployment.yaml, service.yaml, username, and password) and run:

```shell
kubectl apply -k ./
```
Once the deployment has been rolled out successfully, you should be able to connect to your smb/samba server, e.g. at smb://<node ip>/share from a file manager or with smbclient. Use any node's external IP; the load balancer will forward the request to the node running the pod.