In this article you will learn how to set up and run the Longhorn storage solution on your Xelon Kubernetes cluster.
Intro
Longhorn, developed by Rancher/SUSE, is a straightforward distributed storage system designed specifically for Kubernetes. It simplifies storage management within Kubernetes clusters, significantly easing the operational workload for users. Find out more here: Longhorn
This service is currently in early-access mode. Please contact us to get access.
Prerequisites
Cluster Resources
To optimize your Xelon Kubernetes cluster for Longhorn, we recommend the following adjustments:
- Enable the "Storage Pool" option: In Xelon HQ, activate the "Storage Pool" option to attach an additional disk to each node. Longhorn will use this disk to store data separately from the main boot disk. The Talos Linux boot partition is immutable and is overwritten during any base system upgrade unless the --preserve flag is used; additional disks, however, are not wiped, which makes a secondary disk the safer place for storage.
- Ensure Adequate Storage Pool Size: Your storage pool should consist of at least three nodes, as the default replication policy in Longhorn is set to three replicas. We do not recommend reducing the replication factor below this level.
- Allocate Sufficient Resources: While Longhorn is lightweight, it still requires adequate resources. We recommend a minimum of 4 vCPUs and 8 GB of RAM for your storage nodes (a quick capacity check is shown below).
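A quick way to sanity-check your nodes against these recommendations is to print their capacity with kubectl custom columns (a sketch; the column expressions are standard node status fields):

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory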
Mounting the additional disk
To enable Longhorn to use the additional storage disk on your nodes, you need to mount the disk into the kubelet manually; this is not done by default. Follow these steps to mount the disk:
- Create a patch file:
  Create a patch file named longhorn.patch.yaml with the following content:

  machine:
    kubelet:
      extraMounts:
        - destination: /var/mnt/sdb
          type: bind
          source: /var/mnt/sdb
          options:
            - bind
            - rshared
            - rw

  This configuration bind-mounts the additional storage disk at the specified directory (/var/mnt/sdb).
- Apply the patch to all storage nodes:
  Use the talosctl command to apply this patch to all your storage nodes. Replace the IP addresses with those of your storage nodes (a sanity check for the mount follows these steps):

  talosctl patch machineconfig --patch-file longhorn.patch.yaml -n 100.110.255.21,100.110.255.22,100.110.255.23
- Locate the IPs of the storage nodes:

  kubectl get nodes -o wide

  This will display detailed information about your nodes, including the IP addresses to use in the talosctl command.
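After the patch is applied, the kubelet is restarted with the new mount configuration. As a sanity check that the bind mount is active, you can inspect a node's mount table (shown here with one of the example IPs from above):

talosctl mounts -n 100.110.255.21 | grep /var/mnt/sdb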
System extensions
Longhorn, like many distributed storage systems, relies on host tools such as iSCSI and other Linux utilities (e.g. fstrim). This assumption about the host operating system does not hold for container-optimized OSes like Talos Linux, which ship only a minimal set of binaries.
To overcome this challenge, Talos Linux uses system extensions (also referred to as layers). These extensions add the required functionality to the host operating system, allowing Talos Linux to support the tools needed by Longhorn and similar storage systems.
Installing system extensions
To install the system extensions on a node, use the talosctl upgrade command with the required image and schematic ID. Here's how you can do it:
- Obtain the Schematic ID:
  - Visit the Talos Image Factory: Image Factory
  - Configure the following settings:
    - Hardware Type: Cloud Server
    - Talos Linux Version: for example v1.7.5 (you can verify your version in the HQ Cluster Dashboard or by running kubectl get nodes -o wide)
    - Cloud Vendor: VMware
    - System Extensions:
      - siderolabs/iscsi-tools
      - siderolabs/util-linux-tools
  - Skip modifying the kernel arguments and proceed to the next step.
  - You will be provided with a schematic ID, for example: 613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245. (An API-based alternative for generating the schematic ID is sketched after these steps.)
- Install the Schematic ID on the Node:
  Run the following command to upgrade your node with the system extensions:

  talosctl upgrade --image factory.talos.dev/installer/<schematic ID>:<TalosOS Version> -m powercycle -n <NodeIP>
This process will apply the necessary system extensions to your Talos Linux nodes, enabling them to support the tools required by Longhorn and similar distributed storage systems.
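If you prefer to script the schematic step instead of using the web UI, the Image Factory also exposes an HTTP API for registering schematics. The sketch below assumes the same two extensions selected above; the response is a JSON document containing the schematic ID:

# Describe the desired schematic (same extensions as chosen in the UI)
cat > schematic.yaml <<'EOF'
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/iscsi-tools
      - siderolabs/util-linux-tools
EOF

# Register the schematic with the public Image Factory and print its ID
curl -X POST --data-binary @schematic.yaml https://factory.talos.dev/schematics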
Important considerations before running the upgrade command:
- Node Reboot: Running the talosctl upgrade command will cause the node to reboot. Ensure that your workloads are distributed across multiple nodes to avoid service interruptions.
- Boot Partition Overwrite: The upgrade process will overwrite the boot partition with a new version of Talos Linux. To retain the existing node storage, particularly on control plane nodes or other nodes where preserving storage is critical, you must include the --preserve flag in the command.
Ensure that the new image is installed on all nodes that either host or attach to Longhorn storage volumes. This is essential for maintaining compatibility and functionality across your storage nodes.
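After an upgraded node is back online, you can confirm that the extensions were installed. One way, using one of the example node IPs from earlier:

talosctl get extensions -n 100.110.255.21

The output should include entries for the iSCSI and util-linux tools.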
Namespace
We recommend installing Longhorn in a dedicated namespace. Additionally, due to pod security requirements, you need to set a label on the namespace to allow the execution of containers with privileged access. You can accomplish this with the following commands:
kubectl create namespace longhorn-system
kubectl label namespace longhorn-system pod-security.kubernetes.io/enforce=privileged
These commands create the longhorn-system namespace and label it to permit the execution of privileged containers, which is necessary for Longhorn to function properly.
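You can verify that the namespace carries the required label with:

kubectl get namespace longhorn-system --show-labels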
Nodes
For more control over where Longhorn storage engines are set up, you can configure Longhorn to only use nodes with a specific label. During installation, set the option to limit Longhorn to nodes labeled with "node.longhorn.io/create-default-disk=true".
You can apply this label to a node with the following command:
kubectl label node cluster01-w-1-1 node.longhorn.io/create-default-disk=true
Replace cluster01-w-1-1 with the name of the node you wish to label. This ensures that Longhorn will only configure storage engines on nodes that have this specific label.
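To double-check which nodes Longhorn will consider, list the nodes carrying the label:

kubectl get nodes -l node.longhorn.io/create-default-disk=true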
Longhorn Installation
To install Longhorn using the official Helm chart with custom values, follow these steps:
- Create a Custom Values File:
  Save the following configuration in a file named custom-helm.values:

  defaultSettings:
    createDefaultDiskLabeledNodes: true
    defaultDataPath: /var/mnt/sdb/
    storageReservedPercentageForDefaultDisk: 0

  - createDefaultDiskLabeledNodes: Ensures that storage engines are only scheduled on nodes with the label node.longhorn.io/create-default-disk=true.
  - defaultDataPath: Sets the default directory for Longhorn data on the node's disk.
  - storageReservedPercentageForDefaultDisk: Sets the percentage of disk storage reserved for the host operating system. Since a separate disk is used, this is set to 0 so the entire disk is available to Longhorn.
Install or Upgrade Longhorn Using Helm:
Run the following Helm command to install or upgrade Longhorn with the custom values:
helm upgrade --values=doc-helm.values --install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --version 1.7.0
This command will:
-
--values=custom-helm.values
: Apply your custom configuration. -
--install
: Install Longhorn if it is not already installed. -
--upgrade
: Upgrade Longhorn if it is already installed. -
--namespace longhorn-system
: Deploy Longhorn in thelonghorn-system
namespace. -
--create-namespace
: Create the namespace if it does not already exist. -
--version 1.7.0
: Use the specified version of Longhorn.
-
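Note that the command above assumes the longhorn chart repository is already configured in your Helm client. If it is not, add the official Longhorn repository first:

helm repo add longhorn https://charts.longhorn.io
helm repo update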
This approach ensures that Longhorn is configured according to your requirements and deployed effectively in your Kubernetes cluster.
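Before moving on to the dashboard, you can confirm that the deployment has settled by waiting for all Longhorn pods to become Ready (the timeout below is an arbitrary choice; adjust as needed):

kubectl -n longhorn-system wait --for=condition=Ready pods --all --timeout=300s
kubectl -n longhorn-system get pods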
Verify Installation
To verify your Longhorn installation, you can check the Longhorn dashboard. Here's how to access it from your local machine:
- Forward the Service Port:
  Use kubectl port-forward to forward the Longhorn front-end service to a local port:

  kubectl port-forward svc/longhorn-frontend 12001:80 -n longhorn-system

  This command forwards port 80 of the longhorn-frontend service to port 12001 on your local machine.
- Access the Dashboard:
  Open your web browser and navigate to http://localhost:12001.
You should see the Longhorn dashboard, where you can verify the installation and manage your Longhorn storage system.
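As a final end-to-end check, you can create a small test volume against the longhorn StorageClass that the Helm chart creates by default and confirm that it binds. This is a sketch; the claim name is a hypothetical choice for this test only:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc   # hypothetical name, used only for this check
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc longhorn-test-pvc

The claim should reach the Bound status (if your StorageClass uses WaitForFirstConsumer binding, it will remain Pending until a pod mounts it). Clean up with kubectl delete pvc longhorn-test-pvc when you are done.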