OpenEBS Add-On

The OpenEBS add-on includes two options for provisioning volumes for PVCs: LocalPV and cStor.

Either provisioner may be selected as the default provisioner for the cluster by naming its StorageClass default. In this example, the LocalPV provisioner would be used for any PVC that does not explicitly specify a storageClassName.
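
To confirm which StorageClass is the cluster default, list the StorageClasses; kubectl appends "(default)" to the default class, which is the one carrying the storageclass.kubernetes.io/is-default-class annotation.

    # The cluster's default StorageClass is flagged "(default)" in this listing
    kubectl get storageclass
    # Equivalently, the default class carries the annotation:
    #   storageclass.kubernetes.io/is-default-class: "true"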

Advanced Install Options

    spec:
      openebs:
        version: latest
        namespace: "openebs"
        isLocalPVEnabled: true
        localPVStorageClassName: default
        isCstorEnabled: true
        cstorStorageClassName: cstor

Flag Usage
version The version of OpenEBS to be installed.
namespace The namespace OpenEBS is installed to.
isLocalPVEnabled Turn on LocalPV storage provisioning. LocalPV uses /var/openebs/local for storage and does not replicate data between nodes.
localPVStorageClassName The StorageClass name for the LocalPV provisioner. (Name it "default" to make it the cluster's default provisioner.)
isCstorEnabled Turn on cStor storage provisioning. cStor relies on block devices and supports replicating data between nodes.
cstorStorageClassName The StorageClass name for the cStor provisioner. (Name it "default" to make it the cluster's default provisioner.)


The LocalPV provisioner uses the host filesystem directory /var/openebs/local for storage. PersistentVolumes provisioned with LocalPV cannot be relocated to a new node if a pod is rescheduled, and their data is not replicated across nodes to protect against data loss. The LocalPV provisioner is suitable as the default provisioner for single-node clusters.
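
As a minimal sketch, a PVC that omits storageClassName binds to whichever StorageClass is the cluster default (the LocalPV class named default in the example above); the PVC name and size below are placeholders.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-local-pvc   # placeholder name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      # storageClassName is omitted, so the cluster's default StorageClass is used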


The cStor provisioner relies on block devices for storage. The OpenEBS Node Device Manager runs as a DaemonSet and automatically incorporates available block devices into a storage pool named cstor-disk. The first available block device on each node in the cluster is automatically added to this pool.
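
To see what has been discovered, the DaemonSet and the block devices it tracks can be listed in the openebs namespace (resource names vary by OpenEBS version):

    # The Node Device Manager runs as a DaemonSet in the openebs namespace
    kubectl -n openebs get daemonsets
    # Block devices discovered on each node
    kubectl -n openebs get blockdevices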


cStor is no longer supported for OpenEBS add-on versions 2.12.9+.

Adding Disks

After joining more nodes with disks to your cluster, you can re-run the kURL installer to reconfigure the cstor-disk storagepoolclaim, the storageclass, and the replica count of any existing volumes.
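
As a sketch, re-running the installer is the same command used for the initial install; the URL below is a placeholder for whatever kURL installer spec your cluster was created from.

    # Placeholder installer URL; use the same kURL URL (or airgap bundle) as the original install
    curl -sSL https://kurl.sh/latest | sudo bash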

Storage Class

The kURL installer will create a StorageClass that initially configures cStor to provision volumes with a single replica. After adding more nodes with disks to the cluster, re-running the kURL installer will increase the replica count up to a maximum of three. The kURL installer will also check for any PVCs that were created with a lower ReplicaCount and will add replicas to bring those volumes up to the new ReplicaCount.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cstor
      annotations:
        openebs.io/cas-type: cstor
        cas.openebs.io/config: |
          - name: StoragePoolClaim
            value: "cstor-disk"
          - name: ReplicaCount
            value: "1"
    provisioner: openebs.io/provisioner-iscsi


Use kubectl get disks to list all the disks detected by the Node Device Manager. The Node Device Manager will ignore any disks that are already mounted or that match loop, /dev/fd0, /dev/sr0, /dev/ram, or /dev/dm-.

It is critical to ensure that disks are attached with a serial number and that this serial number is unique across all disks in the cluster. On GCP, for example, this can be accomplished with the --device-name flag.

gcloud compute instances attach-disk my-instance --disk=my-disk-1 --device-name=my-instance-my-disk-1

Use the lsblk command with the SERIAL output column to verify on the host that a disk has a unique serial number:

lsblk -o NAME,SERIAL

For each disk in kubectl get disks there should be a corresponding blockdevice resource in kubectl -n openebs get blockdevices. (It is possible to manually configure multiple blockdevice resources for a partitioned disk but that is not supported by the kURL installer.)

For each blockdevice that is actually being used by cStor for storage there will be a resource listed under kubectl -n openebs get blockdeviceclaims. The kURL add-on uses an automatic striped storage pool, which can make use of no more than one blockdevice per node in the cluster. Attaching a second disk to a node, for example, would trigger creation of disk and blockdevice resources, but not a blockdeviceclaim.
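
Taken together, the commands above can be used to trace a disk from detection to use by the pool:

    # Disks detected by the Node Device Manager
    kubectl get disks
    # One blockdevice per detected disk
    kubectl -n openebs get blockdevices
    # Only blockdevices actually claimed by the cStor pool appear here
    kubectl -n openebs get blockdeviceclaims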

The kURL installer will create a storagepoolclaim resource named cstor-disk. For the initial install, kURL will use this spec for the storagepoolclaim:

    apiVersion: openebs.io/v1alpha1
    kind: StoragePoolClaim
    metadata:
      name: cstor-disk
    spec:
      type: disk
      maxPools: 1
      minPools: 1
      blockDevices:
        blockDeviceList: null
      poolSpec:
        cacheFile: ""
        overProvisioning: false
        poolType: striped

The blockDeviceList: null setting indicates to OpenEBS that this is an automatic pool. Blockdevices will automatically be claimed for the pool up to the value of maxPools. If no blockdevices are available, the kURL installer will prompt for one to be attached and wait. After joining more nodes with disks to the cluster, re-running the kURL installer will increase the maxPools level.
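
To verify the pool configuration, for example that maxPools was increased after re-running the installer, inspect the storagepoolclaim (a cluster-scoped resource):

    # Show the cstor-disk spec, including the current maxPools value
    kubectl get storagepoolclaim cstor-disk -o yaml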

Each PVC provisioned by cStor will have a corresponding cstorvolume resource in the openebs namespace.

kubectl -n openebs get cstorvolumes

The cstorvolume name will be identical to the PersistentVolume name created for the PVC once bound.
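
For example, to map a bound PVC to its cstorvolume (the PVC name and namespace below are placeholders):

    # Find the PersistentVolume bound to the PVC
    kubectl -n my-namespace get pvc example-cstor-pvc -o jsonpath='{.spec.volumeName}'
    # The cstorvolume has the same name as that PersistentVolume
    kubectl -n openebs get cstorvolumes <pv-name>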

For each cstorvolume there will be 1 to 3 cstorvolumereplicas in the openebs namespace.

kubectl -n openebs get cstorvolumereplicas

The number of replicas should match the ReplicaCount configured in the StorageClass, which kURL increases as more nodes with disks are added to the cluster.
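
To inspect the replicas backing a single volume, one approach (assuming the typical OpenEBS convention that cstorvolumereplica names begin with the PersistentVolume name) is to filter the listing by the PV name; <pv-name> is a placeholder:

    # List replicas for one volume; replica names typically start with the PV name
    kubectl -n openebs get cstorvolumereplicas | grep <pv-name>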