OpenEBS Releases

Each entry below lists the release version, its notes (release notes, blog, upgrade steps), and highlights.

1.1.0

Latest Release
(Recommended)
Release Notes
Upgrade Steps
- Support for an alpha version of the CSI driver, with limited functionality for provisioning and de-provisioning cStor volumes.
- Support for upgrading OpenEBS storage pools and volumes through a Kubernetes Job. You no longer have to download scripts to upgrade from 1.0 to 1.1, as was required in earlier releases.
- Enhanced Prometheus metrics exported by Jiva to identify whether an iSCSI Initiator is connected to the Jiva target.
- Enhanced NDM operator capabilities for handling installation and upgrade of the NDM CRDs. Earlier, this process was handled by maya-apiserver.
- Enhanced velero-plugin to take backups based on the openebs.io/cas-type: cstor annotation and skip backups for unsupported volumes (or storage providers).
- Enhanced velero-plugin to allow users to specify a backupPathPrefix for storing volume snapshots in a custom location. This allows configuration and volume snapshot data to be saved/backed up under the same location.
- Added an ENV flag that can be used to disable default storage configuration creation. The default storage configuration can be modified after installation, but it will be overwritten by the OpenEBS API Server; the recommended approach is to create your own storage configuration, using the defaults as examples/guidance (a sketch of such an override follows this list).
- Fixes an issue where rebuilding a cStor volume replica failed if the cStor volume capacity was changed after the initial provisioning of the cStor volume.
- Fixes an issue with cStor snapshots taken during the transition of a replica's rebuild status.
- Fixes an issue where the application file system could break due to the deletion of Jiva auto-generated snapshots.
- Fixes an issue where the NDM pod was getting restarted while probing for details from devices that have write cache support.
- Fixes an issue in NDM where the SeaChest probe held open file descriptors to LVM devices, preventing the LVM devices from being detached from the node.
- Fixes a bug where backups failed when the OpenEBS operator was installed through Helm: velero-plugin checked the maya-apiserver name, which differs in Helm-based installations. velero-plugin now checks the label of the maya-apiserver service instead.
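
To illustrate the ENV-flag note above, here is a minimal sketch of one way such an override could be rendered, assuming the flag is exposed as an environment variable named OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG on the maya-apiserver Deployment (both names are assumptions; verify them against the operator YAML shipped with 1.1.0):

```python
# Minimal sketch: render a strategic-merge patch that disables default storage
# configuration creation in maya-apiserver. The env var name and the container
# name are assumptions taken from the stock operator YAML; confirm both against
# the manifest shipped with your OpenEBS release.
import json

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "maya-apiserver",  # assumed container name
                    "env": [{
                        "name": "OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG",  # assumed flag name
                        "value": "false",
                    }],
                }],
            }
        }
    }
}

# Print the patch; it can then be applied with something like:
#   kubectl -n openebs patch deployment maya-apiserver -p "$(python3 render_patch.py)"
print(json.dumps(patch, indent=2))
```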

1.0.0

Release Notes
Release Blog
Upgrade Steps
- Introduced a cluster-level component called the NDM operator to manage access to block devices: selecting and binding a BlockDevice (BD) to a BlockDeviceClaim (BDC), and cleaning up data from a released BD.
- Support for using Block Devices for OpenEBS Local PV (see the sketch after this list).
- Enhanced cStor Data Engine to allow interoperability of cStor Replicas across different versions.
- Enhanced the cStor Data Engine containers to contain troubleshooting utilities.
- Enhanced the metrics exported by cStor Pools to include details of the provisioning errors.
- Fixes an issue where cStor replica snapshots created for the rebuild were not deleted, causing space reclamation to fail.
- Fixes an issue where cStor volume used space was reported as a much lower value than was actually used.
- Fixes an issue where Jiva replicas failed to register with its target if there was an error during initial registration.
- Fixes an issue where NDM would create a partitioned OS device as a block device.
- Fixes an issue where Jiva replica data was not cleaned up if the PVC and its namespace were deleted prior to scrub job completion.
- Fixes an issue where Velero Backup/Restore was not working with hostpath Local PVs.
- Upgraded the base ubuntu images for the containers to fix the security vulnerabilities reported in Ubuntu Xenial.
- The custom resource (Disk) used in earlier releases has been changed to Block Device.
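
As a companion to the Block Device support noted above, here is a minimal sketch of a StorageClass that requests a device-backed Local PV. The provisioner name (openebs.io/local) and the cas.openebs.io/config annotation keys are assumptions based on the Local PV examples of this release line; confirm them against the 1.0.0 documentation:

```python
# Minimal sketch: a StorageClass that asks OpenEBS Local PV to use a Block Device.
# The provisioner name and the cas.openebs.io/config annotation are assumptions;
# verify the exact keys against the 1.0.0 docs before use.
import yaml  # pip install pyyaml

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {
        "name": "local-device-example",  # illustrative name
        "annotations": {
            "openebs.io/cas-type": "local",
            "cas.openebs.io/config": '- name: StorageType\n  value: "device"\n',
        },
    },
    "provisioner": "openebs.io/local",
    # Bind only after the consuming pod is scheduled, so the Block Device is
    # selected on the node that will actually run the workload.
    "volumeBindingMode": "WaitForFirstConsumer",
    "reclaimPolicy": "Delete",
}

print(yaml.safe_dump(storage_class, sort_keys=False))
# Save the output, `kubectl apply -f` it, then reference the class from a PVC.
```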

0.9.0

Release Notes
- Support for Dynamic Provisioning of Local PV (a hostpath-based sketch follows this list).
- Enhanced the cStor Volumes to support Backup/Restore to S3 compatible storage using the incremental snapshots supported by cStor Volumes.
- Enhanced the cStor Volume Replica to support an anti-affinity feature that works across PVs.
- Enhanced the cStor Volume to support scheduling the cStor Volume Targets alongside the application pods that interact with the cStor Volume.
- Enhanced the Jiva Volume provisioning to provide an option called DeployInOpenEBSNamespace.
- Enhanced the cStor Volume Provisioning to be customized for varying workload or platform type during the volume provisioning.
- Enhanced the cStor Pools to export usage statistics as prometheus metrics.
- Enhanced the Jiva Volume replica rebuild process by eliminating the need to do a rebuild if the Replica already has all the required data to serve the IO.
- Enhanced the Jiva Volume replica provisioning to pin the Replicas to the nodes where they are initially scheduled, using Kubernetes nodeAffinity.
- Fixes an issue where NDM pods failed to start on nodes with selinux=on.
- Fixes an issue where cStor Volumes with a single replica were shown to be in a Degraded, rebuilding state.
- Fixes an issue where a user was able to delete a PVC even if clones had been created from it, resulting in data loss for the cloned volumes.
- Fixes an issue where cStor Volumes failed to provision if the /var/openebs/ directory was not editable by cStor pods, as in the case of SuSE platforms.
- Fixes an issue where the Jiva Volume Target could mark a replica as offline if the replica took longer than 30s to complete the sync/unmap IO.
- Fixes an issue with the Jiva volume space-reclaim thread, which errored out with an exception if the replica was disconnected from the target.
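
For the Dynamic Provisioning of Local PV item above, here is a minimal sketch of a hostpath-backed setup: a StorageClass plus a PVC that consumes it. The provisioner name, the StorageType/BasePath keys, and the names/paths used are assumptions for illustration; verify them against the 0.9.0 Local PV documentation:

```python
# Minimal sketch: dynamic provisioning of a hostpath-backed Local PV.
# All names, paths, and config keys below are assumptions for illustration.
import yaml  # pip install pyyaml

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {
        "name": "local-hostpath-example",  # illustrative name
        "annotations": {
            "openebs.io/cas-type": "local",
            "cas.openebs.io/config": (
                "- name: StorageType\n"
                '  value: "hostpath"\n'
                "- name: BasePath\n"
                '  value: "/var/openebs/local"\n'  # illustrative base path
            ),
        },
    },
    "provisioner": "openebs.io/local",
    "volumeBindingMode": "WaitForFirstConsumer",
}

claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "local-hostpath-pvc"},  # illustrative name
    "spec": {
        "storageClassName": "local-hostpath-example",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "5Gi"}},
    },
}

# Emit both objects as one multi-document YAML stream for `kubectl apply -f -`.
print(yaml.safe_dump_all([storage_class, claim], sort_keys=False))
```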

0.8.2

Release Notes
- Fixed an issue causing cStor Volume Replica CRs to be stuck when the OpenEBS namespace was being deleted.
- Fixed an issue where a newly added cStor Volume Replica may not be successfully registered with the cStor target, if the cStor target tries to connect to the Replica before the replica is completely initialized.
- Fixed an issue with Jiva Volumes where the target could mark the Replica as timed out on IO, even when the Replica might actually be processing the sync IO.
- Fixed an issue with Jiva Volumes that would not allow Replicas to re-connect with the Target, if the initial registration failed to successfully process the hand-shake request.
- Fixed an issue with Jiva Volumes that would cause the Target to restart when a send-diagnostic command was received from the client.
- Fixed an issue causing a PVC to be stuck in the Pending state when there was more than one PVC associated with an Application Pod.
- Toleration policy support for cStorStoragePool.
0.8.1

Release Blog

Release Notes
- Ephemeral Disk Support
- Enhanced the placement of cStor volume replicas to be distributed randomly across the available pools.
- Enhanced the NDM to fetch additional details about the underlying disks via SeaChest.
- Enhanced the NDM to add additional information to the Disk CRs, such as whether the disk is partitioned or has a filesystem on it.
- Enhanced the OpenEBS CRDs to include custom columns to be displayed in the kubectl get output of the CRs. This feature requires Kubernetes 1.11 or higher.
- Fixed an issue where a cStor volume could cause the iSCSI discovery command to time out, potentially triggering a Kubernetes vulnerability that can bring down a node through high RAM usage.
0.8.0

Release Blog

Release Notes
- cStor Snapshot & Clone
- cStor volume & Pool runtime status
- Target Affinity for both Jiva & cStor
- Target namespace for cStor
- Enhanced the volume metrics exporter
- Enhanced Jiva to clean up internal snapshots taken during Replica rebuild
- Enhanced Jiva to support sync and unmap IOs
- Enhanced cStor to recreate a pool by automatically selecting the disks.
0.7.2

Release Notes
- Support for clearing the space used by a Jiva replica after the volume is deleted, using a Cron Job.
- Support for a storage policy that can disable the Jiva Volume Space reclaim.
- Support for Target Affinity to schedule the Jiva target Pod on the same node as the Application Pod.
- Enhanced Jiva's handling of the internal snapshots used for rebuilding.
- Enhanced exporting of cStor volume metrics to Prometheus.
0.7.0

Release Blog

Release Notes
- Enhanced NDM to discover block devices attached to Nodes.
- Alpha support for cStor Engine
- Naming convention of Jiva Storage pool as ‘default’ and StorageClass as ‘openebs-jiva-default’
- Naming convention of cStor Storage pool as ‘cstor-sparse-pool’ and StorageClass as ‘openebs-cstor-sparse’
- Support for specifying replica count, CPU/Memory limits per PV, choice of Storage Engine, and the Nodes on which data copies should be placed.
0.6.0

Release Blog

Release Notes
- Integrate the Volume Snapshot capabilities with Kubernetes Snapshot controller.
- Enhance maya-apiserver to use CAS Templates for orchestrating new Storage Engines.
- Enhance mayactl to show details about replicas and the Nodes where the replicas are running.
- Enhance maya-apiserver to schedule Replica Pods on specific nodes using nodeSelector.
- Enhance e2e tests to simulate chaos at different layers such as - CPU, RAM, Disk, Network, and Node.
- Support for deploying OpenEBS via Kubernetes stable Helm Charts.
- Enhanced Jiva volume to handle more read-only volume scenarios.
0.5.4

Release Notes
- Option to specify filesystems other than ext4 (the default).
- Support for the XFS filesystem format for a MongoDB StatefulSet using an OpenEBS Persistent Volume.
- Increased integration test & e2e coverage in the CI.
- OpenEBS is now available as a stable chart in the Kubernetes Helm charts repository.
0.5.3

Release Notes
- Fixed a StoragePool usage issue when RBAC settings are applied.
- Improved memory consumption of the Jiva Volume.
0.5.2

Release Notes
- Support for using non-SSL Kubernetes endpoints by specifying ENV variables on maya-apiserver and openebs-provisioner (see the sketch below).
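
A minimal sketch of the ENV entries referred to above, assuming the variable names OPENEBS_IO_K8S_MASTER and OPENEBS_IO_KUBE_CONFIG from the sample operator YAML of this era; the endpoint address and kubeconfig path are placeholders:

```python
# Minimal sketch: env entries that point maya-apiserver and openebs-provisioner
# at a non-SSL Kubernetes API endpoint. Variable names are assumptions from the
# sample operator YAML; the address and path are placeholders.
import yaml  # pip install pyyaml

env = [
    {"name": "OPENEBS_IO_K8S_MASTER", "value": "http://10.0.0.10:8080"},       # placeholder endpoint
    {"name": "OPENEBS_IO_KUBE_CONFIG", "value": "/home/ubuntu/.kube/config"},  # placeholder kubeconfig
]

# Paste the rendered entries under spec.template.spec.containers[].env of the
# maya-apiserver and openebs-provisioner Deployments.
print(yaml.safe_dump(env, sort_keys=False))
```
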
0.5.1

Release Notes
- Support for using Jiva volumes from a CentOS iSCSI Initiator.
- Support for launching openebs-k8s-provisioner in a non-default namespace.
0.5.0

Release Blog

Release Notes
- Enhanced Storage Policy Enforcement Framework for Jiva.
- Extend OpenEBS API Server to expose volume snapshot API.
- Support for deploying OpenEBS via helm charts.
- Sample Prometheus configuration for collecting OpenEBS Volume Metrics (a sketch follows this list).
- Sample Grafana OpenEBS Volume Dashboard using the Prometheus metrics.
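
In the spirit of the sample Prometheus configuration mentioned above, here is a minimal sketch of a scrape job for OpenEBS volume metrics. The pod label used for selection and the exporter port (9500) are assumptions; take the authoritative values from the sample configuration shipped with the release:

```python
# Minimal sketch: a Prometheus scrape job for OpenEBS volume metrics.
# The selection label and exporter port are assumptions for illustration.
import yaml  # pip install pyyaml

scrape_config = {
    "scrape_configs": [{
        "job_name": "openebs-volumes",
        "kubernetes_sd_configs": [{"role": "pod"}],
        "relabel_configs": [
            # Keep only pods carrying the (assumed) volume-exporter label.
            {
                "source_labels": ["__meta_kubernetes_pod_label_monitoring"],
                "regex": "volume_exporter_prometheus",
                "action": "keep",
            },
            # Point the scrape address at the (assumed) exporter port 9500.
            {
                "source_labels": ["__address__"],
                "regex": r"([^:]+)(?::\d+)?",
                "replacement": "${1}:9500",
                "target_label": "__address__",
                "action": "replace",
            },
        ],
    }]
}

print(yaml.safe_dump(scrape_config, sort_keys=False))
```
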
0.4.0

Release Blog

Release Notes
- Enhanced MAYA CLI support for managing snapshots and usage statistics.
- OpenEBS Maya API Server now uses the Kubernetes scheduler logic to place OpenEBS Volume Replicas on different nodes.
- Extended support for deploying OpenEBS in AWS.
- OpenEBS can now be deployed in a minikube setup.
- Enhanced openebs-k8s-provisioner to recover from the CrashLoopBackOff state.
0.3.0

Release Blog

Release Notes
- Support for running OpenEBS hyper-converged with Kubernetes Minion Nodes.
- OpenEBS can be enabled via the openebs-operator.yaml.
- Supports creation of OpenEBS volumes using Dynamic Provisioner.
- Storage functionality and Orchestration/Management functionality are delivered as container images on DockerHub.
0.2.0

Release Blog

Release Notes
- Integrated the OpenEBS FlexVolume Driver and dynamic provisioning of OpenEBS Volumes into Kubernetes.
- Maya API Server provides a new AWS EBS-like API for provisioning Block Storage.
- Enhanced Maya API Server to run hyper-converged with the Nomad Scheduler.
- Backup/Restore Data from Amazon S3.
- Node Failure Resiliency Fixes

See Also:

cStor Roadmap

OpenEBS FAQ

Container Attached Storage or CAS