ViPR and OpenStack Integration: How It Works

This blog provides a brief overview of ViPR and OpenStack, then discusses how they are integrated and which features are supported. Additionally, I will explain the setup required on the ViPR side and the steps required to configure the integration on the OpenStack compute host/Cinder volume node.

What is ViPR?
ViPR is a true software-defined storage platform from EMC that, for the first time, brings the management of multi-vendor storage environments into a single centralized management console. In its first release, ViPR 1.0 supports EMC VNX, EMC VMAX, EMC Isilon, EMC VPLEX and NetApp as a third-party array. Forthcoming releases will add support for other third-party platforms as well as commodity arrays; support for Hitachi Data Systems (HDS) arrays is planned for ViPR 2.0. The unique value proposition of ViPR is that it decouples the storage data path from the control path. This means ViPR sits on the control path without interfering in data I/O, so the underlying array features can be used without performance degradation. It is a solution built to enable the Software-Defined Data Center (SDDC) by creating the software-defined storage abstraction layer.

What is OpenStack?
OpenStack is a cloud operating system that combines three software-defined abstraction layers: one for compute resources, one for networking resources and one for storage resources. It does this by pooling compute, networking and storage resources across the data center. OpenStack manages these as a set of projects: Nova for compute, Neutron for networking, Swift for object storage and Cinder for block storage. All of these are controlled and managed through a centralized console called the dashboard (Horizon). As the name suggests, OpenStack is a stack of compute, network and storage services that is truly open for any vendor to plug their own solution into whichever area they choose.

ViPR and OpenStack Integration
ViPR integrates with OpenStack through a Cinder plugin driver, starting with the Havana release, enabling block storage provisioning for OpenStack users. This empowers OpenStack users to consume the block storage capabilities of all the heterogeneous storage managed by ViPR.

OpenStack Cinder drivers supported

1. iSCSI support (iSCSI driver)
2. Fibre Channel support (FC driver)

Supported Driver Features
As of ViPR 1.0, the following features are supported:

1. Create volume
2. Delete volume
3. Attach volume
4. Detach volume
5. Create snapshot
6. Delete snapshot
7. Get Volume Stats
8. Copy image to volume
9. Copy volume to image
10. Clone volume

The OpenStack Havana release mandates all of the above features plus ‘create volume from snapshot’ as the minimum set of features to be supported.
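Once the driver is configured (the setup steps follow below), most of these features map directly onto standard Cinder client commands. The commands below are a minimal sketch: the volume names, sizes and IDs are placeholders, and the --os-* credential flags shown later in this post are omitted for brevity.

cinder create --display-name demo-vol 1                                      # Create volume (1 GB)
cinder snapshot-create --display-name demo-snap <volume-id>                  # Create snapshot
cinder upload-to-image <volume-id> demo-image                                # Copy volume to image
cinder create --image-id <glance-image-id> --display-name vol-from-image 1   # Copy image to volume
cinder delete <volume-id>                                                    # Delete volume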

Setup requirements from ViPR
OpenStack integration has been tested with the ViPR 1.0 release, so a licensed ViPR 1.0 instance with the following setup is required. Please refer to the ViPR installation guide to bring up the ViPR instance.

1. A tenant user with tenant administrator or project administrator privileges.
2. At least one physical storage system registered.
3. One storage virtual array. Let’s call it “Demo_Varray”.
4. One virtual storage pool. Let’s call it “Demo_Vpool”. While creating the virtual storage pool, the protocol type should be specified based on whether you are creating it for the iSCSI driver or the FC driver.
5. One virtual network to provide connectivity between the storage array and the host. Let’s call it “Demo_Network”. After the network is created, add all accessible storage ports to this network. Port addition is needed only for the iSCSI driver; it is not required for FC.
6. For the FC driver, ensure that the OpenStack host is attached to a VSAN discovered by ViPR.
7. A project in which all resources will be created; this will be associated with the admin project of OpenStack. Let’s call the project “Demo_Project”.

Installation and Configuration steps for OpenStack Cinder driver
The following steps need to be performed after the OpenStack environment is brought up. Please refer to www.openstack.org for installation instructions.

1. The EMC ViPR Cinder driver has been open sourced on GitHub. The first step is to download the driver from https://github.com/emcvipr/controller-openstack-cinder/releases/tag/v1.0.
2. Untar/unzip the downloaded file and copy the vipr subdirectory to the /opt/stack/cinder/cinder/volume/drivers/emc directory of the OpenStack node(s) where cinder-volume is running. This directory is where all Cinder drivers are located. A minimal sketch of these two steps is shown below.
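Assuming the release tarball can be fetched directly from the GitHub tag above (the archive URL and the directory layout inside it are assumptions and may differ slightly), the download and copy could look like this:

cd /tmp
wget https://github.com/emcvipr/controller-openstack-cinder/archive/v1.0.tar.gz
tar -xzf v1.0.tar.gz
# Path inside the extracted archive is an assumption; adjust to wherever the vipr directory lands
sudo cp -r controller-openstack-cinder-1.0/cinder/volume/drivers/emc/vipr /opt/stack/cinder/cinder/volume/drivers/emc/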
3. Modify the /etc/cinder/cinder.conf file
a. Add the following entries in the [DEFAULT] section of the file if you are planning to use either the iSCSI or the FC driver.
i. For iSCSI driver
volume_driver=cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
vipr_hostname=<ViPR Host Name>
vipr_port=4443
vipr_username=root
vipr_password=ChangeMe
vipr_tenant=Provider Tenant
vipr_project=Demo_Project
vipr_varray=Demo_Varray
ii. For FC driver
volume_driver=cinder.volume.drivers.emc.vipr.emc_vipr_fc.EMCViPRFCDriver
vipr_hostname=<ViPR Host Name>
vipr_port=4443
vipr_username=root
vipr_password=ChangeMe
vipr_tenant=Provider Tenant
vipr_project=Demo_Project
vipr_varray=Demo_Varray
b. Add/modify the following entries if you are planning to use multiple backend drivers.
i. The “enabled_backends” parameter needs to be set in cinder.conf and other parameters required in each backend need to be placed in individual backend sections (rather than the DEFAULT section).
ii. “enabled_backends” is commented out by default; uncomment it and add the backend names as shown below.
enabled_backends=viprdriver-iscsi,viprdriver-fc
iii. Add the following at the end of the file; note that each section is named as listed in step b.ii.
[viprdriver-iscsi]
volume_driver=cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
volume_backend_name=EMCViPRISCSIDriver
vipr_hostname=<ViPR Host Name>
vipr_port=4443
vipr_username=root
vipr_password=ChangeMe
vipr_tenant=Provider Tenant
vipr_project=Demo_Project
vipr_varray=Demo_Varray_iSCSI
[viprdriver-fc]
volume_driver=cinder.volume.drivers.emc.vipr.emc_vipr_fc.EMCViPRFCDriver
volume_backend_name=EMCViPRFCDriver
vipr_hostname=<ViPR Host Name>
vipr_port=4443
vipr_username=root
vipr_password=ChangeMe
vipr_tenant=Provider Tenant
vipr_project=Demo_Project
vipr_varray=Demo_Varray_FC

4. Restart the cinder-volume service after the above changes to the cinder.conf file. Follow the steps below to restart it; these assume a DevStack-style setup where the service runs in a screen session (an alternative for packaged installations is sketched after these steps).
a. Attach to the screen session by running “screen -r”.
b. If the session doesn’t show up, run “script /dev/null” and then repeat step (a).
c. Once attached, press Ctrl+A followed by the double-quote key (") to list the running windows.
d. From the window list that pops up, select the c-vol window (for example, 17$ c-vol) and press Enter.
e. Press Ctrl+C to stop the cinder-volume service.
f. Use the up arrow key to recall the cinder-volume command and run it again to restart the service.
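Outside of a DevStack screen session, for example on a packaged installation where cinder-volume runs as a system service, the restart and a quick sanity check might look like the commands below. The service name depends on your distribution, and cinder service-list (an admin-only command in recent Cinder clients) should show the volume service, with one entry per configured backend, in an “up” state.

sudo service cinder-volume restart
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://lglw9119.lss.emc.com:35357/v2.0 service-list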

5. Create volume type(s) in OpenStack, either using the Horizon dashboard or the Cinder command-line client. If you are using the UI, log in to the dashboard, select the Admin tab, go to the ‘Volumes’ section and create the volume type. Otherwise, use the cinder command to create it:
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://lglw9119.lss.emc.com:35357/v2.0 type-create vipr-volume-type

6. Map the volume type created above to the ViPR virtual storage pool.

a. If you have configured a single driver, issue the following command.
cinder --os-username admin --os-password password --os-tenant-name admin --os-auth-url=http://lglw9119.lss.emc.com:35357/v2.0 type-key vipr-volume-type set ViPR:VPOOL=Demo_Vpool
b. If you have configured multiple backend drivers, then while setting up the volume types, also set up the volume-type-to-backend association in addition to the volume-type-to-vpool mapping. Below is an example:
cinder --os-username admin --os-tenant-name admin type-create "ViPR High Performance"
cinder --os-username admin --os-tenant-name admin type-key "ViPR High Performance" set ViPR:VPOOL="High Performance"
cinder --os-username admin --os-tenant-name admin type-key "ViPR High Performance" set volume_backend_name=EMCViPRISCSIDriver
cinder --os-username admin --os-tenant-name admin type-create "ViPR High Performance FC"
cinder --os-username admin --os-tenant-name admin type-key "ViPR High Performance FC" set ViPR:VPOOL="High Performance FC"
cinder --os-username admin --os-tenant-name admin type-key "ViPR High Performance FC" set volume_backend_name=EMCViPRFCDriver
cinder --os-username admin --os-tenant-name admin extra-specs-list
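Once the volume types are mapped, creating a volume against a particular backend is simply a matter of passing the type. The example below is a minimal sketch; the volume name and size are placeholders, and the same credential flags (or OS_* environment variables) as above apply.

cinder --os-username admin --os-tenant-name admin create --volume-type "ViPR High Performance" --display-name demo-iscsi-vol 1
cinder --os-username admin --os-tenant-name admin list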

With this, you are all set to perform storage operations from OpenStack’s Horizon UI or the Cinder client. You may want to create volumes and snapshots and attach these volumes to OpenStack VM instances. To attach volumes to OpenStack VM instances, the OpenStack compute host must first be added/registered in ViPR. You can add the host using the ViPR UI console or by running the following CLI commands from the cli folder in the Cinder driver location, i.e. /opt/stack/cinder/cinder/volume/drivers/emc/vipr/cli (an example of attaching a volume once the host is registered follows the list below):

• Change directory to /opt/stack/cinder/cinder/volume/drivers/emc/vipr/cli.
• Authenticate first if required: python viprcli.py authenticate -u <ViPR user> -d /usr/cookiedir.
• python viprcli.py openstack add_host (to add the current host, e.g. devcon.emc.com).
• python viprcli.py openstack add_host -name <hostname> -wwpn <initiator port> (to add a specific host/initiator).
• python viprcli.py openstack add_host -network <network name> -varray <varray> (to add devcon.emc.com to a specific network in a varray).
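Once the compute host is registered in ViPR, attaching and detaching a ViPR-backed volume to an instance uses the standard Nova client; the server and volume IDs below are placeholders, and the device name may be ignored by some hypervisors.

nova volume-attach <server-id> <volume-id> /dev/vdb
nova volume-detach <server-id> <volume-id>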

If you have any questions about any of the above steps, please join the conversation happening in the ViPR Community.

About the Author: Parashurham Hallur