OSPRH-2949

BZ#1889294 storwize: Unable to use DRP (data reduction pool) functionality when creating volumes using this pool



      Description of problem:
      Unable to make use of the DRP (data reduction pool) feature when creating volumes using this pool.

      Version-Release number of selected component (if applicable):
      RHOSP 13
      IBM Storwize 8.1.3.6

      How reproducible:
      Always

      Steps to Reproduce:
      cinder.conf backend configuration:

      [premium]
      image_volume_cache_enabled=True
      image_volume_cache_max_count=50
      image_volume_cache_max_size_gb=1000
      san_ip=172.16.30.50
      san_login=admin
      san_private_key=/etc/cinder/ibm.key
      storwize_svc_flashcopy_rate=80
      storwize_svc_volpool_name=DRP
      storwize_svc_vol_warning = 0
      volume_backend_name=premium
      volume_driver=cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
      backend_host=hostgroup
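
      With this backend configured, the failure can be reproduced by creating a volume of the matching type, for example (the volume name and size below are only illustrative):

      (overcloud) [stack@aev-20001 ~]$ openstack volume create --type premium --size 20 drp-test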

      Actual results:

      2020-10-19 12:12:16.973 38 ERROR oslo_messaging.rpc.server command: ['svctask', 'mkvdisk', '-name', u'"volume-310a0676-9d79-43eb-a148-b3da7ebe0a36"', '-mdiskgrp', u'"DRP"', '-iogrp', u'0', '-size', '20', '-unit', 'gb', '-rsize', '2%', '-autoexpand', '-warning', '0%', '-grainsize', '256', '-easytier', 'on']
      2020-10-19 12:12:16.973 38 ERROR oslo_messaging.rpc.server stdout:
      2020-10-19 12:12:16.973 38 ERROR oslo_messaging.rpc.server stderr: CMMVC9236E The pool specified is a data reduction pool. Volumes or volume copies which are thin provisioned and created from a data reduction pool can not use the -warning parameter.

      Expected results:

      The volume is created successfully.

      Additional info:

      The customer has three backends on the same SAN, with two different pool configurations:

      [ibm]
      image_volume_cache_enabled=True
      image_volume_cache_max_count=50
      image_volume_cache_max_size_gb=1000
      san_ip=172.16.30.50
      san_login=admin
      san_private_key=/etc/cinder/ibm.key
      storwize_svc_flashcopy_rate=80
      storwize_svc_volpool_name=Pool0
      volume_backend_name=ibm
      volume_driver=cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
      backend_host=hostgroup
      rpc_response_timeout = 1800

      [slow]
      san_ip=172.16.30.50
      san_login=admin
      san_private_key=/etc/cinder/ibm.key
      storwize_svc_volpool_name=Standard
      storwize_svc_vol_compression = False
      storwize_svc_vol_easytier = False
      volume_backend_name=slow
      volume_driver=cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
      backend_host=hostgroup

      [premium]
      image_volume_cache_enabled=True
      image_volume_cache_max_count=50
      image_volume_cache_max_size_gb=1000
      san_ip=172.16.30.50
      san_login=admin
      san_private_key=/etc/cinder/ibm.key
      storwize_svc_flashcopy_rate=80
      storwize_svc_volpool_name=DRP
      storwize_svc_vol_warning = 0
      volume_backend_name=premium
      volume_driver=cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
      backend_host=hostgroup
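
      For reference, the state of the three cinder-volume backends can be checked from the overcloud with, for example:

      (overcloud) [stack@aev-20001 ~]$ openstack volume service list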

      (overcloud) [stack@aev-20001 ~]$ openstack volume type show 7149b54e-a8aa-466e-aefd-b0fdcd061adb
      +--------------------+--------------------------------------+
      | Field              | Value                                |
      +--------------------+--------------------------------------+
      | access_project_ids | None                                 |
      | description        | None                                 |
      | id                 | 7149b54e-a8aa-466e-aefd-b0fdcd061adb |
      | is_public          | True                                 |
      | name               | ibm                                  |
      | properties         | volume_backend_name='ibm'            |
      | qos_specs_id       | None                                 |
      +--------------------+--------------------------------------+

      (overcloud) [stack@aev-20001 ~]$ openstack volume type show e2997709-e151-412f-bac1-4f3172b819e2
      +--------------------+--------------------------------------+
      | Field              | Value                                |
      +--------------------+--------------------------------------+
      | access_project_ids | None                                 |
      | description        | None                                 |
      | id                 | e2997709-e151-412f-bac1-4f3172b819e2 |
      | is_public          | True                                 |
      | name               | premium                              |
      | properties         | volume_backend_name='premium'        |
      | qos_specs_id       | None                                 |
      +--------------------+--------------------------------------+

      (overcloud) [stack@aev-20001 ~]$ openstack volume type show 0d1e9f6f-f83c-4b1a-a979-ca39449cb21d
      +--------------------+--------------------------------------+
      | Field              | Value                                |
      +--------------------+--------------------------------------+
      | access_project_ids | None                                 |
      | description        | None                                 |
      | id                 | 0d1e9f6f-f83c-4b1a-a979-ca39449cb21d |
      | is_public          | True                                 |
      | name               | slow                                 |
      | properties         | volume_backend_name='slow'           |
      | qos_specs_id       | None                                 |
      +--------------------+--------------------------------------+

      When a volume is created, OpenStack passes a command to the SAN that mainly involves the arguments shown in the driver logs below.

      These logs show the SSH command being run against the Standard pool, which works:
      2020-10-19 13:11:47.721 39 DEBUG oslo_concurrency.processutils [req-4a2f7797-8e49-470b-8d96-bd9f641ccf71 60ecc8e7c65343b5a46a658eb7727a95 90810eab1c3e4adfb050aeb2260fc89b - - -] Running cmd (SSH): svctask mkvdisk -name "volume-cda2e330-cdd3-4c4a-821a-018c1e780f6e" -mdiskgrp "Standard" -iogrp 0 -size 20 -unit gb -rsize 2% -autoexpand -warning 0% -grainsize 256 -easytier off ssh_execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:525

      But when creating from the premium volume type (SAN pool DRP), the following error is observed:

      2020-10-19 12:12:16.973 38 ERROR oslo_messaging.rpc.server command: ['svctask', 'mkvdisk', '-name', u'"volume-310a0676-9d79-43eb-a148-b3da7ebe0a36"', '-mdiskgrp', u'"DRP"', '-iogrp', u'0', '-size', '20', '-unit', 'gb', '-rsize', '2%', '-autoexpand', '-warning', '0%', '-grainsize', '256', '-easytier', 'on']
      2020-10-19 12:12:16.973 38 ERROR oslo_messaging.rpc.server stdout:
      2020-10-19 12:12:16.973 38 ERROR oslo_messaging.rpc.server stderr: CMMVC9236E The pool specified is a data reduction pool. Volumes or volume copies which are thin provisioned and created from a data reduction pool can not use the -warning parameter.

      This is a standard IBM error.
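
      The pool's data reduction attribute can also be confirmed directly on the array; on 8.x firmware, lsmdiskgrp is expected to report it (shown here only as an illustration):

      IBM_Storwize:Cluster_172.16.30.5:superuser>lsmdiskgrp DRP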

      We asked the customer whether the operation could be run manually with the (-warning) flag removed.

      Removing the -warning argument gets past that error, but a different one is returned:
      IBM_Storwize:Cluster_172.16.30.5:superuser>mkvdisk -name "volume-test-manual-cli" -mdiskgrp "DRP" -iogrp 0 -size 20 -unit gb -rsize 2% -autoexpand -compressed -easytier off
      CMMVC9247E The pool specified is a data reduction pool. Thin or compressed volumes or volume copies in data reduction pools cannot have the easytier status set or changed.

      However, it seems we also cannot specify the -easytier argument. Removing that as well allows the volume to be created successfully:

      IBM_Storwize:Cluster_172.16.30.5:superuser>mkvdisk -name "volume-test-manual-cli" -mdiskgrp "DRP" -iogrp 0 -size 20 -unit gb -rsize 2% -autoexpand -compressed
      Virtual Disk, id [455], successfully created
      IBM_Storwize:Cluster_172.16.30.5:superuser>
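
      The manually created test volume can then be removed again, for example:

      IBM_Storwize:Cluster_172.16.30.5:superuser>rmvdisk volume-test-manual-cli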

      Is there any way that the (-warning) and (-easytier) flags can be explicitly removed from the OpenStack request?

      I have found the upstream documentation, which describes how to override the parameters specified in the configuration:
      https://docs.openstack.org/mitaka/config-reference/block-storage/drivers/ibm-storwize-svc-driver.html

      We also asked the customer to change the volume type properties:

      openstack volume type set --property capabilities:warning='<is> False' --property drivers:warning=0 --property volume_backend_name=premium premium

      But the same behavior is observed: when the command is passed to the SAN, the (-warning) and (-easytier) flags are always included.

      We are looking for a way to bypass these flags/this condition.
