Red Hat OpenStack Services on OpenShift / OSPRH-3888

BZ#1854817 Access rule is not successfully applied to share when share service is down


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Normal
    • Component: openstack-manila
    • Severity: Moderate
    • Storage; Manila

      Description of problem:
      When an access rule is applied to a share while the manila-share service is down,
      Pacemaker restarts the manila-share service as expected; however, the access rule
      remains stuck in "queued_to_apply" status rather than moving to "active" status.
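
      For reference, the manila-share container on the controller is managed by Pacemaker in
      this environment; a minimal way to confirm whether it was restarted (a sketch, not taken
      from this report) is:

      [root@controller-0 ~]# pcs status | grep -i manila
      [root@controller-0 ~]# docker ps | grep openstack-manila-share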

      Version-Release number of selected component (if applicable):
      python3-manilaclient-1.29.0-0.20200310223441.1b2cafb.el8ost.noarch
      puppet-manila-15.4.1-0.20200403160104.e41b1b6.el8ost.noarch

      How reproducible:
      100%

      Steps to Reproduce:
      1. Create a Manila share
      2. Disable the Manila share service
      3. Issue an access_allow API request granting access to the share
      4. Wait for the service to be up
      5. Verify that the access rule was successfully applied (see the verification sketch below)
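
      A minimal shell sketch of step 5 (assumption: the overcloud credentials are sourced and
      the share ID is the one from the console log below); it polls "manila access-list" until
      the rule leaves "queued_to_apply" or gives up after about 5 minutes:

      SHARE_ID=3bcd1d57-b292-4ea7-a94d-9606cc6bbf57   # share created in step 1
      for i in $(seq 1 30); do
          # Stop polling as soon as no rule on the share is still queued_to_apply
          if ! manila access-list "$SHARE_ID" | grep -q queued_to_apply; then
              echo "access rule applied"
              break
          fi
          echo "attempt $i: rule still queued_to_apply, retrying in 10s"
          sleep 10
      done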

      (.venv) (overcloud) [stack@undercloud-0 tempest]$ manila create nfs 1
      +---------------------------------------+--------------------------------------+
      | Property                              | Value                                |
      +---------------------------------------+--------------------------------------+
      | status                                | creating                             |
      | share_type_name                       | default                              |
      | description                           | None                                 |
      | availability_zone                     | None                                 |
      | share_network_id                      | None                                 |
      | share_server_id                       | None                                 |
      | share_group_id                        | None                                 |
      | host                                  |                                      |
      | revert_to_snapshot_support            | False                                |
      | access_rules_status                   | active                               |
      | snapshot_id                           | None                                 |
      | create_share_from_snapshot_support    | False                                |
      | is_public                             | False                                |
      | task_state                            | None                                 |
      | snapshot_support                      | False                                |
      | id                                    | 3bcd1d57-b292-4ea7-a94d-9606cc6bbf57 |
      | size                                  | 1                                    |
      | source_share_group_snapshot_member_id | None                                 |
      | user_id                               | 3baacabfe24d4998acfc47343305eb8b     |
      | name                                  | None                                 |
      | share_type                            | 7ee19d94-d3ca-4f42-bd5a-870eff5b36d8 |
      | has_replicas                          | False                                |
      | replication_type                      | None                                 |
      | created_at                            | 2020-07-07T04:32:43.000000           |
      | share_proto                           | NFS                                  |
      | mount_snapshot_support                | False                                |
      | project_id                            | 8de8911d8b644843bff7e5661706e7bc     |
      | metadata                              | {}                                   |
      +---------------------------------------+--------------------------------------+

      (.venv) (overcloud) [stack@undercloud-0 tempest]$ manila list
      +--------------------------------------+------+------+-------------+-----------+-----------+-----------------+-------------------------+-------------------+
      | ID                                   | Name | Size | Share Proto | Status    | Is Public | Share Type Name | Host                    | Availability Zone |
      +--------------------------------------+------+------+-------------+-----------+-----------+-----------------+-------------------------+-------------------+
      | 3bcd1d57-b292-4ea7-a94d-9606cc6bbf57 | None | 1    | NFS         | available | False     | default         | hostgroup@cephfs#cephfs | nova              |
      +--------------------------------------+------+------+-------------+-----------+-----------+-----------------+-------------------------+-------------------+

      [root@controller-0 ~]# docker stop openstack-manila-share-docker-0
      openstack-manila-share-docker-0

      [root@controller-0 ~]# docker ps | grep openstack-manila-share-docker-0

      (.venv) (overcloud) [stack@undercloud-0 tempest]$ manila access-allow 3bcd1d57-b292-4ea7-a94d-9606cc6bbf57 ip 0.0.0.0
      +--------------+--------------------------------------+
      | Property     | Value                                |
      +--------------+--------------------------------------+
      | access_key   | None                                 |
      | share_id     | 3bcd1d57-b292-4ea7-a94d-9606cc6bbf57 |
      | created_at   | 2020-07-07T04:35:07.000000           |
      | updated_at   | None                                 |
      | access_type  | ip                                   |
      | access_to    | 0.0.0.0                              |
      | access_level | rw                                   |
      | state        | queued_to_apply                      |
      | id           | 18e3d3ed-ea67-44dc-a81f-2dee24aea95e |
      +--------------+--------------------------------------+

      (.venv) (overcloud) [stack@undercloud-0 tempest]$ manila access-list 3bcd1d57-b292-4ea7-a94d-9606cc6bbf57
      +--------------------------------------+-------------+-----------+--------------+-----------------+------------+----------------------------+------------+
      | id                                   | access_type | access_to | access_level | state           | access_key | created_at                 | updated_at |
      +--------------------------------------+-------------+-----------+--------------+-----------------+------------+----------------------------+------------+
      | 18e3d3ed-ea67-44dc-a81f-2dee24aea95e | ip          | 0.0.0.0   | rw           | queued_to_apply | None       | 2020-07-07T04:35:07.000000 | None       |
      +--------------------------------------+-------------+-----------+--------------+-----------------+------------+----------------------------+------------+

      [root@controller-0 ~]# docker ps | grep openstack-manila-share-docker-0
      bd3610718ddf 192.168.24.1:8787/rh-osbs/rhosp13-openstack-manila-share:pcmklatest "dumb-init --singl..." 37 seconds ago Up 37 seconds openstack-manila-share-docker-0

      (.venv) (overcloud) [stack@undercloud-0 tempest]$ manila access-list 3bcd1d57-b292-4ea7-a94d-9606cc6bbf57
      +--------------------------------------+-------------+-----------+--------------+-----------------+------------+----------------------------+------------+
      | id                                   | access_type | access_to | access_level | state           | access_key | created_at                 | updated_at |
      +--------------------------------------+-------------+-----------+--------------+-----------------+------------+----------------------------+------------+
      | 18e3d3ed-ea67-44dc-a81f-2dee24aea95e | ip          | 0.0.0.0   | rw           | queued_to_apply | None       | 2020-07-07T04:35:07.000000 | None       |
      +--------------------------------------+-------------+-----------+--------------+-----------------+------------+----------------------------+------------+

      Actual results:
      Access rule remains stuck in "queued_to_apply" status even after the manila-share service is back up.
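
      A possible place to look while debugging is the manila-share log on the controller
      (the path is an assumption based on the standard containerized log location, not taken
      from this report):

      [root@controller-0 ~]# grep -i access /var/log/containers/manila/manila-share.log | tail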

      Expected results:
      Access rule should be applied to the share.
      Access rule status should move from "queued_to_apply" to "active".
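
      As a complementary check once the rule is applied, the share-level access_rules_status
      field (visible in the manila create output above) can be queried; a sketch using the
      share ID from the reproduction:

      (.venv) (overcloud) [stack@undercloud-0 tempest]$ manila show 3bcd1d57-b292-4ea7-a94d-9606cc6bbf57 | grep access_rules_status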
