OpenShift Bugs / OCPBUGS-33799

SR-IOV: Pings Failing Between Pods Due to Duplicate MAC Addresses on VFs


    • Critical
    • Release Note Not Required
    • In Progress
    • 5/20 - Upstream fix being discussed in the upstream community. The bug occurs only in 4.16 and needs to be fixed prior to GA. Color: RED

      Description of problem:
      Two pods were created, each with a VF allocated from the same PF, but pings between them fail.
      Inspecting the PF shows that several VFs have been assigned the same MAC address.

      Version-Release number of selected component (if applicable):

      4.16.0-rc.1

      How reproducible:

      100%

      Steps to Reproduce:
      1. Create an SR-IOV configuration that provisions 2 VFs on the same PF (see the sketch after these steps)
      2. Create 2 pods, each attached to one of the VFs
      3. Run a ping between the two pods
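      For reference, a minimal reproducer could look roughly like the sketch below. It assumes the SR-IOV Network Operator is installed; the PF name ens5f0 is taken from the output under Additional info, while the policy, network, and resource names, the pod namespace, and the static IP addresses are illustrative assumptions rather than values from this report.

# Sketch only: provision 2 VFs on PF ens5f0 and expose them as a secondary network.
# Names, namespace, and IPs below are illustrative assumptions.
oc apply -f - <<'EOF'
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-ens5f0
  namespace: openshift-sriov-network-operator
spec:
  resourceName: ens5f0vfs
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 2
  nicSelector:
    pfNames: ["ens5f0"]
  deviceType: netdevice
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-net
  namespace: openshift-sriov-network-operator
spec:
  resourceName: ens5f0vfs
  networkNamespace: default
  ipam: |
    {"type": "static"}
EOF

# Each test pod attaches one VF through the Multus network annotation, e.g.
#   k8s.v1.cni.cncf.io/networks: '[{"name": "sriov-net", "ips": ["192.168.100.1/24"]}]'
# (192.168.100.2/24 on the second pod), after which ping is run from one pod to the other.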
      Actual results:

      Ping between the pods fails with 100% packet loss (see output under Additional info).

      Expected results:

      Ping between the two pods succeeds.

      Additional info:
      6: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
          link/ether 50:7c:6f:4a:fd:aa brd ff:ff:ff:ff:ff:ff
          vf 0     link/ether 20:04:0f:f1:88:01 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
          vf 1     link/ether 20:04:0f:f1:88:03 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
          vf 2     link/ether 20:04:0f:f1:88:03 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
          vf 3     link/ether 20:04:0f:f1:88:01 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
       
       
      7: ens5f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
          link/ether 50:7c:6f:4a:fd:ab brd ff:ff:ff:ff:ff:ff
          vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
          vf 1     link/ether 20:04:0f:f1:88:01 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
       

      pod1:
      579: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 20:04:0f:f1:88:03 brd ff:ff:ff:ff:ff:ff
          altname enp134s0f0v1
          inet 192.168.100.1/24 brd 192.168.100.255 scope global net1
             valid_lft forever preferred_lft forever
          inet6 fe80::2204:fff:fef1:8803/64 scope link
             valid_lft forever preferred_lft forever
       
      pod2:
      554: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 20:04:0f:f1:88:01 brd ff:ff:ff:ff:ff:ff
          altname enp134s0f0v0
          inet 192.168.100.2/24 brd 192.168.100.255 scope global net1
             valid_lft forever preferred_lft forever
          inet6 fe80::2204:fff:fef1:8801/64 scope link
             valid_lft forever preferred_lft forever
       
      ping:
      sh-4.4$ ping 192.168.100.1
      PING 192.168.100.1 (192.168.100.1) 56(84) bytes of data.
      ^C
      --- 192.168.100.1 ping statistics ---
      3 packets transmitted, 0 received, 100% packet loss, time 2048ms
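      To confirm the duplicate assignment on the node hosting the PF (and, assuming the driver honours administratively set MACs, to work around it temporarily), something along these lines could be run; the VF index and replacement MAC below are illustrative, not taken from this report:

# Print any MAC address that appears more than once among the PF and its VFs
ip link show ens5f0 | grep -o 'link/ether [0-9a-f:]*' | sort | uniq -d

# Possible temporary workaround: set a unique administrative MAC on an affected VF
# (VF index 2 and the MAC value are examples only)
ip link set dev ens5f0 vf 2 mac 20:04:0f:f1:88:05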
       
