JBoss Enterprise Application Platform 4 and 5 / JBPAPP-7839

Crashing one node in a cluster stops a client connected to another node of the cluster


Details

    • Type: Bug
    • Resolution: Won't Do
    • Priority: Major
    • EAP_EWP 5.1.2
    • EAP_EWP 5.1.2 CR3
    • Component/s: HornetQ
    • Labels: None
    • NEW

    Description

      Hi,
      we have found suspicious behavior of a HornetQ cluster. We have the following scenario with the default production profile (BLOCK address-full policy). We have 3 servers (A, B, C); on each server there is a queue and a simple MDB that reads messages from the queue and writes information to standard output. After startup the HornetQ servers form a cluster. The client starts to send messages into the queue and we can see output from the MDB on each server; that is correct behavior. If we shut down server B cleanly (Ctrl+C), server B stops and servers A and C continue their work; that is also correct behavior.
      The problem occurs when we kill server B with kill -9. The MDBs on A and C stop, and the client stops as well (because the queue fills up under the BLOCK policy). The MDBs on A and C should not stop. We think this is incorrect behavior, because you lose the whole cluster when a single node crashes. If I recall correctly, this configuration is used by our customers. It is not an HA scenario, but we believe the current behavior is not correct.
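
      For illustration, a minimal sketch of the kind of MDB used in this scenario (an EJB3 message-driven bean deployed on each node). The bean name and the queue name queue/testQueue here are only placeholders, not the exact deployment:

      import javax.ejb.ActivationConfigProperty;
      import javax.ejb.MessageDriven;
      import javax.jms.Message;
      import javax.jms.MessageListener;
      import javax.jms.TextMessage;

      @MessageDriven(activationConfig = {
          @ActivationConfigProperty(propertyName = "destinationType",
                                    propertyValue = "javax.jms.Queue"),
          @ActivationConfigProperty(propertyName = "destination",
                                    propertyValue = "queue/testQueue")
      })
      public class LoggingMDB implements MessageListener {

          // Reads each message from the queue and writes a short note to standard output.
          public void onMessage(Message message) {
              try {
                  if (message instanceof TextMessage) {
                      System.out.println("Received: " + ((TextMessage) message).getText());
                  } else {
                      System.out.println("Received: " + message.getJMSMessageID());
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      }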

      Server A (MDB) <- Client, sends messages into queue

      Server B (MDB)

      Server C (MDB)

      1. Start servers
      2. Start the client (see the client sketch after this list)
      3. Kill server B with kill -9
      4. All MDBs are stopped (but should not be)
      5. If server B is restarted, A and C continue their work
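
      For illustration, a minimal sketch of the standalone JMS client; the JNDI names (ConnectionFactory, queue/testQueue) and the provider URL are only placeholders. With the BLOCK address-full policy, producer.send() blocks once the queue fills up, which is how the client ends up stopped in step 4:

      import java.util.Properties;
      import javax.jms.Connection;
      import javax.jms.ConnectionFactory;
      import javax.jms.MessageProducer;
      import javax.jms.Queue;
      import javax.jms.Session;
      import javax.naming.InitialContext;

      public class BlockingClient {
          public static void main(String[] args) throws Exception {
              Properties env = new Properties();
              env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
              env.put("java.naming.provider.url", "jnp://serverA:1099"); // placeholder host/port

              InitialContext ctx = new InitialContext(env);
              ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
              Queue queue = (Queue) ctx.lookup("queue/testQueue");

              Connection connection = cf.createConnection();
              try {
                  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  MessageProducer producer = session.createProducer(queue);
                  for (int i = 0; ; i++) {
                      // With the BLOCK policy this call hangs as soon as the queue
                      // is full and no MDB is draining it.
                      producer.send(session.createTextMessage("message " + i));
                  }
              } finally {
                  connection.close();
              }
          }
      }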


          People

            Assignee: Clebert Suconic (csuconic@redhat.com)
            Reporter: Pavel Slavicek (pslavice@redhat.com)
            Votes: 0
            Watchers: 3
