Details
Type: Bug
Resolution: Done
Priority: Major
Fix Version: JBossAS-3.2.6 Final
Component: None
Description
SourceForge Submitter: mdaleiden.
I have two machines clustered using JBoss 3.2.2RC1.
One is Windoze 2000 and one is HP/UX. Clustering and
farming are working fine (nodes see each other, the
partition manager reports the existence of both
members, and modules are correctly farmed between the
two nodes).
My problem is that the HA-JNDI does not appear to be
working as expected. The following code creates the
initial context for the HA-JNDI:
Properties jndiProps = new Properties();
jndiProps.put(Context.INITIAL_CONTEXT_FACTORY,
    "org.jnp.interfaces.NamingContextFactory");
jndiProps.put(Context.URL_PKG_PREFIXES,
    "org.jboss.naming:org.jnp.interfaces");
jndiProps.put(Context.PROVIDER_URL, "");
jndiProps.put("jnp.partitionName", "DefaultPartition");
Then I have code that binds new objects into the HA-JNDI tree during
server startup (custom MBeans perform the binding), using the node
name as part of the name for the object:
String hostName = InetAddress.getLocalHost().getHostName();
String key = "jmx/device/" + hostName + "/xxxxxx";
Util.rebind(jndiContext, key, obj);
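To make that startup path concrete, here is a rough sketch of such an MBean. The class name and bound object are illustrative, the *MBean interface normally required for deployment is omitted, and HaJndiContextFactory is the illustrative helper sketched above:

import java.net.InetAddress;
import javax.naming.Context;
import org.jboss.naming.Util;
import org.jboss.system.ServiceMBeanSupport;

// Sketch only; names other than the JBoss/JNDI classes are illustrative.
public class DeviceBinderService extends ServiceMBeanSupport {

    // Stands in for whatever object the application actually binds.
    private final Object boundObject = "device-stub";

    protected void startService() throws Exception {
        // HA-JNDI context built from the properties shown earlier.
        Context jndiContext = HaJndiContextFactory.create();
        String hostName = InetAddress.getLocalHost().getHostName();
        String key = "jmx/device/" + hostName + "/xxxxxx";
        // Util.rebind() creates any missing intermediate subcontexts
        // before binding the object under the leaf name.
        Util.rebind(jndiContext, key, boundObject);
    }

    protected void stopService() throws Exception {
        // Unbinding on shutdown is omitted for brevity.
    }
}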
No exceptions or errors are reported by this code when
binding objects to the HA-JNDI tree from both nodes, so
I surmised that the objects were successfully bound.
In my application, I then use the following code to list
the contents of the HA-JNDI tree (using the same
context as described above):
for (NamingEnumeration enumObj = jndiContext.list("jmx/device");
        enumObj.hasMore(); ) {
    NameClassPair ncHandle = (NameClassPair) enumObj.next();
    System.out.println("Found context: 'jmx/device/"
        + ncHandle.getName() + "'...");
}
Depending on which server the app is run from (winnode or hpnode),
the above code shows only the bindings for that server. For example,
if I run the app from winnode, the output is:
"Found context: 'jmx/device/winnode'..."
If run from hpnode, it shows:
"Found context: 'jmx/device/hpnode'..."
It should show the following, since both nodes are up
and running and objects have been registered on both
nodes using the HA-JNDI context:
"Found context: 'jmx/device/winnode'..."
"Found context: 'jmx/device/hpnode'..."
So, I'm a bit confused as to why the objects bound into
HA-JNDI by one server are not being seen by the other
server when the HA-JNDI tree is queried.
Well, I found part of the problem. It appears that you must be careful
with JNDI context names. The local JNDI tree already contains a
top-level context named jmx, which confuses the HA-JNDI logic: it
first scans HA-JNDI for the context and, when it does not find it
there, falls back to lookupLocally, which resolves the existing local
jmx context. From that point on, it binds the remaining subcontexts
into the local JNDI tree instead of the HA-JNDI tree.
When I changed the naming of the HA-JNDI context so
that it uses a unique top-level context, HA-JNDI appears
to create the appropriate subcontexts in the HA-JNDI
tree, as well as bind the object to the lowest-level
subcontext. However, the list function still only shows
the objects that are bound from the local server.
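For illustration, a sketch of that workaround, binding under a top-level name that does not already exist in the local JNDI tree ("hadevices" is only an example, not the name I actually used):

import java.net.InetAddress;
import javax.naming.Context;
import org.jboss.naming.Util;

// Workaround sketch: bind under a top-level context that does not clash with
// the local JNDI tree, so the HA-JNDI logic never falls back to the local
// "jmx" context when creating intermediate subcontexts.
public class UniqueTopLevelBinder {
    public static void bind(Context haJndiContext, Object obj) throws Exception {
        String hostName = InetAddress.getLocalHost().getHostName();
        // "hadevices" is an example; any prefix not present in local JNDI works.
        String key = "hadevices/" + hostName + "/xxxxxx";
        Util.rebind(haJndiContext, key, obj);
    }
}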
I turned up the logging for the cluster components and
observed that the correct calls (HAJNDI._rebind()) were
being propagated across the cluster to the various HA-
JNDI instances in order to replicate the bindings, but as I
noted, the bindings do not seem to be replicated at all.
I further tried to reduce the number of variables by
clustering two Windows machines that are on the same
subnet (to eliminate any potential multicast propagation
and firewall issues), but received the exact same result.