Java Messaging Services Clustering Part 2

JMS clustered architectures, collocation, and distributed load balancing

To avoid this scenario and create a more efficient architecture, we deploy a new connection factory (CF2) with Server Affinity enabled and target it to the server instances hosting member destinations for DQ, namely WebLogic Servers 1 and 3. Additionally, to prevent external JMS clients from connecting to CF1, we advertise CF2 as the only connection factory accessible to external JMS clients. Thus, when an external message producer connects through CF2 to access DQ, a queue member destination is always available as a collocated resource. Although this approach routes all message traffic from a message producer to a local queue destination, keep in mind that the physical server instance hosting that destination can become overutilized if one JMS client produces messages heavily while the other producer clients are idle.

Also note that external JMS clients may conveniently use the cluster address and the JNDI name of CF2 to locate the connection factory, regardless of whether the connection factory is targeted to the cluster or to individual server instances.
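
As a rough sketch of that lookup, an external producer might obtain CF2 through the cluster address as shown below. The provider URL, JNDI names, and message content are placeholders for illustration, not values prescribed by this architecture.

    import java.util.Hashtable;
    import javax.jms.*;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class ExternalProducer {
        public static void main(String[] args) throws Exception {
            // Use the cluster address; WebLogic resolves it to a live server instance.
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://mycluster:7001"); // placeholder cluster address

            Context ctx = new InitialContext(env);

            // CF2 is found by its JNDI name regardless of whether it is
            // targeted to the cluster or to individual server instances.
            QueueConnectionFactory cf2 =
                    (QueueConnectionFactory) ctx.lookup("jms/CF2"); // placeholder JNDI name
            Queue dq = (Queue) ctx.lookup("jms/DQ");                // placeholder JNDI name

            QueueConnection conn = cf2.createQueueConnection();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(dq);
            sender.send(session.createTextMessage("hello"));
            conn.close();
            ctx.close();
        }
    }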

Distributed Load Balancing
The JMS cluster in Figure 3 illustrates how to achieve the full capabilities of distributed load balancing while providing a level of high availability. The connection factories CF1 and CF2 are both targeted to the cluster, but in contrast to the previous architectures, Server Affinity is disabled. In this case, the load balancer for the distributed destination DQ will first favor a queue member with an active consumer and a persistent store for any messages sent from the message producer clients, even if that destination is located on a remote server. In Figure 3, for example, MP1 connects to CF1 on WebLogic Server 1; CF1 then creates a JMS connection to the remote member destination MQ2, which receives the message. Since Server Affinity is disabled, DQ will always apply its load balancing algorithm to distribute each message sent from a message producer among its destination members, so the next message sent from MP1 will most likely be directed to a different member destination of DQ.
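
To make the per-message behavior concrete, here is a minimal sketch, assuming the JNDI context was created with the cluster address as in the earlier example, and that jms/CF1 and jms/DQ are placeholder JNDI names for the connection factory and distributed queue:

    import javax.jms.*;
    import javax.naming.Context;

    public class LoadBalancedProducer {
        // Assumes ctx was created against the cluster address, as in the earlier sketch.
        static void sendBatch(Context ctx) throws Exception {
            QueueConnectionFactory cf1 = (QueueConnectionFactory) ctx.lookup("jms/CF1");
            Queue dq = (Queue) ctx.lookup("jms/DQ");
            QueueConnection conn = cf1.createQueueConnection();
            try {
                QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(dq);
                for (int i = 0; i < 10; i++) {
                    // With Server Affinity disabled, each send is a fresh
                    // load-balancing decision; consecutive messages may land
                    // on different member destinations of DQ.
                    sender.send(session.createTextMessage("msg-" + i));
                }
            } finally {
                conn.close();
            }
        }
    }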

For a newly created message-driven bean, the connection factory CF1 will first favor a queue member destination of DQ without any current consumers. This behavior helps ensure that each member destination has at least one message consumer ready to process a new message on the queue, and this is clear in the diagram: each physical queue member is pinned with a message-driven bean listener. Again, since Server Affinity is disabled on CF1, a message consumer might be pinned to a member queue over a remote JMS connection, as is the case with MDB1 and MDB2.
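
A minimal MDB sketch follows. The class itself is ordinary; the binding to the distributed destination happens in the deployment descriptors, where weblogic-ejb-jar.xml would point the bean at DQ's JNDI name (jms/DQ below is a placeholder). Once the bean is deployed to the cluster, WebLogic pins a listener to each member queue as described above.

    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // The destination-jndi-name element in weblogic-ejb-jar.xml would point
    // at the distributed queue (e.g., jms/DQ, a placeholder); WebLogic then
    // ensures a listener is attached to each member destination.
    public class OrderMDB implements MessageDrivenBean, MessageListener {
        private MessageDrivenContext ctx;

        public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
        public void ejbCreate() {}
        public void ejbRemove() {}

        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    System.out.println("Consumed: " + ((TextMessage) message).getText());
                }
            } catch (Exception e) {
                // In production, log the failure and let the container handle redelivery.
                e.printStackTrace();
            }
        }
    }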

Finally, in this architecture we used a JDBC store as the persistence mechanism for all JMS servers. A JDBC store has an advantage over a file store in that it facilitates recovery in a failure scenario. If, for example, a WebLogic server instance hosting a JMS server fails, the JMS server and all its destinations can easily be migrated to another WebLogic instance, provided the JDBC store's connection pool is targeted to the new server instance receiving the migrated JMS server. Migrating a JMS server that uses a file store, however, requires the new server instance to have access to the file store, which may be achieved with a shared disk; otherwise, you will need to copy the file store data files and transaction logs so WebLogic can recover any persisted messages. Note that a JDBC store may also represent a single point of failure if the database fails. For this reason it is ideal to use a fault-tolerant database or, at a minimum, enable the Test Connections on Reserve attribute on the JDBC store's connection pool so the pool reconnects to the database once it becomes available again, without requiring a restart of the WebLogic server instance.

JMS Clustering Best Practices
Here is a list of best practices to follow when designing an application that utilizes JMS clustering.

  • Collocate to reduce network overhead. Collocate JMS clients with JMS resources for better performance while reducing remote JMS connections and network traffic.
  • Use distributed load balancing by disabling Server Affinity. Disable Server Affinity on JMS connection factories to distribute message traffic to all distributed destination members.
  • Use file stores for increased performance. Use a file store to increase the performance of persistent messages.
  • Use JDBC stores for reliability. Use a JDBC store to easily migrate JMS resources from a failed WebLogic server.
  • Use message-driven beans for reliable message consumption. Message-driven beans are ideal for reliable, asynchronous message consumption. If you deploy an MDB listening on a distributed destination, WebLogic ensures there are consumers listening on each destination member. This is an ideal approach for reliability and performance, coupled with the load balancing and failover features provided by WebLogic.
  • Configure distributed topics for nondurable subscribers. For nondurable topic subscribers, ensure that all JMS servers hosting topic members for a distributed topic are configured without a JMS store; otherwise, WebLogic will treat all consumers as durable subscriptions, which can seriously degrade performance. If you cannot remove the store from a JMS server because it is used by another destination, an alternative is to set the StoreEnabled parameter to False for each individual topic member destination.
Conclusion
This article covered JMS cluster architectures and illustrated the pros and cons of collocation and distributed load balancing. Collocation reduces the need for remote connections and improves the performance of processing JMS message traffic, but may lead to uneven utilization of processing capacity among physical servers. Distributed load balancing promotes equal distribution of message processing among physical JMS destinations in a cluster, but also increases network overhead. However, just as with everything else in life, from friends and relationships to jobs and pets, distributed JMS systems are all about give and take, so the decision you make should be based on the requirements and context of the application you are designing.

That's it for this article! Please feel free to e-mail me with your comments and questions.

More Stories By Raffi Basmajian

Raffi Basmajian works as an application architect for a major financial services firm in New York City. He specializes in architecting Java/J2EE distributed systems and integration frameworks. He can be reached at [email protected]
