Raffi Basmajian

J2EE Journal: Article

JMS Clustering Part I

Queues, topics, and connection factories

This article is the first of a two-part series on JMS clustering. Part 1 will discuss the fundamental aspects of clustering JMS resources such as queues, topics, and connection factories, and walk through the steps required to configure a clustered destination in a WebLogic cluster. Part 2 will discuss JMS clustering in the context of several design and configuration strategies that demonstrate how to create efficient and optimized JMS architectures.

WebLogic v7.0 introduced JMS clustering. This article will discuss the fundamental aspects of JMS clustering in WebLogic 8.1 (SP3).

Clustering JMS Resources

Scalability, high availability, and fault tolerance are required of mission-critical systems. The standards codified in J2EE provide the framework needed to build a robust architecture that meets those requirements. JMS provides enterprise systems with a foundation for messaging. JMS systems that demand near-zero downtime, such as those in financial houses, must provide high availability and failover for JMS resources to meet stringent service-level agreements. So it's vital to use a JMS implementation that lets JMS resources, such as queues, topics, and connection factories, be clustered and highly available while load-balancing message traffic among distributed JMS destinations.

WebLogic's JMS is a complete messaging implementation that supports the JMS standard. It also provides services so that clustered JMS configurations can load-balance messages and offer high availability for all kinds of JMS resources. In a WebLogic cluster, ordinary queues and topics can be configured as distributed destinations that consist of many physical destinations dispersed among physical machines, creating a highly available messaging layer. Connection factories can also be distributed, which lets WebLogic load-balance connection requests from JMS clients, transparently direct message traffic to the desired destination, and redirect messages in the event of destination failure.

Distributed Destinations

A distributed destination represents a group of physical queues, or topics, whose members are hosted by JMS servers in a WebLogic cluster. The distributed destination is bound to a logical JNDI name in the cluster-wide JNDI tree. WebLogic will load-balance messages among the members of a distributed destination and, if a destination member fails, messages will be transparently redirected to other members in the distributed destination. From the point of view of a JMS client, a distributed destination appears as an ordinary destination, which means that an existing JMS application configured with ordinary destinations can change its configuration to distributed destinations without impacting JMS clients or application code.
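Because the client looks up the distributed destination by its logical JNDI name, the code is identical to the ordinary-destination case. A minimal client-side sketch (the factory and queue JNDI names are placeholders, and a running WebLogic cluster is assumed, so this is illustrative rather than runnable as-is):

```java
// Look up the connection factory and distributed queue by their
// cluster-wide JNDI names -- identical to the ordinary-destination case.
Context ctx = new InitialContext();
QueueConnectionFactory qcf =
    (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
Queue dq = (Queue) ctx.lookup("jms/MyDistributedQueue");

QueueConnection qc = qcf.createQueueConnection();
QueueSession qs = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
QueueSender sender = qs.createSender(dq); // WebLogic picks the physical member
sender.send(qs.createTextMessage("hello"));
```

Switching an application from an ordinary queue to a distributed one amounts to rebinding the JNDI name; no client code changes.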

Distributed Queues
A distributed queue represents a group of physical queues. If a QueueSender is created using the JNDI name of a distributed queue, any message sent from that QueueSender is delivered to only one physical queue destination. Every time a message is sent, WebLogic decides which member will receive it, based on a configurable load-balancing algorithm. The default load-balancing scheme is Round-Robin, but you can use a Random scheme to distribute messages to destination members at random.
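To make the per-message decision concrete, here is a minimal sketch of the two schemes in plain Java (the class and member names are hypothetical; the real selection happens inside WebLogic, not in client code):

```java
import java.util.List;
import java.util.Random;

// Sketch of the two load-balancing schemes applied per message:
// Round-Robin walks the members in order; Random picks one at random.
class DistributedQueueBalancer {
    private final List<String> members;
    private final Random random = new Random();
    private int next = 0;

    DistributedQueueBalancer(List<String> members) {
        this.members = members;
    }

    // Round-Robin (the default): members are chosen in an ordered sequence.
    String roundRobin() {
        String member = members.get(next);
        next = (next + 1) % members.size();
        return member;
    }

    // Random: each message goes to a member chosen at random.
    String randomPick() {
        return members.get(random.nextInt(members.size()));
    }
}
```

With members MQ1 and MQ2, Round-Robin delivers the first message to MQ1, the second to MQ2, the third to MQ1 again, matching the behavior described for Figure 1.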

When a QueueReceiver is created using a distributed queue name, it will connect to a physical queue member and, unlike a QueueSender, remain pinned to that destination until the QueueReceiver loses connection.

Figure 1 illustrates how JMS clients interact with a distributed queue in a WebLogic cluster. In the diagram, the distributed queue (DQ) has two physical queue members MQ1 and MQ2 residing on server instances 1 and 2, respectively. The queue receiver (QR) gets messages from MQ1 and will remain pinned to MQ1 until the connection is closed. The queue sender (QS) sends messages to the distributed queue DQ. Only one physical queue member gets the messages sent to a distributed queue and, as the illustration shows, the first message, Message1, sent by QS is load-balanced and sent to MQ1. The next message, Message2, is also load-balanced and is sent to the next queue member in the sequence, MQ2. The illustration uses a Round-Robin load-balancing policy, which picks server instances in an ordered sequence.

When a message is sent to a distributed queue member with no consumers, the message will remain in that queue until a consumer is available, and, if the queue member fails, all messages pending in that queue will be unavailable until the queue member is available again. Configuring a Forward Delay on each member in the distributed queue can prevent this situation. This option automatically forwards messages from a queue member with no active consumers to queue members with active consumers. By default, the Forward Delay is disabled. We'll discuss how to configure this destination parameter below.

If a physical queue member fails, all consumers connected to it are notified by a JMSException and effectively lose their connection to the queue. For synchronous consumers, the exception is returned directly to the client. For asynchronous consumers whose queue session is configured with an ExceptionListener, a ConsumerClosedException is sent. In either case - assuming the failure was isolated to the physical queue member - the JMS connection and session remain valid. The client application just closes the queue receiver and recreates it using the same JMS session.
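A sketch of that recovery pattern, assuming fields receiver, session, queue, and listener are set up elsewhere (not runnable standalone, since it needs javax.jms and a live cluster; note also that the session-level ExceptionListener mentioned above is a WebLogic extension, while the standard JMS listener shown here is set on the connection):

```java
// Hypothetical recovery: the JMS connection and session survive the
// member failure, so only the queue receiver is closed and recreated.
connection.setExceptionListener(new ExceptionListener() {
    public void onException(JMSException e) {
        try {
            receiver.close();                         // drop the dead receiver
            receiver = session.createReceiver(queue); // re-pin to a live member
            receiver.setMessageListener(listener);
        } catch (JMSException retry) {
            // member still down: back off and try again later
        }
    }
});
```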

Distributed Topics
A distributed topic represents a group of physical topics. JMS client applications can create topic message producers and consumers using a distributed topic name. Your application doesn't know how many physical destination members the distributed topic has, which eliminates any special programming considerations when using this kind of clustered JMS resource.

When a TopicPublisher created with a distributed topic name publishes a message, the message is automatically sent to all members of the distributed topic so all subscribers get the message. Messages are forwarded to all topic member destinations even if a TopicPublisher is created using the JNDI name of a physical topic destination and not the distributed topic JNDI name.

Non-persistent messages published to a distributed topic are sent only to the topic destinations that are available. If your application uses persistent messages, then it's a good idea to configure each topic member destination with a persistent store. WebLogic gives preference to topic destinations configured with persistent stores when publishing a persistent message on a distributed topic to avoid lost messages. If any topic members are unavailable when a persistent message is published, the message will be stored on the topic members with persistent stores and forwarded to the remaining topic members as they become available.

A TopicSubscriber created with a distributed topic name is pinned to a physical topic member of the distributed topic. The connection is maintained until the destination topic member fails. If and when that happens, the topic subscriber is sent a JMSException, the same as when a destination queue member fails. If a topic destination member with active subscribers fails, then any persistent messages published to the distributed topic after the failure won't be delivered to the disconnected subscribers until the topic destination member can get messages again. In contrast, a subscriber that loses its connection to a distributed topic member will never receive the non-persistent messages published while it was disconnected.

Figure 2 illustrates how JMS clients interact with a distributed topic in a WebLogic cluster. The distributed topic (DT) in the diagram consists of two physical topic members T1 and T2 residing on server instances 1 and 2, respectively. A topic subscriber (TS) gets messages from T1 and remains pinned to this topic member until the connection is closed or lost. A topic publisher (TP) sends messages to the distributed topic (DT). All physical topic members get the messages sent to the distributed topic. As the illustration shows, TP publishes Message1 to the distributed topic DT, which sends it to topic members T1 and T2. The next message Message2 is also sent to both topic members.

Clustering Connection Factories
Connection factories can also leverage the capabilities of JMS clustering to provide load balancing, high availability, and failover for JMS clients.

A connection factory can be targeted to a cluster, or targeted to individual WebLogic instances in a cluster, whether or not those WebLogic instances host a JMS server. From an architectural perspective, you need to consider the physical location of the JMS clients in your application and the destinations they will access before you decide where to target your connection factories. For example, if an external client uses a connection factory targeted to multiple WebLogic instances in a cluster, the decision of which connection factory to use as well as which JMS server will host the JMS connection is load-balanced.

This connection routing can be inefficient: the new JMS connection may be hosted on a different server instance than the JMS server that holds the destination, even when a collocated JMS server was available on the same physical machine. To avoid unnecessary connection routing, make sure the Server Affinity parameter for each connection factory in your JMS application is enabled. You can change this setting from the WebLogic Administration console by navigating to the "Services/JMS/Connection Factories" node. Select the connection factory from the list, and scroll to the bottom of the General tab to find the Server Affinity parameter.
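The equivalent setting can also appear in the domain's config.xml; a hypothetical fragment (the factory name, JNDI name, and cluster target are placeholders, and the attribute names follow the WebLogic 8.1 MBean conventions, so verify against your release):

```xml
<!-- Clustered connection factory with server affinity enabled -->
<JMSConnectionFactory
    Name="MyConnectionFactory"
    JNDIName="jms/MyConnectionFactory"
    ServerAffinityEnabled="true"
    Targets="MyCluster"/>
```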

In contrast, an internal JMS client trying to locate a connection factory and create a JMS connection won't incur a wasteful remote connection if the connection factory is physically located on the same WebLogic instance as the JMS server that hosts the desired destination. The default load-balancing policy used by the connection factory for internal JMS clients is circumvented because WebLogic gives preference to collocated connection factories and JMS servers to avoid creating remote connections.

The details of configuring and targeting connection factories in the context of different JMS application scenarios will be discussed in part two of this article.

Configuring a Distributed Destination

To configure a distributed queue or topic, open the WebLogic Administration console and navigate to the "Services/JMS/Distributed Destinations" node (Figure 3).

Now you can create a Distributed Topic or a Distributed Queue. In this example we'll create a distributed queue. To continue, click Configure a new Distributed Queue. Figure 4 displays the configuration parameters for creating a distributed queue.
Name: The unique logical identifier for the distributed queue destination.
JNDI Name: The lookup alias for cluster-wide JNDI.
Load Balancing Policy: The algorithm used to determine the distribution of messages among the distributed destination members.
Forward Delay: Only for distributed queues. It defines the amount of time a distributed queue member with no consumers will wait before forwarding its pending messages to other queue members with active consumers.
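For reference, the same settings show up in the domain's config.xml; a hypothetical fragment (element and attribute names follow WebLogic 8.1 conventions, the member queues MQ1 and MQ2 are assumed to exist, and ForwardDelay is in seconds):

```xml
<!-- Distributed queue with two members, Round-Robin balancing,
     and a 10-second Forward Delay for memberless-consumer forwarding -->
<JMSDistributedQueue
    Name="MyDistributedQueue"
    JNDIName="jms/MyDistributedQueue"
    LoadBalancingPolicy="Round-Robin"
    ForwardDelay="10">
  <JMSDistributedQueueMember Name="Member1" JMSQueue="MQ1" Weight="1"/>
  <JMSDistributedQueueMember Name="Member2" JMSQueue="MQ2" Weight="1"/>
</JMSDistributedQueue>
```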

Once you create the distributed queue, click the Thresholds and Quotas tab. The parameters on this screen are well documented, so we won't detail each one here, but in general they let you configure maximum message quotas, message thresholds, maximum message size, and bytes message paging. These settings apply globally to all physical members belonging to the distributed destination.

At this point, you're ready to add physical queue members to the distributed queue. If your WebLogic domain has queue destinations that you want to designate as members of your newly created distributed queue, click the Members tab (Figure 5), then click Configure a new Distributed Queue Member.

On the next screen, enter the configuration parameters of the physical queue member (Figure 6).

Name: The unique logical identifier for the member destination.
JMS Queue/Topic: The list of available queues from which you select the physical destination of new members of the distributed queue.
Weight: An integer that controls message load balancing for the Round-Robin algorithm. Weights are relative to the other queue member destinations in the same distributed queue; members with a higher weight receive proportionally more messages than those with a lower one.
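One simple way to picture the Weight parameter: a member with weight N effectively occupies N consecutive slots in the Round-Robin rotation (a hypothetical illustration; WebLogic's internal algorithm isn't spelled out here):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: weight N gives a member N consecutive turns in the rotation,
// so higher-weighted members receive proportionally more messages.
class WeightedRoundRobin {
    private final List<String> rotation = new ArrayList<String>();
    private int next = 0;

    void addMember(String name, int weight) {
        for (int i = 0; i < weight; i++) {
            rotation.add(name);
        }
    }

    String nextMember() {
        String member = rotation.get(next);
        next = (next + 1) % rotation.size();
        return member;
    }
}
```

With MQ1 at weight 2 and MQ2 at weight 1, two of every three messages land on MQ1.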

Click the Create button to create the new distributed queue member. You can repeat this procedure to add other members to the distributed queue.

WebLogic provides a convenient feature to create physical queues or topics automatically and assign them to a distributed destination.

To use this feature, click the Auto Deploy tab (Figure 7) and then click Create members on the selected Servers (and JMS Servers).

On the next screen you have the option of choosing the WebLogic instances directly, or the WebLogic cluster, to serve as the targets for the distributed destination.

If you want to target a WebLogic instance directly:
Select (none), then click Next (Figure 8)
Select the WebLogic instance on which to create destination members, then click Next (Figure 9)
Select the JMS servers on which to deploy the destination members, then click Next (Figure 10)

If you want to target a WebLogic cluster:
Select the cluster name, then click Next (Figure 8)
Select the WebLogic instances on which to create destination members, then click Next (Figure 11)
Select the JMS servers on which to deploy the destination members, then click Next (Figure 12)

After applying the configuration changes, WebLogic automatically creates member destinations on the JMS servers selected. Click the Members tab to see the list of newly generated member destinations (Figure 13).


At this point you should understand the basic features of JMS clustering and the kinds of distributed resources it provides. The next article will discuss JMS application design strategies for internal producers/consumers, external-only producers, highly distributed and load-balanced JMS applications, and the best practices and guidelines to follow when using JMS clustering.

More Stories By Raffi Basmajian

Raffi Basmajian works as an application architect for a major financial services firm in New York City. He specializes in architecting Java/J2EE distributed systems and integration frameworks. He can be reached at [email protected]
