It is very easy to create JMS consuming services in the Oracle Service Bus (previously known as BEA AquaLogic Service Bus or ALSB), but one of the things you may want to control is the number of connections used to poll a JMS server. This blog describes the background of JMS listeners in OSB and how to solve problems with the JMS server being overloaded with connections. In this particular case, the JMS server is actually an IBM WebSphere MQ server, but most of the principles also apply to other JMS implementations.

The background

If you create a proxy service that listens to a certain queue, then under the hood you will find that OSB creates Message Driven Beans (MDBs). You can see this yourself: if you go into the WebLogic console and check under Deployments, you will see that an EAR file was deployed with a name that starts with “ProxyService_” and that also contains the project name (folder name in SBconsole) and your service name. This EAR file contains the MDB that is triggered by messages put on the queue it is configured to listen to.

This is all great out-of-the-box functionality, and with very little effort you create your services. However, tweaking the listeners is not so easy (unless of course you know how, and I am about to tell you 🙂 )

In most production environments, OSB runs in a cluster of at least 2 managed servers. And per managed server, there will be a default of 16 JMS connections per polling MDB. So for 1 proxy service x 2 servers x 16 connections, you automatically get 32 connections on the same MQ queuemanager. An MQ queuemanager is by default configured to allow a maximum of 100 simultaneous open connections; not much if you ask me, but that is what IBM thought was a good enough choice. So if you had 3 proxy services all listening to different queues on the same MQ queuemanager, that would mean 3 proxy services x 2 servers x 16 connections = 96 connections. Add a few connections for other processes that connect to the same queuemanager (for instance a business service that puts messages on a queue of the same queuemanager) and you will reach 100 very easily.
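The arithmetic above can be sketched as a small back-of-the-envelope check (the numbers, 16 beans per pool and 100 channels, are the defaults discussed in the text; your environment may differ):

```python
# Estimate how many MQ connections the polling MDBs will open,
# using the default pool size of 16 beans per managed server.

MQ_DEFAULT_MAX_CHANNELS = 100  # default MaxChannels of a queuemanager

def listener_connections(proxy_services, managed_servers, beans_per_pool=16):
    """Connections opened by JMS-polling proxy services alone."""
    return proxy_services * managed_servers * beans_per_pool

# 1 proxy service on a 2-node cluster:
print(listener_connections(1, 2))  # 32

# 3 proxy services on the same queuemanager:
total = listener_connections(3, 2)
print(total)  # 96, dangerously close to the default limit
print(MQ_DEFAULT_MAX_CHANNELS - total)  # only 4 channels left for producers
```

As the last line shows, a handful of producing business services is enough to push a default queuemanager over its channel limit.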

The effect of this is that MQ will stop accepting connections (and appears to hang) and in its log files you will find “AMQ9513: Maximum number of channels reached”.

The solutions

There are two possible solutions to this problem: you can fix it either on the MQ side or on the OSB side.

Solve it at MQ side:

You could increase the MaxChannels and MaxActiveChannels settings in the qm.ini file of MQ. Do the math as explained above, add some connections for message producers (like business services; OSB likes to open at least 5 connections for them as well) and any other processes or users that are allowed to open connections to the queuemanager.
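As an illustration, the relevant stanza in qm.ini could look like the sketch below. The value 300 is purely hypothetical; size it from the math above, and check the IBM MQ documentation for your version before changing it:

```ini
CHANNELS:
   MaxChannels=300
   MaxActiveChannels=300
```

Remember that the queuemanager needs a restart before changes to qm.ini take effect.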

Solve it at the OSB side:

To solve it from the service bus, you should restrict the size of your EJB pool within Weblogic. Because an MDB is a special type of EJB, you EAR file containing the MDB gets deployed once but the MDB within it is spawned in the EJB pool 16 times. Normally, one can use max-beans-in-free-pool and initial-beans-in-free-pool in the weblogic.xml but since OSB automatically generates and deploys the EAR files for this flow this cannot be done without some dirty hacking (and we will not go there).
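For reference only (since hacking the generated EAR is exactly the kind of thing we want to avoid), the pool limits would normally be declared in weblogic-ejb-jar.xml along these lines; the bean name and values here are hypothetical:

```xml
<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <!-- hypothetical name of the generated MDB -->
    <ejb-name>RequestEJB</ejb-name>
    <message-driven-descriptor>
      <pool>
        <initial-beans-in-free-pool>1</initial-beans-in-free-pool>
        <max-beans-in-free-pool>8</max-beans-in-free-pool>
      </pool>
    </message-driven-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```

Because OSB regenerates this descriptor every time it redeploys the proxy, any manual edit would be overwritten, which is why the work manager approach below is the sustainable one.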

The real solution is to create a work manager with a max threads constraint and assign the proxy service's dispatch policy to this work manager.

Go to the WebLogic console (/console) of your OSB installation and, under Environment > Work Managers, click the New button and create a Maximum Threads Constraint.

OSB max threads constraints

(WebLogic console, creating max threads constraints)

Give it a name and, in the Count field, enter the maximum number of threads (and thus connections) that a single proxy service / MDB may occupy. Target it to the whole cluster and, when you finish, create yet another item in the Work Managers screen. This time choose a Work Manager, also give it a name and assign it to the cluster.

Then, in the Work Managers screen, click the name of your newly created Work Manager and assign the Maximum Threads Constraint that you created to it. Your Work Managers screen will now have at least two new items (see screenshot). Click Save and activate your changes.

OSB workmanager overview

(WebLogic console showing the workmanager overview)
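If you prefer to script these console steps, the same configuration can be sketched in WLST. This is a rough sketch only: the domain name, cluster name, thread count and MBean method names are assumptions here, so verify them against the WLST reference for your WebLogic version before using it:

```python
# WLST sketch: create a max threads constraint plus work manager
# and target both to the OSB cluster (names are hypothetical).
edit()
startEdit()

cd('/SelfTuning/osb_domain')               # assumed domain name
cluster = getMBean('/Clusters/osb_cluster')  # assumed cluster name

mtc = cmo.createMaxThreadsConstraint('OsbJmsMaxThreads')
mtc.setCount(4)                            # max connections per proxy
mtc.addTarget(cluster)

wm = cmo.createWorkManager('OsbJmsWorkManager')
wm.addTarget(cluster)
wm.setMaxThreadsConstraint(mtc)

save()
activate()
```

The advantage of scripting this is that the same work manager setup can be replayed on every environment (development, test, production) without clicking through the console.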

After this, go into the service bus console (/sbconsole), find your proxy service and edit it. On the JMS Transport Configuration page you will find a dropdown called Dispatch Policy, which is by default set to… well… default 🙂

Change this and select the Work Manager you just created in the WebLogic console. Save your service and you're done.

OSB console dispatch policy

(OSB console, setting the dispatch policy)

Conclusion and remarks:

The two solutions do not have exactly the same result: either you tell MQ to accept more connections, or you constrain OSB so that it does not create so many. Which solution you choose depends on many different things, like the required message throughput and your network policies, so I cannot give a short answer to that.

One final remark on connections: since these MDBs keep their connections with the JMS server open constantly, be careful how you stop your OSB domain. Especially in development it might be tempting to do a forced shutdown to save some time. However, I have seen multiple times that this leads to connections not being closed properly on the MQ side. So as a best practice: just use the normal shutdown procedure and, while waiting, read our blog 🙂