Restricting the number of JMS / MQ connections made by the OSB

It is very easy to create JMS-consuming services in the Oracle Service Bus (previously known as BEA AquaLogic Service Bus, or ALSB), but one of the things you may want to control is the number of connections that are used to poll a JMS server. This blog describes the background to JMS listeners in OSB and how to solve problems with the JMS server being overloaded with connections. In this particular case, the JMS server is actually an IBM WebSphere MQ server, but most of the principles also apply to other JMS implementations.

The background

If you create a proxy service that listens to a certain queue, then under the hood you will find that OSB creates Message Driven Beans (MDBs). You can see this for yourself: if you go into the WebLogic console and check under Deployments, you will see that an EAR file was deployed with a name that starts with “ProxyService_” and that also contains the project name (folder name in sbconsole) and your service name. This EAR file contains the MDB that is triggered by messages put on the queue it is configured to listen to.

This is all great out-of-the-box functionality, and with very little effort you can create your services. However, tweaking the listeners is not so easy (unless of course you know how, and I am about to tell you 🙂 )

In most production environments, OSB runs in a cluster of at least 2 managed servers. And per managed server, there will be a default of 16 JMS connections per polling MDB. So for 1 proxy service x 2 servers x 16 connections, you automatically get 32 connections on the same MQ queue manager. An MQ queue manager is by default configured to allow a maximum of 100 simultaneous open channels; not much if you ask me, but that is what IBM considered a good enough default. So if you had 3 proxy services, all listening to different queues on the same MQ queue manager, that would mean 3 proxy services x 2 servers x 16 connections = 96 connections. Add a few connections for other processes that connect to the same queue manager (for instance a business service that puts messages on a queue of that queue manager) and you will reach 100 very easily.
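The back-of-the-envelope math above can be sketched as follows; the numbers mirror this article's example and should be adjusted to your own topology:

```python
# Estimate how many MQ connections a set of OSB proxy services will open.
# The values below are the example figures from the text, not fixed constants.
proxy_services = 3        # JMS proxy services polling queues on one queue manager
managed_servers = 2       # nodes in the OSB cluster
connections_per_mdb = 16  # default number of listener connections per polling MDB

listener_connections = proxy_services * managed_servers * connections_per_mdb
print(listener_connections)  # 96, uncomfortably close to MQ's default limit of 100
```

With the default MaxChannels of 100, this leaves only 4 channels for everything else that talks to the queue manager.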

The effect of this is that MQ will stop accepting connections (and appear to hang), and in its log files you will find “AMQ9513: Maximum number of channels reached”.

The solutions

There are two possible solutions to this problem: you can configure either MQ or OSB to fix it.

Solve it at MQ side:

You could increase the MaxChannels and MaxActiveChannels settings in the qm.ini file of MQ. Do the math as explained above, then add some connections for message producers (like business services; OSB likes to open at least 5 connections for them as well) and for any other processes or users that are allowed to open connections to the queue manager.
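These limits live in the CHANNELS stanza of the queue manager's qm.ini. A sketch of what that could look like; the value 200 is purely illustrative and should come out of the math above:

```ini
; qm.ini of the queue manager (restart the queue manager after editing)
CHANNELS:
   MaxChannels=200
   MaxActiveChannels=200
```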

Solve it at the OSB side:

To solve it from the service bus, you should restrict the size of the EJB pool within WebLogic. Because an MDB is a special type of EJB, the EAR file containing the MDB gets deployed once, but the MDB within it is spawned in the EJB pool 16 times. Normally, one would use max-beans-in-free-pool and initial-beans-in-free-pool in the weblogic-ejb-jar.xml deployment descriptor, but since OSB automatically generates and deploys these EAR files, that cannot be done without some dirty hacking (and we will not go there).
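For reference, this is roughly what that descriptor tuning would look like if you did control the deployment yourself; the ejb-name here is hypothetical, since OSB generates its own names:

```xml
<!-- weblogic-ejb-jar.xml (not practical here: OSB regenerates the EAR) -->
<weblogic-ejb-jar xmlns="http://www.bea.com/ns/weblogic/90">
  <weblogic-enterprise-bean>
    <!-- hypothetical name; OSB generates the real one -->
    <ejb-name>ProxyServiceMDB</ejb-name>
    <pool>
      <initial-beans-in-free-pool>1</initial-beans-in-free-pool>
      <max-beans-in-free-pool>2</max-beans-in-free-pool>
    </pool>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```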

The real solution is to create a work manager with a maximum threads constraint and assign the proxy service's dispatch policy to this work manager.

Go to the WebLogic console (/console) of your OSB installation and, under Environment > Work Managers, click the New button and create a Maximum Threads Constraint.

OSB max threads constraints

(WebLogic console, creating max threads constraints)

Give it a name and, in the Count field, enter the maximum number of threads (and thus connections) that a single proxy service / MDB may occupy. Target it to the whole cluster and, when you finish, create yet another item in the Work Managers screen. This time choose a Work Manager, also give it a name and assign it to the cluster.

Then, in the Work Managers screen, click the name of your newly created Work Manager and assign the Maximum Threads Constraint that you created to it. Your Work Managers screen will now contain at least two new items (see screenshot). Click Save and activate your changes.

OSB workmanager overview

(WebLogic console showing the workmanager overview)

After this, go into the service bus console (/sbconsole), find your proxy service and edit it. On the JMS Transport Configuration page you will find a dropdown called Dispatch Policy, which is by default set to… well… default 🙂

Change this and select the Work Manager you just created in the WebLogic console. Save your service and you're done.

OSB console dispatch policy

(OSB console, setting the dispatch policy)

Conclusion and remarks

The two solutions do not have exactly the same result: either you tell MQ to accept more connections, or you constrain OSB to not create so many. Which solution you choose depends on many different things, such as the required message throughput and network policies, so I cannot give a short answer to that.

One final remark on connections: since these MDBs keep the connection with the JMS server open constantly, be careful how you stop your OSB domain. Especially in development it might be tempting to do a forced shutdown to save some time. However, I have seen multiple times that this leads to connections not being closed properly on the MQ side. So as a best practice: just use the normal shutdown procedure, and while waiting: read our blog 🙂

Comments (7)

  1. krishna - Reply

    September 24, 2010 at 8:24 pm

    Hi,

    I tried using this approach for setting a MaxThreadsConstraint for a proxy service in OSB 11g, but I don't think it's actually working. When I send concurrent requests at the same time, it simply passes everything through without any delay. I am not able to confirm that the Dispatch Policy / Max Threads Constraint is working. Could you please help? Thanks

  2. Tjeerd Kaastra - Reply

    September 27, 2010 at 9:58 am

    Hi Krishna,

    If I understand this correctly, you are trying to use the MaxThreadsConstraint to restrict the number of messages that may be handled simultaneously. But to achieve this, you should use Throttling. The MaxThreadsConstraint only limits the number of threads, which can be used for example to limit the number of listening threads on a queue. You can check the number of open connections from your OSB server to your JMS server to see that this setting works (e.g. by using netstat or lsof on Unix to see the established TCP connections).

    Throttling on the other hand limits the number of messages that can be processed by the business service. So if your business service cannot cope with too many concurrent messages, you can edit your business service and go to the Operational Settings tab. There you can set the Throttling State to enabled and enter the number of simultaneous messages in the field Maximum Concurrency.
    You could also set the max no. of messages to be held back in the Throttling Queue setting or the max wait time in Message Expiration but beware that these settings might remove messages if the load gets high!

    I hope that this explains things better. Regards, Tjeerd

  3. Ernie Mcginty - Reply

    January 31, 2011 at 7:48 pm

    Hi, I have a requirement to use an OSB MQ proxy service to pick messages off a queue one at a time, sequentially. How do these settings work in a clustered environment? There is nothing that indicates an active/passive configuration like the MQ JCA adapter has with its singleton property. How do I know that the 2 OSB servers aren't each taking requests off the queue concurrently?

  4. Tjeerd Kaastra - Reply

    February 3, 2011 at 10:27 am

    Hi Ernie,

    Well, actually the service bus by default uses simultaneous processing. These work managers work per node, so if your cluster uses two nodes, both of them use at least one thread to read the queue. And hence there is a fair chance that you will have two messages processed simultaneously.
    I guess I do not have an out-of-the-box solution to this. WebLogic JMS has the unit-of-work settings to achieve this, but I am not sure if you can use the same solution with MQ (over JMS).

    Best regards,

    Tjeerd

  5. Nikhil - Reply

    November 24, 2011 at 6:13 am

    Hi,

    We have 2 clustered WebLogic servers (3 managed servers in each cluster and 1 JMS server per managed server). The PS on the 1st cluster is reading from a JMS queue on the 2nd cluster. I noticed that the number of threads, i.e. consumers, seen on the JMS queue was 32 on each JMS server in the 2nd cluster (this should usually be 16). When I create a work manager in the 1st cluster and give a thread count of 2, the expected result is that each JMS server shows a consumer count of 2, i.e. 6 threads for 3 managed servers, but instead I see a consumer count of 4 on each server, i.e. 12 threads for 3 managed servers. Is this a known issue?

    Regards,
    Nikhil

  6. swapnil - Reply

    May 16, 2012 at 10:05 pm

    Hi,
    I am working with the DB adapter in a clustered environment. I have set a max threads constraint for my proxy and throttling for my business service = 1, but I still have the issue. How do I fix the concurrency problem in the case of a DB adapter based business service? Any help?

  7. Nick - Reply

    November 8, 2012 at 7:28 am

    How do I configure a time delay between two messages consumed by a proxy service?
