Middleware integration testing with JUnit, Maven and VMware, part 1 (of 3)

Vincent Partington

For Deployit, XebiaLabs' automated deployment product for Java EE applications, we are always building and modifying integrations with middleware systems such as IBM WebSphere, Oracle WebLogic and the JBoss application server. These integrations are small enough that they can be rearranged into many different deployment scenarios. A typical step, as we call these integrations, would be "Create WebSphere datasource" or "Restart WebLogic Server". So how do we test that code?

We've had some success using FitNesse and VMware to do integration tests on our deployment scenarios. But there were a few problems with this approach:

  • We could only test complete deployment scenarios in this way. If we wanted to test just a single step, we had to make a deployment scenario that used that step just to be able to test it.
  • Because FitNesse provides no feedback while a test is running, and because the steps, let alone the complete deployment scenarios, can take a while to execute, we had little insight into a test's progress.
  • While it is possible to debug a FitNesse Fixture using Eclipse, the process is not very convenient when debugging a technical component such as a step.
  • To verify that a deployment scenario had executed successfully, we often had to extend our FitNesse Fixture. And while debugging code under test in FitNesse is complicated enough, debugging a Fixture is even harder!

Clearly we needed a different approach if we wanted to develop new steps easily.

From FitNesse to JUnit

First of all we decided to ditch FitNesse as our testing framework. While it might be a very nice tool to do user acceptance testing and to allow end users to write (or at least understand) tests, the technical nature of our product already ensured that we would not have tests for non-technical end users. Coupled with the problems FitNesse was giving us, this was enough reason to switch to the ubiquitous JUnit. Clearly we are not writing unit tests, but the JUnit framework lends itself to any kind of automated code testing. To differentiate these integration tests from the regular JUnit tests we chose classnames ending in Itest. This made sure that a regular Maven build would not execute them; by default the Surefire plugin only executes tests whose classname ends in Test (with a capital T).

Basic test approach

The basic approach used in our Itests is as follows (a minimal JUnit sketch follows the list):

  • Assert that a Java EE configuration (for example a datasource or a deployed application) does not exist.
  • Execute the step that creates the Java EE configuration.
  • Assert that the Java EE configuration does exist.
  • Assert that the properties of the Java EE configuration (for example the datasource URL) have the expected values.
  • Execute the step that destroys the Java EE configuration.
  • Assert that the Java EE configuration no longer exists.
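To make this concrete, here is a minimal sketch of what such an Itest could look like. The class name, the expected property value and the abstract hooks are made up for this sketch; the real implementations would invoke the actual steps and the inspection scripts discussed below.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.Map;

import org.junit.Test;

public abstract class DataSourceItest {

    // Product-specific hooks. These method names are made up for this
    // sketch; the real implementations would invoke the create/destroy
    // steps and run the inspection scripts shown below.
    protected abstract void executeCreateStep();

    protected abstract void executeDestroyStep();

    protected abstract boolean configurationExists();

    protected abstract Map<String, String> configurationProperties();

    @Test
    public void createAndDestroyDataSource() {
        // 1. The configuration must not exist before we start.
        assertFalse(configurationExists());

        // 2. Execute the step that creates the configuration.
        executeCreateStep();

        // 3. The configuration must now exist...
        assertTrue(configurationExists());

        // 4. ...and have the expected properties.
        assertEquals("jdbc/MyDataSource", configurationProperties().get("jndiName"));

        // 5. Execute the step that destroys the configuration.
        executeDestroyStep();

        // 6. The configuration must be gone again.
        assertFalse(configurationExists());
    }
}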

One could argue that this test actually tests two pieces of code: the create-step and the destroy-step. But a test for the correct destruction of the resource needs to set it up first anyway. And a test that creates a resource needs to clean up after itself to allow the next test to run correctly. This does mean that tests are dependent upon the previous test cleaning up correctly, but I will show in a later blog how you can use VMware to mitigate this problem.

Asserting that resources are created correctly

The hardest part of this approach is asserting that a Java EE configuration exists (or does not exist) and has the expected properties. For this we must inspect the configuration. Unfortunately each of the three application servers mentioned requires a different method to do that.

Inspecting the IBM WebSphere configuration

To inspect the IBM WebSphere configuration we execute the wsadmin script below and then parse the output to build a Map<String,String> of the properties of the object that is pointed to by the containment path. If the script exits with a non-zero exit code we know that the object does not exist in the configuration.

import sys

# Read command line arguments
containmentpath = sys.argv.pop(0)

# Get object ID by containment path
objectid = AdminConfig.getid(containmentpath)
if objectid == "":
    print "Object with containment path " + containmentpath + " not found"
    sys.exit(1)

# Print object properties
print AdminConfig.show(objectid)
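Parsing that output into a Map<String,String> is then straightforward. Here is a minimal sketch, assuming the usual AdminConfig.show() format of one bracketed [name value] pair per line (the class name is made up):

import java.util.HashMap;
import java.util.Map;

public class WsadminShowOutputParser {

    /**
     * Parses the output of AdminConfig.show(), which prints one
     * attribute per line in the form "[name value]".
     */
    public Map<String, String> parse(String wsadminOutput) {
        Map<String, String> properties = new HashMap<String, String>();
        for (String line : wsadminOutput.split("\\r?\\n")) {
            line = line.trim();
            if (!line.startsWith("[") || !line.endsWith("]")) {
                continue; // skip any banner lines printed by wsadmin itself
            }
            // Strip the brackets and split on the first space: everything
            // before it is the attribute name, the rest is the value.
            String pair = line.substring(1, line.length() - 1);
            int space = pair.indexOf(' ');
            if (space > 0) {
                properties.put(pair.substring(0, space), pair.substring(space + 1).trim());
            }
        }
        return properties;
    }
}

A value that itself contains spaces or nested brackets would need more careful handling, but this shows the idea.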

Inspecting the Oracle WebLogic configuration

Oracle WebLogic has a concept very similar to IBM WebSphere's containment path, but there is no specific name for it, so in the WLST script below we also call it a containment path. Apart from the fact that WLST requires you to connect to the administration server inside the script, while wsadmin does the connecting for you based on its command line parameters, the script for WebLogic does the same work as the script for WebSphere:

import sys

# Read command line arguments
scriptname = sys.argv.pop(0)
username = sys.argv.pop(0)
password = sys.argv.pop(0)
url = sys.argv.pop(0)
containmentpath = sys.argv.pop(0)

# Connect to the WebLogic admin server
connect(username, password, url)

# List the properties of the object
ls(containmentpath)

# Disconnect and exit
disconnect()
exit()
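In both cases the JUnit test runs the inspection script as an external process and captures its output. Here is a minimal sketch of such an invocation; the class name and the null-on-failure convention are mine, not actual Deployit code:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ScriptRunner {

    /**
     * Runs a command line, e.g. wsadmin.sh or wlst.sh plus the inspection
     * script and its arguments, and captures the combined output.
     * Returns null when the process exits with a non-zero exit code,
     * i.e. when the object was not found in the configuration.
     */
    public String run(String... commandLine) throws Exception {
        ProcessBuilder builder = new ProcessBuilder(commandLine);
        builder.redirectErrorStream(true);
        Process process = builder.start();

        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
        StringBuilder output = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            output.append(line).append('\n');
        }

        return process.waitFor() == 0 ? output.toString() : null;
    }
}

For the WLST script above the command line would be something like wlst.sh inspect.py weblogic secret t3://localhost:7001 followed by the containment path; the script name, credentials and URL here are made up, of course.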

Inspecting the JBoss configuration

JBoss does not have a Python-based administrative scripting interface, but it does have twiddle. Twiddle's query command allows us to test for the existence of an MBean, while the get command allows us to retrieve a property of an MBean. There is no command to get all the properties of an MBean, but twiddle executes so fast that it is not a problem to execute it multiple times.
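Here is a rough sketch of how that could look, reusing the hypothetical ScriptRunner from the previous section; the twiddle location, the server address and the output parsing are my assumptions:

import java.util.HashMap;
import java.util.Map;

public class TwiddleInspector {

    private static final String TWIDDLE = "/opt/jboss/bin/twiddle.sh"; // assumed install location

    private final ScriptRunner runner = new ScriptRunner();

    /** The MBean exists if the query command prints a matching name. */
    public boolean exists(String objectName) throws Exception {
        String output = runner.run(TWIDDLE, "-s", "localhost", "query", objectName);
        return output != null && output.trim().length() > 0;
    }

    /** Retrieves the attributes one by one; twiddle is fast enough to run repeatedly. */
    public Map<String, String> getProperties(String objectName, String... attributes) throws Exception {
        Map<String, String> properties = new HashMap<String, String>();
        for (String attribute : attributes) {
            String output = runner.run(TWIDDLE, "-s", "localhost", "get", objectName, attribute);
            if (output == null) {
                continue; // MBean or attribute not found
            }
            // twiddle's get command prints "attribute=value"; keep only the value
            int equals = output.indexOf('=');
            if (equals >= 0) {
                properties.put(attribute, output.substring(equals + 1).trim());
            }
        }
        return properties;
    }
}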

To be continued...

All this allows us to verify that the configuration has been created correctly, but it still does not tell us whether an application can really use that configuration. I will discuss how to do that in the next part (but I promise you I won't make a series as long as the JPA implementation patterns series ;-)). And I'll also explain how Maven and VMware fit into all of this.

Comments (3)

  1. Lars Vonk

    December 7, 2009 at 9:57 pm

    Hi Vincent,

    You should indeed always use the tools the team is most comfortable with. Mix and match. Use the best tool for the job. When something really is a unit test and it is easier to write with JUnit, you should of course use that.

    I have some remarks/questions though about your reasoning:

    1. You say "If we wanted to test just a single step, we had to make a deployment scenario that used that step just to be able to test it." and "And while debugging code under test in FitNesse is complicated enough, debugging a Fixture is even harder". Without knowing the code under test, these sound to me like typical signs of code smells. Why do you need to debug often? Why is a single step hard to test? Needing to debug often is a smell of a lack of (smaller) unit tests.

    2. "While it might be a very nice to tool do user acceptance testing and allow end users to write (or at least understand) tests, the technical nature of our product already ensured that we would not have tests for non-technical end-users".
    As I understand the product it will be used by sysadmins right? Do they understand/read/extend the junit tests? I hardly believe that. Do not underestimate the power of readable tests. It is not only providing them readable tests, but you also provide the ability to add tests. This is very good for building trust in an application (and team).

    Regards, Lars

  2. Vincent Partington

    December 8, 2009 at 10:57 am

    @Lars: Thanks for your comments. You make some good points! I'll reply to them one by one.

    1a. To start from the end of your point: I guess you could say that these tests are actually our unit tests. They are tests for what is basically about 20 lines of Python code and some Java glue. It's just that they require a very big test fixture (in the non-FitNesse meaning of the word ;-)) and that they integrate with the middleware. These tests are not the integration tests for our product itself but tests for the integration with the middleware. And because JUnit provides a nice way to run these steps in isolation, we use JUnit to develop and therefore debug our Python/Java code. Using FitNesse for that was just too painful (what with the remote attaching of the debugger and all).

    1b. To set up the actual integration test for our product we'd want to test from our Flex UI (or the command line interface) straight to the middleware. Most of our product integration woes had to do with the BlazeDS or Hessian serialization used by the Flex UI and the CLI respectively. For the Flex UI we looked into all kinds of Flex UI testing frameworks about a year ago but found none that worked very well. Recently I've seen good things from FlexMonkey, so we'll try again soon.

    2. The product will indeed be used by sysadmins. And since they are usually not that versed in Java, we chose FitNesse in the first place. But since we are building a standard product and there are no Java-unsavvy people on our team, it really did not fit us that well. If this were a bespoke project where non-Java developers were more involved in the daily development of the product, FitNesse would certainly make sense.

    BTW, I've changed the title of this blog (and the URL) to reflect the kind of integration this blog is talking about. There was a typo in the old title anyway. ;-)

  3. cuppa-joe

    January 23, 2010 at 7:15 pm

    "Without knowing the code under test, these sound to me as typical signs of code smells. Why do you need to debug often?"

    Suggesting that a system does not need to be debuggable is a typical sign of developer smells.
