TDD

The "Performance Series" Part 1. Test Driven Performance.

Wilco Koorn

A number of my colleagues and I recently decided to share our knowledge regarding "performance" on this medium. You are now reading the first blog in a series in which I present a test-driven approach to ensuring proper performance when we deliver a project.

Test driven

First of all, note that "test-driven" is (or should be ;-) common practice in the Java coding world. It is, however, usually applied at the unit-test level only: one writes a unit test that shows a particular feature is not (properly) implemented yet. The test result is "red". Then one writes the code that "fixes" the test, so the test now succeeds and shows "green". Finally, one looks at the code and "refactors" it to ensure aspects like maintainability and readability are met. This software development approach is known as "test-driven development" and is sometimes also referred to as "red-green-refactor".
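As a minimal illustration of one such red-green cycle (the Greeter class and its test below are made-up examples, not code from this series):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreeterTest {

    // Red: this test fails as long as Greeter.greet() is not implemented.
    @Test
    public void greetsByName() {
        assertEquals("Hello, Wilco!", new Greeter().greet("Wilco"));
    }
}

// Green: the simplest code that makes the test pass; refactoring comes last.
class Greeter {
    String greet(String name) {
        return "Hello, " + name + "!";
    }
}

 Read more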

How to improve your TDD skills

Arjan Wulder

Do you think that you do TDD well because you have been doing it for years now? That is what I thought, until I did an exercise called “TDD as if you mean it” and it brought me right back down to earth!

At two different TDD workshops I have tried to build an application following the rules of “TDD as if you mean it”. The first time was at a Coderetreat in Amsterdam and the second time at an XKE session at Xebia. Although I have been practicing TDD for a while now, the result of the exercise in both sessions was that I had few tests, even less production code, and an application that did not work.

 Read more

Agile says: Nothing will ever be perfect

Maarten Winkels

Wouldn't it be sweet if your whole life were perfect? Your wife would fulfill your every wish. Your children would be perfect examples of responsible, happy people growing up. At work, your colleagues would be the nicest people, and working with them would always be fun. Your team would feel responsible for every action they (proactively) take, and the software systems you produce and maintain would be flawless and run like well-oiled machines...

You need to wake up! Nothing will ever be perfect and Agile knows it!
 Read more

How to Fail at Writing Bad Code

Age Mooij

Trying to produce bad quality code is quite hard when you are using Test Driven Development (TDD), even when you are doing it wrong on purpose.

Recently, Iwein and I were preparing some labs for a developer training, and the plan was to create some really bad quality Java code as a starting point. Students would then be asked to clean it up and add some new features, all of this of course with the intent of showing the effect of bad code quality on your ability to quickly add new features. This was going to be a piece of cake!

After some brainstorming for interesting standalone programming challenges, we came up with the idea of writing a JSON to XML converter. It should be able to take any valid JSON string and convert it into a simple XML representation. Out of habit and without really considering the option of skipping this step, we started with a simple failing test. Here it is:

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

// 'converter' is an instance of the converter class under test (not shown here)
@Test
public void shouldConvertTopLevelEmptyArray() {
    assertThat(converter.convert("[]"), is("<array></array>"));
}

Simple, right? To implement our converter we decided to use the well-known "red, green, as little refactoring as possible" anti-pattern, which we expected to result in lots of copy-paste duplication, very long methods, and the other typical code smells we all know and loathe. Our first implementation approach was to go for some all-time favorite candidates for producing bad code: string manipulation and regular expressions. As Jamie Zawinski famously said: "Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems." We had created a surefire recipe for disaster. It was going to be all downhill from here, or so we thought.
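For illustration (this is a sketch in the spirit of the anti-pattern just described, not code from the post itself), the crudest "green" step for the test above could simply special-case the empty array:

public class Converter {
    // Just enough string manipulation to turn the test green; no refactoring,
    // as per the "red, green, as little refactoring as possible" anti-pattern.
    public String convert(String json) {
        if (json.trim().equals("[]")) {
            return "<array></array>";
        }
        throw new UnsupportedOperationException("not implemented yet");
    }
}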

 Read more

Middleware integration testing with JUnit, Maven and VMware, part 3 (of 3)

Vincent Partington

Last year, before the Christmas holidays ;-), I described how we do middleware integration testing at XebiaLabs, and how we deploy test servlets by wrapping them in WAR and EAR files that get generated on the fly. There is only one thing left to explain: how do we integrate these tests into a continuous build using Maven and VMware?

Running the middleware integration tests

So let's start with the Maven configuration. As I mentioned in the first blog of this series, the integration tests are recognizable by the fact that their class names end in Itest. That means they won't get picked up by the default configuration of the Maven Surefire plugin. And that is fortunate, because we don't always want to run these tests. Firstly, they require a very specific test setup (the application server configurations should be in an expected state, see below), and secondly, they can take a long time to complete, which would get in the way of the quick turnaround we want from commit builds in our continuous integration system.
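As a rough sketch (the actual XebiaLabs configuration is not shown in this teaser, and the profile id is made up), a dedicated Maven profile could point Surefire at the Itest classes explicitly:

<profile>
    <id>itest</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <includes>
                        <!-- pick up only the integration tests -->
                        <include>**/*Itest.java</include>
                    </includes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

With such a profile in place, mvn test -Pitest would run just the integration tests, while a plain mvn test keeps the fast commit build.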
 Read more

Middleware integration testing with JUnit, Maven and VMware, part 2 (of 3)

Vincent Partington

Last week I wrote about the approach we use at XebiaLabs to test the integrations with the different middleware products our Java EE deployment automation product Deployit supports.

The blog finished with the promise that I would discuss how to test that an application can really use the configurations that our middleware integrations (a.k.a. steps) create. But before we delve into that, let us first answer the question of why we need this at all. If the code can configure a datasource without the application server complaining, it must be OK for an application to use it, right? Well, not always. While WebSphere and WebLogic contain some functionality to test the connection to the database, and thereby verify whether the datasource has been configured correctly, this functionality is not available for other configurations such as JMS settings. And JBoss has no such functionality at all. So the question is: how can we prove that an application can really work with the configurations created by our steps?
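The answer the series builds toward is the test servlets mentioned above. As a rough illustration only (the class name and JNDI binding here are assumptions, not Deployit code), such a servlet might look up the freshly created datasource and try to obtain a real connection:

import java.io.IOException;
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class DataSourceTestServlet extends HttpServlet {

    // Hypothetical JNDI name; in reality this would point at the
    // datasource that the step under test has just created.
    private static final String JNDI_NAME = "java:comp/env/jdbc/testDS";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            DataSource ds = (DataSource) new InitialContext().lookup(JNDI_NAME);
            // Actually opening a connection proves the configuration works,
            // regardless of whether the application server can test it itself.
            try (Connection connection = ds.getConnection()) {
                resp.getWriter().println("OK: " + connection.getMetaData().getURL());
            }
        } catch (Exception e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.toString());
        }
    }
}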
 Read more

Middleware integration testing with JUnit, Maven and VMware, part 1 (of 3)

Vincent Partington

For Deployit, XebiaLabs' automated deployment product for Java EE applications, we are always building and modifying integrations with middleware systems such as IBM WebSphere, Oracle WebLogic and the JBoss application server. These integrations are small enough that they can be rearranged to create many different deployment scenarios. A typical step, as we call these integrations, would be "Create WebSphere datasource" or "Restart WebLogic Server". So how do we test that code?

We've had some success using FitNesse and VMware to do integration tests on our deployment scenarios. But there were a few problems with this approach:

  • We could only test complete deployment scenarios this way. If we wanted to test just a single step, we had to build a deployment scenario around that step just to be able to test it.
  • Because FitNesse does not provide any feedback while a test is running, and because the steps, let alone the deployment scenarios, can sometimes take a while to execute, we had little insight into the progress of a test.
  • While it is possible to debug a FitNesse Fixture using Eclipse, the process is not very convenient when debugging a technical component such as a step.
  • To verify that a deployment scenario had executed successfully, we often had to extend our FitNesse Fixture. And while debugging code under test in FitNesse is complicated enough, debugging a Fixture is even harder!

Clearly we needed a different approach if we wanted to develop new steps easily.
 Read more

Mocking the 'unmockable': too much of a good thing?

Andrew Phillips

Static calls, final classes, objects created in test code: there are few things some of the current mocking frameworks cannot handle. Using powerful approaches like bytecode instrumentation or custom class loaders, these libraries make code that was previously a 'no go' area amenable to unit testing. Moreover, they do this in an elegant and convenient manner that will feel familiar to developers used to 'standard' mocking frameworks.
The question is: does such power perhaps come with hidden dangers? Might the ability to test more actually result in lower code quality?
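As an illustration of the kind of test these frameworks unlock (the post does not name one here, but PowerMock is a typical example of the category; SystemClock below is a made-up class):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Hypothetical production class: final, with a static method - normally unmockable.
final class SystemClock {
    static long nowMillis() {
        return System.currentTimeMillis();
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(SystemClock.class)
public class SystemClockTest {

    @Test
    public void canStubAStaticCallOnAFinalClass() {
        // PowerMock's custom class loader rewrites SystemClock so that
        // the static call can be stubbed like any Mockito expectation.
        PowerMockito.mockStatic(SystemClock.class);
        PowerMockito.when(SystemClock.nowMillis()).thenReturn(42L);

        assertEquals(42L, SystemClock.nowMillis());
    }
}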
 Read more

Testing the testers: code samples from a TDD framework comparison

Andrew Phillips

A while back I was preparing a presentation on mocking and testing frameworks for Java. Since part of the aim was to demonstrate some real, running code, I ended up spending quite some time copying, pasting, extending and correcting various examples gleaned from readmes, Javadoc, wiki pages and blog posts. Since then, this codebase has been extended with various new features I've come across, and I have often referred to it for experiments, as a helpful reference, and the like.

I imagine this kind of "live" reference could also be useful to others, so I thought I'd share it.
 Read more