In a recent post, my colleague Barend showed how to conditionally ignore certain tests in JUnit. In this post we will take a look at how the same problem can be solved in TestNG, another popular testing framework. Read more
A useful technique that I reinvent every once in a while is conditionally ignoring JUnit tests. Unit tests are supposed to be isolated, but occasionally you hit something that makes assumptions about the environment, such as code that executes a platform-specific shell command or (more commonly) an integration test that assumes the presence of a database. To keep such a test from breaking unsuspecting builds, you can @Ignore it, but that means you have to edit the code to run the test in a supported environment.
Proper Maven projects keep their integration tests in a separate source folder, src/it/java, and add an extra execution of the maven-surefire-plugin to their pom.xml, tied to the integration-test phase of the Maven build lifecycle. This is Maven's recommended way of setting these up. It ties in beautifully with the pre-integration-test and post-integration-test phases, which can be used to set up and tear down the environmental dependencies of the integration test suite, such as initializing a database to a known state. There is nothing wrong with this approach, but it's a bit heavy-handed for the simplest of cases.
In these simple situations it's easier to just keep the integration tests in the src/test/java directory and run them along with all your other tests. However, you still need a way to trigger them only when the right environment is present. This is easily dealt with by writing your own JUnit TestRunner and some custom annotations, as shown below.
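A minimal sketch of such a runner (in Scala here; the @RequiresDatabase annotation name and the environment check are illustrative assumptions, not part of any library):

```scala
import org.junit.runner.notification.RunNotifier
import org.junit.runners.BlockJUnit4ClassRunner
import org.junit.runners.model.FrameworkMethod

// Assumes a Java-defined marker annotation, since runtime-retained
// annotations cannot be declared in plain Scala:
//
//   @Retention(RetentionPolicy.RUNTIME)
//   @Target(ElementType.METHOD)
//   public @interface RequiresDatabase {}

class ConditionalIgnoreRunner(klass: Class[_]) extends BlockJUnit4ClassRunner(klass) {

  override def runChild(method: FrameworkMethod, notifier: RunNotifier): Unit = {
    val requiresDb = method.getAnnotation(classOf[RequiresDatabase]) != null
    if (requiresDb && !databaseAvailable) {
      // Report the test as ignored instead of letting it run and fail.
      notifier.fireTestIgnored(describeChild(method))
    } else {
      super.runChild(method, notifier)
    }
  }

  // Illustrative check: the database counts as available only when the
  // build passes -Dintegration.db.url=...
  private def databaseAvailable: Boolean = sys.props.contains("integration.db.url")
}
```

A test class then opts in with @RunWith(classOf[ConditionalIgnoreRunner]) and marks its environment-dependent test methods with @RequiresDatabase; in an unsupported environment those tests show up as ignored rather than failed.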
This year I have been working on several systems based on Akka, usually in combination with the excellent Spray framework, to build fully asynchronous, actor-driven REST servers. All in all this has been going very well, but recently I had to reinvent a certain wheel for the third time (on the third project), so I thought it might be good to blog about it so others can benefit from it too.
The problem is very simple: Akka has great unit test support, but unfortunately (for me) that support is based on ScalaTest. Now, there is nothing wrong with ScalaTest, but I personally prefer to write my tests using Specs2, and it turns out that mixing Akka TestKit with Specs2 is a little tricky. The Akka documentation does mention these problems and gives a brief overview of ways to work around them, but I could not find any current example code online.
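In outline, the workaround looks something like the sketch below (assuming a specs2 version that ships the mutable.SpecificationLike trait, and a pre-2.4 Akka where system.shutdown() still exists). Since both TestKit and specs2's mutable.Specification are classes, and Scala allows only single class inheritance, you extend TestKit and mix in the SpecificationLike trait instead:

```scala
import akka.actor.{ Actor, ActorSystem, Props }
import akka.testkit.{ ImplicitSender, TestKit }
import org.specs2.mutable.SpecificationLike

class EchoActorSpec extends TestKit(ActorSystem("test"))
    with SpecificationLike
    with ImplicitSender {

  // specs2 runs examples concurrently by default; TestKit is stateful,
  // so force sequential execution.
  sequential

  "An echo actor" should {
    "send back whatever it receives" in {
      val echo = system.actorOf(Props(new Actor {
        def receive = { case msg => sender ! msg }
      }))
      echo ! "hello"
      expectMsg("hello")
      success
    }
  }

  // Shut the actor system down once all examples have run.
  step(system.shutdown())
}
```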
A couple of weeks ago I realised something. As an Agile tester it’s really hard to communicate bugs! Testers are known for bringing bad news, but it is not easy to do this well, especially when you’re in a Scrum team and the heat is on, with bugs and issues flying all around.
As part of testing and demonstrating our advanced deployment automation platform Deployit, we at XebiaLabs use a lot of cloud and DevOps tooling, both to handle all the different types of middleware we support and to work with the build, CI and Ops tooling with which we integrate.
I was recently setting up a Vagrant environment to demonstrate Deployit's Puppet module, which automatically registers new Puppet-provisioned middleware with your deployment automation platform so that applications can be deployed to it. Along the way I ended up wrestling for quite some time with a tricky VirtualBox problem.
In 2009 Stuart Reid predicted that within 10 years 70% of all software development would be done with some form of Agile methodology. Due to the growing need for “hands”, this would mean also having to employ less qualified testers on these projects. The first point he made is absolutely valid; the second is only valid when viewed as a commercial opportunity (you don’t need “hands” if you work with qualified people), and maybe he only said it to comfort the people who fear losing their jobs because of this shift. It’s obvious that now that Agile is becoming mainstream, there is a growing demand for qualified testers. Read more
Do you think that you do TDD well because you have been doing it for years now? That is what I thought until I did an exercise called “TDD as if you mean it” and it put my feet back on the ground!
At two different TDD workshops I have tried to build an application following the rules of “TDD as if you mean it”. The first time was at a Coderetreat in Amsterdam and the second time at an XKE session at Xebia. Although I have been practicing TDD for a while now, the result in both sessions was that I had few tests, even less production code, and an application that did not work.
MapReduce is a programming model for writing algorithms that process large quantities of data in a (relatively) short time. The building blocks of such programs are very simple map and reduce functions. Composing those simple functions into programs that perform increasingly complex transformations on the data gets harder and harder, and therefore requires more thorough testing at an early stage. This blog attempts to outline a simple method for testing the algorithm of a MapReduce program based on scoobi.
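The core of the approach is to keep the per-record logic in pure functions, so the algorithm can be tested without spinning up Hadoop at all; the scoobi pipeline then merely wires those functions together. A minimal sketch, with word count standing in for the real algorithm (the names are illustrative, and the scoobi wiring itself is omitted):

```scala
import org.specs2.mutable.Specification

// The map and reduce steps as pure functions: no Hadoop, no scoobi.
object WordCount {
  def mapper(line: String): Seq[(String, Int)] =
    line.split("\\s+").filter(_.nonEmpty).map(word => (word.toLowerCase, 1)).toSeq

  def reducer(counts: Seq[Int]): Int = counts.sum
}

class WordCountAlgorithmSpec extends Specification {
  "the mapper" should {
    "emit a (word, 1) pair for every word in a line" in {
      WordCount.mapper("To be or not to be") must_== Seq(
        ("to", 1), ("be", 1), ("or", 1), ("not", 1), ("to", 1), ("be", 1))
    }
  }
  "the reducer" should {
    "sum all the counts emitted for a single key" in {
      WordCount.reducer(Seq(1, 1, 1)) must_== 3
    }
  }
}
```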
It should be common knowledge that for certain types of automated tests, you do not want to rely on the availability of external services for a number of reasons:
- Uptime of said service (your tests fail if the service is unavailable)
- Dynamic nature of the data (makes your assertions harder)
- Execution speed of your tests
- Excess load generated on the service
Ideally, you therefore stub out the external service. Inside your unit tests, you do that using Mock Objects, for example. This is actually harder to do for integration tests - you do not use mock objects in integration tests, because that could change the observed behavior of your application.
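For the unit-test case, stubbing with a mock object might look like the sketch below, using specs2's Mockito integration (ProductService is a hypothetical stand-in for whatever client interface wraps the external service):

```scala
import org.specs2.mock.Mockito
import org.specs2.mutable.Specification

// Hypothetical client interface for the external webservice.
trait ProductService {
  def lookup(id: String): String
}

class ProductLookupSpec extends Specification with Mockito {
  "code depending on ProductService" should {
    "run against a canned response instead of the live service" in {
      val service = mock[ProductService]
      service.lookup("42") returns """{"id": "42", "name": "test product"}"""

      // A real test would pass `service` to the code under test here.
      service.lookup("42") must contain("test product")
    }
  }
}
```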
In one of our projects, we've struggled with this problem for quite some time. There are two major components: an iPhone app and a server-side component, both of which talk to an external webservice, the former to retrieve the data to display in the app, the latter for the data it works with on the server. In our integration tests, we simply used the production webservice and ran some shallow assertions on its responses, with mixed results.
Recently though, we drew the line. Running integration/UI tests (using KIF, on iOS) against data that changes depending on the time of day led to unpredictable results, or to assertions we simply couldn't write because the data kept changing (not helped by the fact that KIF has no actual assertions and cannot match on partially matching UI elements). So we said: "Okay, we need predictable results. Make that damn fake webservice already."
What it needed to do was:
- Return fixed, predictable results for specific, recognised requests
- Forward the request to the currently used live webservice, so our existing tests don't all break
- (later) Make the returned data variable: some tests rely on the test data containing dates that lie in the future
- Not compromise security: the live webservice requires HTTP authentication
Of course, it also needed to be done quickly. We had postponed building this fake webservice for a while because it seemed like a lot of work, but once we finally decided to do it, we figured: "How hard can it be?" We had been waiting for an opportunity to use NodeJS for a while, and as far as we could see this was the ideal case: a read-only, REST-like webservice that mainly does I/O (against the filesystem and the external webservice), and that should be easy and lightweight to build.
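The whole service boils down to "serve a fixture if we recognise the request, otherwise proxy to the live webservice and pass the Authorization header through". The article has the actual NodeJS code; purely to illustrate that logic, here is a rough sketch in Scala on the JDK's built-in HttpServer (the upstream URL, port, and fixture layout are made up):

```scala
import java.io.InputStream
import java.net.{ HttpURLConnection, InetSocketAddress, URL }
import java.nio.file.{ Files, Paths }

import com.sun.net.httpserver.{ HttpExchange, HttpHandler, HttpServer }

object FakeWebservice extends App {
  val upstream   = "https://live.example.com" // hypothetical live webservice
  val fixtureDir = Paths.get("fixtures")      // canned responses live here

  val server = HttpServer.create(new InetSocketAddress(8080), 0)
  server.createContext("/", new HttpHandler {
    def handle(exchange: HttpExchange): Unit = {
      val path = exchange.getRequestURI.getPath
      // A request is "recognised" when a fixture exists for its path,
      // e.g. GET /products/42 -> fixtures/products/42.json
      val fixture = fixtureDir.resolve(path.stripPrefix("/") + ".json")
      val (status, body) =
        if (Files.exists(fixture)) (200, Files.readAllBytes(fixture))
        else forward(exchange, path)
      exchange.sendResponseHeaders(status, body.length)
      exchange.getResponseBody.write(body)
      exchange.close()
    }
  })
  server.start()

  // Unrecognised requests are proxied to the live service; the
  // Authorization header is passed through so HTTP auth keeps working.
  def forward(exchange: HttpExchange, path: String): (Int, Array[Byte]) = {
    val conn = new URL(upstream + path).openConnection().asInstanceOf[HttpURLConnection]
    Option(exchange.getRequestHeaders.getFirst("Authorization"))
      .foreach(conn.setRequestProperty("Authorization", _))
    val status = conn.getResponseCode
    val stream: InputStream = if (status < 400) conn.getInputStream else conn.getErrorStream
    val body =
      if (stream == null) Array.empty[Byte]
      else try Stream.continually(stream.read()).takeWhile(_ != -1).map(_.toByte).toArray
      finally stream.close()
    (status, body)
  }
}
```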
So we hacked it together in a few steps. Read more for the whole article and the code.
Last week I joined the QA&TEST conference in the beautiful town of Bilbao. In this post I’ll give an impression of some of the presentations I attended and the ideas I picked up. The most valuable sessions for me were "Pushing the Boundaries of User Experience" by Julian Harty and "Automated Reliability Testing via Hardware Interfaces" by Bryan Bakker. Read about them in more detail in the article.