MapReduce is a programming model for writing algorithms that process large quantities of data in a (relatively) short time. The building blocks for such programs are very simple map and reduce functions. As programs composed from those simple functions take on more and more complex tasks, they become harder to get right, and thus require more thorough testing in the early stages. This blog post outlines a simple method for testing the algorithm of a MapReduce program based on scoobi.
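For readers unfamiliar with the model, here is a minimal sketch of the two building blocks, illustrated as a word count. This is plain JavaScript for illustration only, not scoobi (which is a Scala library); the function names are mine:

```javascript
// map: one input record -> a list of (key, value) pairs
function map(line) {
  return line.split(/\s+/).filter(w => w.length > 0).map(w => [w, 1]);
}

// reduce: one key plus all of its values -> one aggregated result
function reduce(key, values) {
  return [key, values.reduce((a, b) => a + b, 0)];
}

// A tiny local "framework": apply map, shuffle (group by key), apply reduce.
// A real MapReduce framework distributes these phases over many machines.
function mapReduce(records) {
  const groups = new Map();
  for (const record of records) {
    for (const [k, v] of map(record)) {
      if (!groups.has(k)) groups.set(k, []);
      groups.get(k).push(v);
    }
  }
  return [...groups].map(([k, vs]) => reduce(k, vs));
}

console.log(mapReduce(['to be or not', 'to be']));
// [ [ 'to', 2 ], [ 'be', 2 ], [ 'or', 1 ], [ 'not', 1 ] ]
```

Because map and reduce are plain functions, they can be unit-tested locally on small inputs, which is exactly what makes early-stage testing of the algorithm feasible.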
It should be common knowledge that for certain types of automated tests, you do not want to rely on the availability of external services for a number of reasons:
- Uptime of said service (your tests fail if the service is unavailable)
- Dynamic nature of the data (makes your assertions harder)
- Execution speed of your tests
- Excess load generated on the service
Ideally, you therefore stub out the external service. In your unit tests, you can do that using mock objects, for example. For integration tests this is harder: you don't want to use mock objects there, because that could change the observed behavior of your application.
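As a minimal sketch of what stubbing looks like at the unit-test level (hand-rolled, no mocking library; the names are made up for illustration):

```javascript
// Production code depends on a client object with a fetchUser(id) method.
function greeting(client, userId) {
  const user = client.fetchUser(userId);
  return `Hello, ${user.name}!`;
}

// In the real app, the client would make an HTTP call to the external
// service. In the unit test we pass a stub that returns fixed data, so
// the test is fast, deterministic, and independent of the service's uptime.
const stubClient = {
  fetchUser: (id) => ({ id, name: 'Alice' })
};

console.log(greeting(stubClient, 42)); // "Hello, Alice!"
```

This works because the unit test exercises your code in isolation. An integration test, by definition, exercises the wiring between components, which is why replacing a component with an in-process stub defeats its purpose.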
In one of our projects, we've struggled with this problem for quite some time. There are two major components in it, an iPhone app and a server-side component, which both talk to an external webservice to retrieve the data to display in the app and to work with on the server. In our integration tests, we simply used the production webservice and ran some shallow assertions on the result, with varying success.
Recently though, we drew the line. Running integration / UI tests using KIF for iOS against data that changes depending on the time of day led to unpredictable results, or to assertions we simply couldn't make because the data kept changing (and because KIF has no real assertions and cannot match on partially matching UI elements). So we said: "Okay, we need predictable results - make that damn fake webservice already."
What it needed to do was:
- Return fixed, predictable results for specific, recognised requests
- Forward the request to the currently used live webservice, so our existing tests don't all break
- (later) Add a feature to make the returned data variable, as some tests rely on the returned test data containing dates that lie in the future
- Not compromise security - the live webservice requires HTTP authentication.
Of course, it also needed to be done quickly. We postponed making this fake webservice for a while because it seemed like a lot of work, but once we finally decided on it, we figured "How hard can it be?". We'd been waiting for an opportunity to use NodeJS for a while, and as far as we could see, this was the ideal case for it: we have a read-only, REST-like webservice that mainly does I/O (from the filesystem and the external webservice), and it should be easy and lightweight to build.
So we set out to hack it together in a few steps. Read more for the whole article and the code.
Last week I joined the QA&TEST conference in the beautiful town of Bilbao. In this post I’ll give an impression of some of the presentations I attended and the ideas I picked up. The most valuable sessions I attended were “Pushing the Boundaries of User Experience” by Julian Harty and “Automated Reliability Testing via hardware interfaces” by Bryan Bakker. Read about them in more detail in the article.
Over the past five to ten years, continuous integration has become a no-brainer for every medium- to large-scale software development project. It's hard to imagine going back to not having every commit (or push) automatically trigger a build of the code and, most importantly, a test run of the code. That test run will surely include unit tests, but setting it up to also run integration tests used to be harder: you need to automatically deploy the application to the target middleware environment and then run the integration tests against that environment.
The Deployit plugin for the new 3.3 release of Atlassian Bamboo adds the enterprise-scale deployment capabilities of XebiaLabs Deployit to Bamboo. This allows you to speed up your development process by adding automated deployment to your continuous integration setup and take the first step towards continuous deployment and continuous delivery. Instead of deployment being a bottleneck in your development process, it will be an integrated part of it. You can test your application on the target platform as soon as possible, find platform incompatibilities and deployment issues early on, and, when it's time to deploy to the production environment, your deployment will be quick and reliable.
When working on a mobile Android application, I was confronted with the fact that the backend server that was to deliver the REST services wasn’t available yet. But I needed a server, or a good dummy, for testing the Android client against the REST services. So I began my search for a REST mock server.
I started out using the SoapUI REST functionality, but that still lacks a good implementation for my purpose of reacting to REST calls. I ended up with a 10-minute build-your-own REST mock using the Play framework. This blog describes how that was accomplished.
As pointed out in an earlier post, the importance of testing cannot be overstated.
In this post we will delve into BDD of Android apps.
There are a number of other testing tools for Android out there, such as Robolectric and Calculon. Robolectric speeds up test runs by executing them outside of the emulator. Calculon is a DSL for testing views and activities. As Robotium seems the most mature and reliable, it is my preference.
In my current position as Performance Engineer, and in my past position as a Middleware Architect, I did quite some work with closed source performance monitoring and analysis tools (e.g. CA Wily and later AppDynamics).
These tools are expensive, but they also do quite a good job most of the time. There are more tools in the same field, but as far as I know they are all in the same price range.
To name some: Foglight, Dynatrace, Newrelic, JXInsight, Tivoli Performance Viewer, Compuware Gomez.
Around 2006, several initiatives to create open source performance monitoring tools for Java production environments started to appear.
This was mainly because AOP (Aspect-Oriented Programming), the technology used in most of these products, was getting attention in the market and there were significant developments in that area at the time.
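The core idea behind AOP-based monitoring is "around" advice: wrapping existing methods to record timings without touching their code. The Java tools do this with bytecode weaving; here is a hand-rolled JavaScript illustration of the same idea, with hypothetical names:

```javascript
// Replace obj[methodName] with a wrapper that times each call and
// records the measurement, while preserving the original behavior.
function monitor(obj, methodName, timings) {
  const original = obj[methodName];
  obj[methodName] = function (...args) {
    const start = Date.now();
    try {
      return original.apply(this, args);
    } finally {
      timings.push({ method: methodName, ms: Date.now() - start });
    }
  };
}

// Usage: instrument a made-up service object.
const service = {
  lookup(id) { return `result-${id}`; }
};
const timings = [];
monitor(service, 'lookup', timings);
service.lookup(7);
console.log(timings[0].method); // "lookup"
```

The caller of `service.lookup` is unaware of the instrumentation, which is exactly what makes this approach attractive for monitoring production environments.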
I am interested to see how the open source community around these kinds of products is evolving. The outcome is quite surprising…
Xebium offers a simple way to use Selenium IDE (low learning curve) and FitNesse (ease of maintenance) to their fullest when it comes to maintaining web application test suites.
Xebium uses the same keywords as Selenium IDE. This has the huge advantage that nobody has to learn another DSL. Since tests are stated this way, they can be copied between Selenium IDE and FitNesse without a hassle (the FitNesse formatter for Selenium IDE is rather trivial). And to be honest: as long as there are XPath and Regular Expressions in the code, it makes no sense to come up with a substitute for it.
When testing web interfaces, it’s convenient to use an intuitive tool like Selenium IDE: it’s easy to learn and can be used by non-technical people. But it is solely meant for record and playback of test scripts. One of its limitations is that it lacks sufficient options for documenting and managing tests. Furthermore, it has no interface with the backend of the system under test (SUT), to set up preconditions for a test or, for instance, to manipulate or read from a database.
FitNesse is a great tool to do just that: it has a wiki to manage tests, it has a setup and teardown mechanism by default, and it’s easy to add non-invasive test fixtures to interface directly with your SUT. The downside is that it is incapable of doing web tests.
We now have the glue that combines the two, it's called Xebium!