Deployit and Puppet integration, part I

Martin van Vliet

At XebiaLabs, we build Deployit, the most advanced Application Release Automation (ARA) solution on the market. The main reason for customers to use our product is to speed up time to market for new software. The ability to deploy software, without errors and without downtime, at the push of a button is a critical component in our customers' agile, continuous delivery and cloud strategies.

As part of those initiatives, many of our customers are also virtualizing their infrastructure. The functionality that makes Deployit ideal for deploying new releases also makes it a perfect companion to an on-demand infrastructure strategy. When spikes in demand for applications hit, virtualized infrastructure makes it possible to scale up quickly and automatically. But this infrastructure is not terribly useful without an application running on it. Deployit ensures that the newly provisioned servers run the right version of the desired application (configuring load balancers, static HTML, Java or .NET applications and databases) and join in shouldering the increased load.

QCon San Francisco 2009

Martin van Vliet

For a few years now, November has been the month of QCon San Francisco for me. So far it has proven an excellent conference with lots of thought-provoking presentations and conversations. This year was no exception. Read on for my personal high- and lowlights.

Definition of Done for User Stories vs. Bugs

Martin van Vliet

One of the most important concepts in Scrum is the Definition of Done. With it, the Scrum team and stakeholders determine what exactly is needed to finish a user story. Typically it includes one or more of: code complete, developer tested, documented and acceptance tested.

In my current project, the system we are building has been accepted by the client and is in production. At the same time, however, new development on the software is taking place. The bugs and user stories resulting from these two activities end up on the same product backlog and are worked on by the same Scrum team. For the user stories, we include coding, development testing and documentation in the Definition of Done. This has worked well for us and allowed us to create the system in the first place.

However, the bugs are a different story. These are defects in the already existing software that were found by either an external testing team or in production. At first, we were using the same Definition of Done for the bugs as for the user stories. When delivering software fixes for these bugs, our customer would regularly ask us a series of questions we had no answers for:

  • which versions/applications are impacted by this bug?
  • which versions need to be patched?
  • does this affect the interface between applications A and B?
  • is there a workaround for it? Has it been documented?

We realized that the Definition of Done for bugs needed to be different from the one used for regular user stories. By including the questions above, we create transparency about what we need to consider when solving bugs and are better able to meet the customer's expectations.

What do you think? Is it a good idea to use different DoDs for user stories in one backlog?

QCon San Francisco 2008 - Teamwork is an individual skill

Martin van Vliet

One of the best sessions of the first day of QCon for me was the talk "Teamwork is an individual skill" by Christopher Avery. The talk focused on skills and habits that we can learn to become effective team members. This is becoming more and more important, since most of us are in a position where people we have no direct influence over determine whether or not we are successful. A software development team is a good example of this.

Team norming and chartering

Martin van Vliet

Staffing a new project is always a challenge. Juggling availability, experience and personal preferences, you have to find the right mix of people to make the project successful. Once this has happened, the project can get going and the newly formed group is expected to get on with the business of creating software.

One thing that is often forgotten is that the "project team" really isn't a team at all yet -- they are a group of people who have been put together to accomplish a goal. Chances are that at least some of the people have not worked together before and that there is no common ground for them to become a team. According to Bruce Tuckman, their evolution as a team can be divided into four stages:

  1. Forming
  2. Storming
  3. Norming
  4. Performing

QCon San Francisco

Martin van Vliet

Last week, the first QCon conference in the US was held. The conference is targeted at team leads, architects and project managers and aims to be the best environment for learning and networking. It is a relatively small-scale conference (about 400 attendees) that provided an intimate atmosphere and allowed for plenty of discussion and networking with other attendees and speakers. These are my impressions and topics I found most interesting.

Handling bugs with Scrum

Martin van Vliet

On my current project, we are using Scrum to build a data processing and publishing application. Our aim is to deliver working, tested software each sprint. Our team includes testers who test the software we make, as we make it. Any bugs they find we try to resolve as soon as they are discovered. Sometimes, though, bugs cannot be resolved in the sprint in which they are found. These bugs must be dealt with using the Scrum process.

We use the following process for this. At the end of a sprint, all unresolved bugs classified by the testers as major or higher are entered onto the product backlog as separate items. Issues of minor importance are collected in our bug-tracking tool. At the planning meeting for the next sprint, the product owners select the highest-priority items (including bugs) from the product backlog for inclusion on the sprint backlog. Items that are not selected remain on the product backlog, possibly to be picked up in a later sprint.

This process gives the product owners transparency about the remaining workload, both in terms of user stories to be developed and outstanding issues. They can decide on the relative priority of these, giving them full control over the sprint scope. It also creates a bridge between our issue tracking system and the Scrum process, ensuring that these are not two separate worlds.

For this to work it is critical that the testers and the product owners agree on what constitutes a major versus a minor bug. If this is not the case, either unimportant issues show up on the product backlog (and will most likely remain there) or, worse, important issues are left in the minor category, with no visibility to the product owners. A regular review of minor issues with the product owners mitigates this risk and provides further transparency to stakeholders.

Masters of Java

Martin van Vliet

Picture this: a dimly lit, basement-like room filled with PCs, a large projection screen showing the current standings and 30 teams of 2 people each hunched over a computer. Although just as hectic, this is not a multiplayer game of Quake or Doom, but rather the Masters of Java programming contest.

The contest runs for a whole day, with each team competing in 5 tasks (out of 6; each team was assigned one task they would not be competing in). Each task contains one or more Java source files, with one class or method still to be implemented. The client software (an applet) behaves like an IDE (though without handy Javadoc lookup or code completion) and is used to compile the code, run the test cases and submit the results.

Our team (named Inspiring 42 after a counting mishap :) ) participated for the first time. Although we did have experience in other programming contests, the MoJ format was new to us: the real-time program/compile/test cycle, the short time limit for each task (30 minutes) and the large visible clock (and accompanying sound effects, like the crash of a gong at the start of the round and an annoying bleep-bleep sound when another team submits a solution).

The organizers published a preview of the cases on the MoJ site days before the event. During the first assignment (validating Sudoku puzzles), it showed that a lot of the contestants had looked at these and guessed what the tasks might be: solutions for the Sudoku problem were entered very quickly. We had also done our homework and managed to be the first competing team to successfully submit a solution.
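
I no longer have the contest skeleton, so the method name and the int[][] grid representation below are my own assumptions rather than the actual MoJ code, but a minimal validity check along these lines captures the essence of the Sudoku task: no duplicate digits in any row, column or 3x3 block.

    import java.util.HashSet;
    import java.util.Set;

    public class SudokuValidator {

        // Returns true if the 9x9 grid contains no duplicate digits in any
        // row, column or 3x3 block. Zeroes are treated as empty cells.
        public static boolean isValid(int[][] grid) {
            for (int i = 0; i < 9; i++) {
                Set<Integer> row = new HashSet<Integer>();
                Set<Integer> col = new HashSet<Integer>();
                Set<Integer> block = new HashSet<Integer>();
                for (int j = 0; j < 9; j++) {
                    if (grid[i][j] != 0 && !row.add(grid[i][j])) return false;
                    if (grid[j][i] != 0 && !col.add(grid[j][i])) return false;
                    // Cell j of block i, where blocks are numbered 0..8 row by row.
                    int r = (i / 3) * 3 + j / 3;
                    int c = (i % 3) * 3 + j % 3;
                    if (grid[r][c] != 0 && !block.add(grid[r][c])) return false;
                }
            }
            return true;
        }
    }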

For the second task we had to implement a simple load-balancing algorithm. Again, we finished first. Everything was looking good and the main prize (a Sun Opteron workstation) was almost ours!

The third task had to do with regular expressions and this was really tough for us. At the end of the allotted 30 minutes we still had not found the correct regular expressions to solve this problem. Funnily enough, neither had the winning team -- they solved the problem by creating an anonymous inner class that parsed the input using a StringTokenizer. Hardly what the organizers had in mind, but it worked and earned them 20 points. I just wish we had thought of it! :)
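
I don't remember the exact input format of that task, so the delimiters below are made up, but the gist of the winning team's workaround looks something like this: tokenize the input with StringTokenizer instead of matching it with a regular expression.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.StringTokenizer;

    public class TokenizerWorkaround {

        // Splits a delimited line into its fields without any regular
        // expressions, in the spirit of the "just tokenize it" workaround.
        public static List<String> fields(String line) {
            List<String> result = new ArrayList<String>();
            StringTokenizer tokenizer = new StringTokenizer(line, ";,");
            while (tokenizer.hasMoreTokens()) {
                result.add(tokenizer.nextToken().trim());
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(fields("foo; bar, baz")); // prints [foo, bar, baz]
        }
    }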

Obviously, this debacle caused us to drop to the number two spot. Worse, the team in the #3 position did not compete in the first task, so they could pass us if they performed well in the task we would not compete in. This was the fifth task: performing MD5 hash calculations to find suspect employees. We did well (finishing first again) but that did not score us any points since we weren't competing. So now we were in the #3 position.
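
Again, I don't recall the exact spec of that task, but the MD5 part boils down to Java's standard MessageDigest API; the md5Hex helper and the sample input below are illustrative only, not the code we submitted.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class Md5Example {

        // Computes the MD5 digest of a string and returns it as lowercase hex.
        public static String md5Hex(String input) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws NoSuchAlgorithmException {
            System.out.println(md5Hex("some suspect record")); // 32 hex characters
        }
    }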

The last official assignment, AaiRobot, was a funny one. The idea was to guide a robot through a maze to its home location. There was very little information to go on, though: only a bump sensor, a light sensor (to sense the color of the tile the robot was on) and a test to see whether or not this was the destination square. There was no indication of how far the robot was from the final square.

Also, the last test case required the robot to walk along a corridor and, halfway through, turn left without any indication. After attempting to come up with a generic algorithm, we realized a different approach was needed and, through trial and error, we determined the exact location at which to turn left (based on the total distance travelled by the robot). It worked for the test case, so we submitted the result.
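
The contest API is long gone, so the Robot interface and the step count below are entirely hypothetical, but our hack amounted to something like this: count the steps travelled and turn left at a hard-coded position that happened to match the test case.

    public class CorridorHack {

        // Hypothetical stand-in for the contest's robot API.
        interface Robot {
            boolean isHome();   // are we on the destination square?
            void move();        // move one tile forward
            void turnLeft();    // rotate 90 degrees left
        }

        // Found by trial and error; only valid for that one test case.
        private static final int TURN_AFTER_STEPS = 7;

        public static void walkHome(Robot robot) {
            int steps = 0;
            while (!robot.isHome()) {
                if (steps == TURN_AFTER_STEPS) {
                    robot.turnLeft(); // the hard-coded turn halfway down the corridor
                }
                robot.move();
                steps++;
            }
        }
    }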

Even though we gave it our best shot, we never recovered from the regexp task. The robot task was a weird one and I'm not sure I like it. In the end, we did not submit a correct algorithm but a program that manages to complete the test cases. A small change, such as moving the destination square in the final test case one square to the left, would break our "solution".

All in all, the contest was a great experience and a lot of fun. We'll be sure to participate again next year. And you better believe we will have studied up on regular expressions by then! ;-)

Team Inspiring 42 a.k.a. Erik Rozendaal / Martin van Vliet