QA&TEST 2011 Conference Impression

Cirilo Wortel

Last week I joined the QA&TEST conference in the beautiful town of Bilbao. In this post I’ll give an impression of some of the presentations I attended and the ideas I picked up. The most valuable sessions for me were “Pushing the Boundaries of User Experience” by Julian Harty and “Automated Reliability Testing via hardware interfaces” by Bryan Bakker. Read about them in more detail below.

I was fortunate enough to be invited to present at the 10th edition of QA&TEST. The conference focuses on QA and testing for embedded systems, an area I know little about. Still, when I first started testing in the late nineties, I was lucky to work with a gentleman who had just lost his job at Fokker (the airplane manufacturer had gone bankrupt). At Fokker he used to test instruments used in airplanes, like altitude meters, speedometers and other fine machinery. He knew very little about computers and struggled with what was, to me, basic stuff (rookie though I was myself), but he was also one of the most structured and detail-critical people I have ever worked with. Besides loads of test techniques and a sense of responsibility for my work, what I learned from that experience is that testing is testing, and the same rules apply in most fields. So after contemplating the above I happily accepted the invitation. What I experienced during the conference is that testing embedded systems is still software testing and, luckily for me, Agile methodologies and test automation can be applied in this field just the same.

Automated Reliability Testing via hardware interfaces by Bryan Bakker

How test automation can successfully be applied was highlighted by fellow speaker Bryan Bakker, who gave a great presentation of a case study. His team was asked to improve the reliability of a system and, in an act of rebellion, kept increasing test automation rather than adding new functionality. This resulted in (if I remember the figures correctly) savings of over 1.2 million euros in damage that bugs would otherwise have caused in production. The result was so spectacular that they were granted extra budget to continue their work, but also to apply the approach in other projects. An interesting takeaway from his presentation, which seems rather specific to embedded systems but might apply in other areas as well, was a smart test scheduler the team had introduced. The product under test was a medical X-ray device whose engine would overheat under certain circumstances; it took hours to cool down again, which made it impossible to continue the test run. Whenever this was detected, the scheduler would automatically switch to other types of tests that were less dependent on the engine and could run until the engine had cooled down. A simple but effective time-saver coming from a pragmatic mind.
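
As an illustration only, here is a minimal Java sketch of such a temperature-aware scheduler. All names (EngineMonitor, TestCase) are hypothetical; Bakker did not show his implementation at this level of detail:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of a temperature-aware test scheduler.
// EngineMonitor and TestCase are invented names for this illustration.
class TestScheduler {

    interface TestCase {
        boolean needsEngine(); // does this test drive the X-ray engine?
        void run();
    }

    interface EngineMonitor {
        boolean isOverheated(); // true while the engine is cooling down
    }

    private final Queue<TestCase> engineTests = new ArrayDeque<>();
    private final Queue<TestCase> otherTests = new ArrayDeque<>();
    private final EngineMonitor monitor;

    TestScheduler(EngineMonitor monitor) {
        this.monitor = monitor;
    }

    void add(TestCase test) {
        (test.needsEngine() ? engineTests : otherTests).add(test);
    }

    // Prefer engine tests, but fall back to engine-independent tests
    // while the engine cools down, so no test time is wasted.
    void runAll() throws InterruptedException {
        while (!engineTests.isEmpty() || !otherTests.isEmpty()) {
            if (monitor.isOverheated() && !otherTests.isEmpty()) {
                otherTests.poll().run();  // engine cooling: run something else
            } else if (!engineTests.isEmpty()) {
                while (monitor.isOverheated()) {
                    Thread.sleep(60_000); // only engine tests left: wait
                }
                engineTests.poll().run();
            } else {
                otherTests.poll().run();
            }
        }
    }
}
```

The point is the fallback, not the bookkeeping: no test time is lost while the hardware recovers.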

Continuous Quality Improvement using Root Cause Analysis by Ben Linders

A session I unfortunately did not actually attend, but had an interesting discussion about with the presenter, was the closing presentation of the conference by Ben Linders, called “Continuous Quality Improvement using Root Cause Analysis”. He claims that a team can accurately predict the number of bugs it is going to make during a sprint, and he has developed a method to help reduce this number using root cause analysis. I found this a fascinating and somewhat controversial idea; I have yet to meet a developer (especially in Agile projects) who will admit, let alone predict, that they are making bugs. But as I understood it, the method works in a similar way to predicting velocity: you get more accurate as you go, using historic data from previous sprints.

Runaway Test Automation Projects by Michael Stahl

The presentation by Michael Stahl, titled “Runaway Test Automation Projects”, seemed less relevant to Agile environments (at least the ones I have worked in), but he pointed out many valid risks that exist when doing test automation. The main takeaway for the audience, in my opinion, was that test automation should be treated as “regular” software: creating unit tests, applying quality standards and using version control. Things that in my experience sound like common sense, but are nevertheless very true.

Pushing the Boundaries of User Experience by Julian Harty

The presentation that was probably of the most added value to me was called “Pushing the Boundaries of User Experience”, by eBay’s Julian Harty. He had an enlightening story on automated user experience testing. With crawlers like Crawljax, the dynamic (Ajax) behavior of a website can be analyzed. With static analysis, accessibility issues can be found, something that can be very important if you want to comply with the WCAG guidelines and that hardly ever gets proper attention. Accessibility is often seen as something that facilitates a minority, but from a development point of view it also helps to improve the testability of the product. What makes accessibility testing feasible from a business perspective is that it helps search engine optimization, which increases the visibility of the site.
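
To give a flavor of what such a static check looks like, here is a minimal sketch in Java with Selenium WebDriver that flags images with missing or duplicate alt texts, the kind of issue the live demo later exposed on the conference site. This is an illustration, not Harty’s actual web-accessibility-testing tooling, and the URL is a placeholder:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

// Illustrative only: flag images with missing or duplicate alt texts,
// one of the WCAG issues mentioned in the talk.
public class AltTextCheck {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://www.example.com/"); // placeholder URL
            List<WebElement> images = driver.findElements(By.tagName("img"));
            Set<String> seen = new HashSet<String>();
            for (WebElement img : images) {
                String alt = img.getAttribute("alt");
                if (alt == null || alt.trim().isEmpty()) {
                    System.out.println("Missing alt text: " + img.getAttribute("src"));
                } else if (!seen.add(alt)) {
                    System.out.println("Duplicate alt text '" + alt + "': "
                            + img.getAttribute("src"));
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```
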
He explained how at eBay they run automated tests to improve usability and accessibility, but also to find layout issues and browser dependencies in a cost-effective way.
Layout bugs can be found in an extremely simple but effective way using FightingLayoutBugs. Let me describe this in a little more detail.
How does this work? First you need to know which pixels belong to text; second, you need to know which pixels belong to a horizontal or vertical edge. If text pixels and edge pixels overlap, you have a layout bug. That sounds pretty simple, and it actually is!
How is text detected? All text on the page is set to a white font color and a snapshot is taken; then all text is set to black and another snapshot is taken. When the two are compared, all pixels that differ are probably text.
How are horizontal and vertical lines detected? First all text on the page is set to transparent (setting the text color is done using jQuery, by the way), so only graphics remain; a screenshot is taken. Then sequences of pixels with a certain minimal length and the same or very similar color are identified, and only those with a high contrast to the left or to the right are selected as vertical edges. The same approach applies to horizontal lines. Finally the outcome is compared with the identified text: wherever text and lines overlap there is a layout bug (which happens to be reported automatically).
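
For the curious, here is a minimal Java sketch of the text-detection step just described, using Selenium WebDriver for the screenshots and assuming jQuery is available on the page under test. The real FightingLayoutBugs library is considerably more robust than this:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;

import javax.imageio.ImageIO;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

// Illustrative sketch of the text-detection trick described above:
// render all text white, then black, and diff the two screenshots.
// Assumes jQuery is loaded on the page under test.
public class TextPixelDetector {

    static boolean[][] detectTextPixels(WebDriver driver) throws IOException {
        BufferedImage whiteText = screenshotWithTextColor(driver, "white");
        BufferedImage blackText = screenshotWithTextColor(driver, "black");

        int w = Math.min(whiteText.getWidth(), blackText.getWidth());
        int h = Math.min(whiteText.getHeight(), blackText.getHeight());
        boolean[][] isText = new boolean[w][h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                // A pixel that changes between the two renderings is text.
                isText[x][y] = whiteText.getRGB(x, y) != blackText.getRGB(x, y);
            }
        }
        return isText;
    }

    private static BufferedImage screenshotWithTextColor(WebDriver driver, String color)
            throws IOException {
        ((JavascriptExecutor) driver).executeScript(
                "jQuery('*').css('color', arguments[0]);", color);
        byte[] png = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
        return ImageIO.read(new ByteArrayInputStream(png));
    }
}
```

Edge pixels are detected analogously from a screenshot taken with transparent text, and a layout bug is reported wherever the text mask and the edge mask overlap.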

[Image: eBay's test setup]

During a live demo Julian ran some of these tools against the conference website. With web-accessibility-testing he pointed out how bad the accessibility was for people who rely on tabbing to navigate through the screen. Static analysis of the site showed that the page contained images with duplicate or missing alt texts. Stuff that seems of minor severity, but for disabled people it is essential. He concluded by revealing a security issue that caused some hilarity: he gave away the most lucrative discount code for registration. It turned out all discount codes for the conference were hardcoded in the JavaScript in the page source.

Altogether it has been a valuable learning experience, besides the fact that a visit to Bilbao alone is definitely worth the trip!

Comments

  1. Rene

    November 4, 2011 at 9:19 am

    Good stuff, thanks Cirilo!
    I need to start following Julian's blog; he seems to write quite useful things about web accessibility and layout.

  2. Ben Linders

    November 4, 2011 at 4:55 pm

    Hi Cirilo,

    Thanks for the discussion that we had, and for your interesting presentation. I really liked how you gave a demo of automated testing, really hands-on.

    Indeed, you can estimate the number of bugs that you will make as a developer, and testers can estimate how many they expect to find. The difference between those figures is the number that will appear at your customers, which you want to keep to a minimum. The approach to measure and control quality in this way is described in http://www.benlinders.com/2011/measuring-and-controlling-product-quality/

    Initially you might be way off with your estimates, but it will get better, and it doesn't have to be perfect. As long as the metrics help you to:
    - Do whatever is needed to prevent bugs
    - Try to find the bugs as early as possible

    The method has been tried with agile and non-agile projects, and has helped several customers to take better decisions on testing and on preventing defects, resulting in better quality products at lower costs.

    Best regards,
    Ben Linders

    • cirilo

      November 9, 2011 at 10:41 am

      Hi Ben,

      I have read your article with great interest and I am definitely going to give this a try. We work in an agile context and advocate zero tolerance on bugs; to improve our process and the product's "real" quality, it is vital to get feedback from production. If the bugs that actually do slip through are thoroughly analyzed, a team really has an extra powerful tool to improve their work.

      Regards,

      Cirilo
