Because our webpages were written with AngularJS, Protractor was the natural choice for our end-to-end test suite. But how do you verify web service state from Protractor?
Concordion is a framework that supports Behaviour Driven Development. It uses JUnit to run tests, and HTML enriched with a little Concordion syntax to call fixture methods and make assertions on the test outcome. I won't describe Concordion itself, because it is well documented at http://concordion.org/.
Instead I'll describe a small utility class I created to avoid code duplication. Concordion normally requires a JUnit class for each test; the utility described below lets you run all Concordion tests without writing a separate JUnit class for each one.
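To give a feel for what such a utility has to do, here is a minimal sketch of its discovery step, using only the JDK: walk a resource directory, find every Concordion spec (an `.html` file), and derive the name of the fixture class Concordion would pair with it. The directory layout and the `<SpecName>Fixture` naming convention are assumptions for illustration, not necessarily what the original utility used.

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

// Sketch of the discovery step: locate Concordion specs and map each one to
// the fixture class name a runner would need to execute it.
public class ConcordionSpecScanner {

    public static List<String> findFixtureClassNames(Path specRoot) throws IOException {
        List<String> fixtures = new ArrayList<>();
        try (Stream<Path> paths = Files.walk(specRoot)) {
            paths.filter(p -> p.toString().endsWith(".html"))
                 .forEach(p -> {
                     // e.g. com/example/Login.html -> com.example.LoginFixture
                     Path rel = specRoot.relativize(p);
                     String dotted = rel.toString()
                             .replace(FileSystems.getDefault().getSeparator(), ".")
                             .replaceAll("\\.html$", "");
                     fixtures.add(dotted + "Fixture");
                 });
        }
        return fixtures;
    }
}
```

A single JUnit entry point could then iterate over these names and run each fixture, instead of maintaining one near-identical test class per spec.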
FitNesse is an acceptance testing framework. It lets business users, testers and developers collaborate on executable specifications (for example in BDD style, or implementing Specification by Example), and supports testing both the back-end and the front-end. Besides partly automating acceptance testing and helping to build a common understanding between developers and business users, a selection of tests from a FitNesse suite often doubles as a regression test suite.
In a previous project I had to compete against the established experts when trying to introduce a sensible test automation approach.
The reasoning against working with suitable tools was mainly based on fear of change, unwillingness to automate, and the entrenched idea that when you do automate, the only way to overcome problems is with expensive licenses.
There are several good reasons why you should automate, but most important is that the team is confident about the quality of the product being developed and gets reliable and fast feedback when making changes.
I've found that many people care about high unit test coverage. It tells you something about how well your code is tested. Or does it?
Unit tests typically test the smallest piece of code. It is an excellent strategy to write your tests in conjunction with the production code. The tests help you shape the interfaces and help explore the problem domain.
The big question is: does the business or product owner care? What do those tests tell him (or her) about the actual functionality delivered? Fairly little, really, if anything at all. Which leads to the next question: why care about unit test coverage at all?
In an ideal continuous integration pipeline different levels of testing are involved. Individual software modules are typically validated through unit tests, whereas aggregates of software modules are validated through integration tests. When a continuous integration build tool like Jenkins is used it is natural to define different build steps, each step returning feedback and generating test reports and trend charts for a specific level of testing.
FitNesse is a lightweight testing framework meant to implement integration testing in a highly collaborative way, which makes it very suitable for agile software projects. With Jenkins and Maven it is quite easy to trigger the execution of FitNesse integration tests automatically. When properly configured and bootstrapped, Jenkins can treat the FitNesse test results much as it treats regular JUnit test results.
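As a sketch of such a build step, a `pom.xml` fragment along these lines binds integration tests to Maven's `integration-test` phase via the Failsafe plugin; Jenkins can then publish the reports from `target/failsafe-reports` alongside the Surefire unit-test reports, giving each test level its own feedback and trend chart. The `*IT.java` naming pattern, and the idea of wrapping FitNesse suites in such classes, are assumptions for illustration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- classes matching this pattern run in the integration-test phase,
         separate from the unit tests run by Surefire -->
    <includes>
      <include>**/*IT.java</include>
    </includes>
  </configuration>
</plugin>
```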
In this blog I will show a way to do performance testing with Selenium. I use Selenium for performance testing because some applications use proprietary protocols between the application layer in the browser and the server, so simply capturing the traffic between browser and server and replaying modified traffic is not that simple.
An example is testing GWT applications. In a previous blog I wrote why this is difficult.
To create a test script in Selenium, the first thing I do is record a test with Selenium IDE. After recording I export the script to JUnit 3 (Remote Control), which generates a JUnit test that can be run against the application.
The next thing you need is a way to run many JUnit test cases at the same time.
Here you see a visual representation of the whole test chain.
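The "many test cases at the same time" step can be sketched with only the JDK: an `ExecutorService` releases N concurrent copies of a scripted user action at once. In the real setup each task would invoke one of the exported Selenium JUnit tests; here the action is a placeholder `Runnable` so the sketch stays self-contained.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: fire `users` copies of a scripted test concurrently and report how
// many completed. A latch acts as a starting gun so all users begin together.
public class ParallelRunner {

    public static int runConcurrently(int users, Runnable scriptedTest)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        CountDownLatch start = new CountDownLatch(1);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    start.await();            // wait for the starting gun
                    scriptedTest.run();       // in reality: run the Selenium test
                    completed.incrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        start.countDown();                    // all simulated users begin at once
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return completed.get();
    }
}
```

In a performance test you would additionally record per-task timings (for example with `System.nanoTime()` around `scriptedTest.run()`) to derive response-time statistics under load.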
Agile is a mindset. It comes with certain behaviour and a certain culture. As with many things most people and organisations have to go through some serious change before they can actually be successful within an Agile setting. Change is hard, and it takes time. I strongly believe that it helps when you simply understand what you're trying to achieve.
'Agile' is not a buzzword or a complex management theory; it's natural behaviour for millions of people. It's not just for managers: it's for everyone, and it's easy to understand as long as you acknowledge that 'being Agile' has nothing to do with the process you follow or the tools you use. 'Being Agile' is about culture, behaviour and mindset.
This post intends to reword the Agile Manifesto in a way that makes its meaning obvious. Understanding something doesn't mean you're immediately capable of doing it, but it's a very good first step and it will help you on your way.
During the last 15 years, modern means of communication have taken a giant leap. The world is becoming smaller and smaller; working closely with colleagues around the world on a daily basis is a reality for many people. This is particularly true in offshoring IT. The benefits are clear: more qualified people and lower costs. But there are also challenges.
In this blog I will show how offshore software development can be improved by using Agile principles and best practices. I will show you how the Agile Tester and Agile Testing practice play an important role in this approach. I assume that the reader is familiar with Agile Software Development and Scrum.
A number of my colleagues and I recently decided to share our knowledge regarding "performance" on this medium. You are now reading the first blog in a series in which I present a test-driven approach to ensuring proper performance when we deliver our project.
First of all, note that "test-driven" is (or should be 😉) common in the Java coding world. It is, however, applied at the unit-test level only: one writes a unit test that shows a particular feature is not (properly) implemented yet. The test result is "red". Then one writes the code that "fixes" the test, so now the test succeeds and shows "green". Finally, one looks at the code and "refactors" it to ensure aspects like maintainability and readability are met. This software development approach is known as "test-driven development" and is sometimes also referred to as "red-green-refactor".
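The red-green-refactor cycle can be illustrated without any framework. The "test" below is written first (it fails while `discount()` is still unimplemented: "red"), then the method body is filled in until the check passes ("green"), and finally the code can be reshaped with the test as a safety net ("refactor"). The pricing rule itself is invented purely for illustration.

```java
// Minimal red-green-refactor illustration with a plain assertion instead of JUnit.
public class Pricing {

    // Step 2 ("green"): the simplest implementation that satisfies the test.
    public static double discount(double price, int quantity) {
        return quantity >= 10 ? price * 0.9 : price;
    }

    // Step 1 ("red"): this check is written before the implementation exists,
    // and initially fails.
    public static void testBulkOrdersGetTenPercentOff() {
        if (discount(100.0, 10) != 90.0)  throw new AssertionError("bulk order should get 10% off");
        if (discount(100.0, 1) != 100.0)  throw new AssertionError("small order should pay full price");
    }
}
```

The series then extends this same red-green-refactor idea beyond the unit level to performance requirements.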