I often work on large projects where at least some elements of the solution are bespoke for the client, whether it’s code, application configuration, administration processes or workflows.
Just designing the solution isn’t enough. As a learning solutions architect, it’s my responsibility to make sure the finished product meets the client’s requirements, is fit for purpose, and won’t break the first time someone uses it.
Having a rigorous testing process is essential to fulfilling that responsibility. It allows me to document what has been tested, by whom and when, and to map those tests back to the client’s requirements.
Every part of the solution needs to be tested, unless you are using off-the-shelf software with no special configuration, no customisation and no client-specific processes. Until your whole system can genuinely be called “tried and tested”, testing should remain a priority.
Depending on the size of your project, and the number of interlinked systems or processes, you may have a number of different stages of testing activity, such as:
Each stage will have Test Plans that link back to a Test Specification containing Test Cases. In many situations, you will need to relate your test cases to one or more Requirements Documents, which will have resulted from your discussions with the client. The Test Plans are executed for each Release, and results are collected along with appropriate evidence.
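To make the traceability idea concrete, the relationships above can be sketched as a simple data model. This is a hypothetical Python illustration of the concept, not TestLink’s actual schema; all names here are invented for the example:

```python
from dataclasses import dataclass, field


@dataclass
class Requirement:
    req_id: str
    description: str


@dataclass
class TestCase:
    case_id: str
    title: str
    covers: list  # IDs of the requirements this case traces back to


@dataclass
class TestPlan:
    name: str
    cases: list = field(default_factory=list)


def uncovered_requirements(requirements, plans):
    """Return IDs of requirements that no test case in any plan covers."""
    covered = {
        req_id
        for plan in plans
        for case in plan.cases
        for req_id in case.covers
    }
    return [r.req_id for r in requirements if r.req_id not in covered]


reqs = [
    Requirement("REQ-1", "Users can log in"),
    Requirement("REQ-2", "Reports export to CSV"),
]
plan = TestPlan("Release 1.0", [TestCase("TC-1", "Valid login", covers=["REQ-1"])])
print(uncovered_requirements(reqs, [plan]))  # → ['REQ-2']
```

The point of mapping each test case back to requirements is exactly this kind of check: at any moment you can ask which requirements have no test coverage at all.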
Again, please note that this does not just apply to software – it applies to any system or process that must work in a consistently reproducible way.
As you can imagine, managing testing across multi-disciplinary teams, with multiple systems and a suite of Requirements Documents, can be pretty complex – particularly if you need to maintain an audit trail showing what has been done, when, why and by whom.
Spreadsheets are often the first port of call for collecting the information together. However, you very quickly end up with multiple versions, used in different ways by different teams, with no easy way to trace data between them, making accurate reporting almost impossible.
The obvious solution is a multi-user, relational database that:
TestLink is such a product. It’s an open-source, PHP-based application, so it is quite simple to set up yourself if you have an available server.
As a TestLink administrator, I can set up multiple projects and assign users to each project, with different roles as appropriate.
There are then several main stages after that:
The video below introduces a couple of the key features of TestLink: test specification and test execution:
Even on its own, I’ve found using TestLink to be a great way of ensuring that the whole requirements gathering and testing process is done in a rigorous way.
Of course, that rigour is only as good as the test cases that have been written. It’s easy to fall into the trap of just writing test cases to prove that the system works as designed, without checking to see how far you can push it before it breaks.
There’s a bit of an art to writing test cases that replicate real user behaviour. Often, it’s not until you first try the whole system with real people that you find many of the bugs. At that point, it’s important to capture the steps for reproducing the bug as a test case, so you can make sure it doesn’t get missed again.
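As a concrete illustration of turning a reproduced bug into a permanent test case, here is a hypothetical pytest-style regression test. The function, the discount rule and the bug scenario are all invented for the example:

```python
# Hypothetical example: a discount calculation that once failed for
# large baskets, a bug discovered only when real users tried the system.
def apply_discount(quantity, unit_price):
    """Apply a 10% bulk discount on orders of 10 or more items."""
    total = quantity * unit_price
    if quantity >= 10:  # the original bug skipped this branch for quantity > 100
        total *= 0.9
    return round(total, 2)


def test_bulk_discount_regression():
    # Replicates the exact scenario a real user hit, so the same
    # bug cannot silently reappear in a future release.
    assert apply_discount(150, 2.00) == 270.00
```

Once a reproduction like this is recorded as a test case, it gets executed on every subsequent release along with the rest of the Test Plan, rather than relying on someone remembering to re-check it.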
My clients, if necessary, will have access to the Wyver Solutions TestLink system for use on their projects.
If you want to know more, please give me a call.
Based on an original post on Learning Conversations.
Posted: 10 August 2013