As engineers and programmers, it’s fun to design, build, and program things! To justify the time invested in these activities, the results need to be high quality and satisfy a real-world need. Testing is essential to verify the system satisfies the real-world goals and requirements.
This article describes why developing a comprehensive Acceptance Test Plan (and then rigorously applying it) is important to overall project success.
An Acceptance Test Plan provides many significant benefits to the project process.
Improves System Quality
Perhaps the most obvious benefit of an Acceptance Test Plan is that it provides a systematic, trackable, structured, and unified approach to performing system verification and validation.
Saves Calendar Time
Time is saved by fully defining test cases up front, as the Acceptance Test Plan is formulated, instead of doing so in-line (and thus on the critical project path) during actual commissioning activities. This parallelization minimizes the total calendar time required for testing.
Reduces Engineering Time Spent on Testing
Writing a good Acceptance Test Plan takes time, but performing those tests takes even MORE time. Developing a complete Acceptance Test Plan in advance allows the full team to review and refine the plan prior to starting commissioning. This facilitates consensus on what to test and how it should be tested, making commissioning time more focused and productive.
The Plan Gets Utilized Twice for a Dual Benefit
The Acceptance Test Plan can be applied both during the upfront development-level or “simulated” testing phases, and during final commissioning of the completed system. Thus, many of the benefits of an effective test plan are magnified. Although some aspects of the Acceptance Test Plan won't apply to the preliminary testing stage, there is still a significant ROI across the subsequent stages.
Guides and Supports System Development
Good test plans inherently flush out any ambiguities, gaps, or misunderstandings relating to system requirements. Applying these corrections sooner saves time and money.
Establishes a Clear, Concrete Milestone for Completion
A good Acceptance Test Plan defines successful completion of the project, or at least of the current phase, and avoids ambiguity or discrepancies about what it means for the system to be “working the way it needs to work.” Under the guidance of a good Acceptance Test Plan, all members of the team (both Client personnel and DMC) are striving toward a mutually understood goal.
Characteristics of a Good Acceptance Test Plan
As a point of reference to everyone involved in the project, DMC suggests that a good Acceptance Test Plan should exhibit the following characteristics:
- Once it's successfully executed, the Client is confident that the new system is properly doing its job, and that it will meet business needs.
- The Acceptance Test Plan should clearly define all test cases needed to accomplish the above goal. Each such case should:
- Define the relevant input parameters and setup (both hardware and software) that need to be in place to start the test.
- Define the process to execute that test case in clear, concrete, and actionable terms. This makes testing activities repeatable and self-documenting. Later, someone can look back and see a pass/fail result and exactly how it was created.
- Define the expected results and/or pass/fail criteria for that test in clear, unambiguous, quantifiable terms. Tests should be broken down to a level where the answer to "Did it do what it was supposed to do?" is a very clear YES or NO.
- Bad: The Configuration GUI screen operates as expected.
- Good: When the “Save As” button is pressed, the operator is prompted with a “Browse” dialog, allowing them to select a file path and file name.
- Good: When "OK" is pressed on this Browse dialog, the program saves the parameter file in an INI format in the specified file path location with the specified file name. The parameter file shall contain all previously entered parameter values.
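The "Good" examples above can be captured as a concrete, repeatable automated test. The sketch below is illustrative only: the `save_parameters` function is a hypothetical stand-in for the "Save As" behavior, and its name, parameters, and INI layout are assumptions rather than part of any real system.

```python
import configparser
import tempfile
from pathlib import Path

def save_parameters(params: dict, file_path: Path) -> None:
    """Hypothetical 'Save As' handler: persists parameters to an INI file."""
    config = configparser.ConfigParser()
    config["Parameters"] = {key: str(value) for key, value in params.items()}
    with open(file_path, "w") as f:
        config.write(f)

def test_save_as_writes_ini_with_all_parameters():
    # Setup: define the input parameters and the file path the operator chose.
    params = {"speed": 120, "mode": "auto"}
    target = Path(tempfile.mkdtemp()) / "settings.ini"

    # Execute: simulate pressing "OK" on the Browse dialog.
    save_parameters(params, target)

    # Expected results (clear pass/fail criteria): the file exists at the
    # specified path, is valid INI, and contains all entered parameter values.
    assert target.exists()
    saved = configparser.ConfigParser()
    saved.read(target)
    assert saved["Parameters"]["speed"] == "120"
    assert saved["Parameters"]["mode"] == "auto"

test_save_as_writes_ini_with_all_parameters()
print("PASS")
```

Note how the test body mirrors the three bullets above: setup, concrete execution steps, and a quantifiable pass/fail check, so any team member can re-run it identically later.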
Return on Upfront Time Investment
The following questions commonly arise when considering Acceptance Test Plan development:
Is it worth it to spend this time to define how the system needs to be tested, and exactly how each test will be conducted?
Aren't we smart enough to figure this out as we go along?
When developing complex, customized, and mission-critical systems, omitting rigorous system testing from the project process is NOT an option. Doing so may detract from the quality of the final system and risk unacceptable failures and issues.
It is true that defining the full process for each test case is no small task and will take some time. It's critical to keep in mind that, to perform any commissioning tests, the work to define tests MUST HAPPEN EVENTUALLY.
We have two options for when to figure out these details:
- During acceptance testing - In this case, all work happens in-line with actual commissioning/testing activities. It is on the project critical path and adds time to overall project schedule.
- Before acceptance testing/commissioning - Include this information in a pre-defined Acceptance Test Plan. In this case, all time happens in PARALLEL to other critical path development activities, and IN ADVANCE OF actual commissioning activities. This effort is parallelized, off the critical path, and has limited impact on overall project schedule.
Another time-saving benefit is that, once documented in an Acceptance Test Plan, tests can be reliably repeated. If something fails once (which is going to happen many times in a complex system), having a good Acceptance Test Plan means that, after making corrective modifications to the system, any person will be able to conduct the same test in the same way it was originally done.
Moreover, this can be done without needing to re-formulate the steps and without requiring that the exact same people conduct the test. An Acceptance Test Plan adds structure, documentation, and repeatability to the test process.
Who Should Write the Acceptance Test Plan?
The generation of the Acceptance Test Plan should be a collaborative effort among all team members. Certain team members know specific aspects of the system better than others, and all members of the team are invested and share responsibility in making sure the system works properly and meets Client needs.
DMC suggests that the Client team is best suited to have primary responsibility for creating the Acceptance Test Plan.
Why is this the case?
The Acceptance Test Plan will be the primary way for the Client to exercise "quality assurance" over the new system and make sure it does the job it needs to do. When push comes to shove, "the job it needs to do" is a matter of the Client's business needs. Thus, the definition of how to check whether the system is doing its job (i.e. the Acceptance Test Plan) is most meaningful if it comes from the relevant stakeholders and end users on the Client's team.
Additionally, having Client stakeholders develop test cases creates a system of “checks and balances.” A supporting purpose of an Acceptance Test Plan is to verify that the developers correctly interpreted each requirement. For example, if DMC writes a test case for a particular requirement, it's inherently based on our interpretation of that requirement, which will naturally align with how we implemented it. The system is likely to pass that test case whether or not our interpretation is accurate.
Therefore, it's generally not best practice for developers to create the final test cases for their own code. Test cases are best driven by the "end users" of the given system, which enables the Client to act in an oversight capacity for system quality assurance.
Requirements vs. Test Cases
A common view on test strategy is that testing should consist of going through the original requirements document and checking that each requirement is satisfied. This makes sense, because a good Acceptance Test Plan should essentially be a set of test cases that provide 100% coverage of the system requirements. However, a requirements document is not an Acceptance Test Plan. There is a fundamental difference between a requirement (a statement of what a system shall be capable of) and a test case (a statement of the conditions to establish, steps to follow, and criteria to evaluate to determine whether a system meets a requirement), even though the two are closely correlated.
DMC recommends that one SHOULD be referencing the system requirements documentation while drafting the Acceptance Test Plan. We should be translating and expanding each requirement into a test case (or multiple test cases) that effectively checks whether the system is meeting that requirement. If we do this for each system requirement, we end up with a good Acceptance Test Plan that has 100% coverage of the requirements.
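As an illustration of expanding a single requirement into multiple test cases, the sketch below uses a hypothetical requirement (REQ-017, invented for this example) and a stand-in `accept_setpoint` function for the system under test; none of these names come from a real specification.

```python
# Hypothetical requirement (illustrative wording, not from a real spec):
#   REQ-017: "The system shall reject temperature setpoints outside 10-90 C."

def accept_setpoint(value_c: float) -> bool:
    """Stand-in for the system under test: accepts setpoints in 10-90 C."""
    return 10.0 <= value_c <= 90.0

# One requirement translates into several concrete test cases,
# each with an unambiguous YES/NO pass criterion.

def test_req_017_nominal_value_accepted():
    assert accept_setpoint(50.0) is True

def test_req_017_boundary_values_accepted():
    assert accept_setpoint(10.0) is True   # lower boundary
    assert accept_setpoint(90.0) is True   # upper boundary

def test_req_017_out_of_range_rejected():
    assert accept_setpoint(9.9) is False
    assert accept_setpoint(90.1) is False

for test in (test_req_017_nominal_value_accepted,
             test_req_017_boundary_values_accepted,
             test_req_017_out_of_range_rejected):
    test()
print("3 test cases pass")
```

Doing this for every requirement yields exactly the 100%-coverage Acceptance Test Plan described above, while keeping each requirement and its test cases traceably linked.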
Best Practices for Commissioning Plans and Creating Test Cases
Below are a few useful reference articles that describe some of the considerations and best practices for defining good Test Cases:
- Top 13 Tips for Writing Effective Test Cases for Any Application
- Test Case Best Practices
- Software Testing Best Practices
Learn more about DMC's solution offering.