Introduction

Background

Scope

This Test Plan prescribes the scope, approach, resources, and schedule of the testing activities. It identifies the items being tested, features to be tested, testing tasks to be performed, personnel responsible for each task, and risks associated with this plan.

The scope of testing on the project is limited to the requirements identified in the project's Knowledge Repository Requirements Specification. The project has been broken into four phases (Inception, Elaboration, Construction, and Transition), each consisting of one-month iterations. Requirements for separate functional areas are determined at the start of each iteration. The impact on this Test Plan is that its specifics and the associated test data will be determined as requirements are added to the SRS.

Resources

Related Documents

Software Test Strategy

Objectives

The Knowledge Repository will result in a production system that is fully functional with respect to the requirements. The overall objective of this test plan is to provide unit, integration, and user acceptance testing for the whole of the delivered software. Unit testing is done during code development to verify the correct functioning of source code modules and to perform regression tests when code is refactored. Integration tests verify that the modules function together when combined in the production system. User acceptance testing verifies that the software requirements and business value have been achieved.

Approach

The testing approach is to convert the use cases described in the use case document into a number of automated unit and integration tests to ensure the software conforms to the requirements. The following proposes the approach for testing the Knowledge Repository:

Some of the practices that the SemanticBits team will adopt are:

Unit Testing

A unit test is used to test classes and other elements as programmers build them. JUnit and HttpUnit are two good frameworks for unit testing. We define and track tests using a testing dashboard called Hudson. Every time code is changed, it is automatically and continuously built and tested. Compilation or test failures are immediately reported to the development team for resolution. Except during periods of major refactoring, the build/test should not remain broken for more than 4 hours. Hudson also generates regular reports on test coverage by lines of code, paths of execution, package, and component. The goal for test coverage is 95% of the code, and the development team strives toward this goal in every iteration. When the coverage goal is not met, dedicated time in the next iteration is set aside to complete test coverage.
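
As an illustration of the kind of unit test Hudson builds and runs continuously, the following sketch uses JUnit; the KnowledgeItem class, its accessors, and its Status constant are hypothetical examples rather than actual Knowledge Repository code.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical unit test for an illustrative KnowledgeItem domain class.
    public class KnowledgeItemTest {

        @Test
        public void titleIsTrimmedBeforeItIsStored() {
            KnowledgeItem item = new KnowledgeItem();
            item.setTitle("  Protocol Template  ");
            assertEquals("Protocol Template", item.getTitle());
        }

        @Test
        public void newItemsStartInDraftStatus() {
            KnowledgeItem item = new KnowledgeItem();
            assertEquals(KnowledgeItem.Status.DRAFT, item.getStatus());
        }
    }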

For testing components that deal with database interactions (create, read, update, and delete operations), it is important to use a fresh database so that existing data does not compromise the integrity of the test results. For certain components it is also imperative to have a specific set of data already in the repository. To achieve this repository-level isolation from the actual component at test time, we use DBUnit, which not only meets all of the criteria above but also provides an automated, developer-friendly way of building the prerequisite repository.
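
The sketch below shows this DBUnit pattern in use, assuming a local PostgreSQL test database and a seed data-set file; the JDBC URL, credentials, and file path are placeholders.

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSet;
    import org.dbunit.operation.DatabaseOperation;
    import org.junit.After;
    import org.junit.Before;

    // Each test starts from the same known repository state: CLEAN_INSERT clears
    // the tables referenced in the data set and inserts the seed rows.
    public class RepositoryDaoDbTest {

        private IDatabaseConnection dbUnitConnection;

        @Before
        public void loadSeedData() throws Exception {
            Connection jdbc = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/kr_test", "test", "test");  // placeholder URL/credentials
            dbUnitConnection = new DatabaseConnection(jdbc);
            IDataSet seed = new FlatXmlDataSet(
                    new FileInputStream("src/test/resources/repository-seed.xml"));  // placeholder file
            DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, seed);
        }

        @After
        public void releaseConnection() throws Exception {
            dbUnitConnection.close();
        }

        // ... tests exercising the DAO's create/read/update/delete operations go here ...
    }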

Integration and System Testing

The purpose of Integration and System Testing is to detect any inconsistencies between the software units that are integrated, called assemblages, or between any of the assemblages and the hardware. SemanticBits follows what is commonly known as an 'umbrella' approach to integration/system testing, which requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in a bottom-up pattern. The outputs for each function are then integrated in a top-down manner. The primary advantage of this approach is the degree of support for early release of limited functionality, which aligns best with an incremental, agile approach. One scenario where the benefit of the umbrella approach is immediately apparent is in testing web components, which require mock HTTP inputs for elegant MVC integration testing. Using Spring's mock objects and the umbrella approach, SemanticBits has efficiently written integration and system test cases for the web layer in several web applications, including C3PR, caAERS, and PSC. Furthermore, SemanticBits has a long history of providing comprehensive automated tests. We were the original developers of the caGrid testing infrastructure, which provides a mechanism to build, configure, deploy, and test entire caGrid components. We regularly apply these approaches to all of our development projects.
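
The following sketch illustrates the web-layer testing style described above, using Spring's mock HTTP request and response objects; SearchController and its view name are hypothetical stand-ins, not actual Knowledge Repository classes.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.springframework.mock.web.MockHttpServletRequest;
    import org.springframework.mock.web.MockHttpServletResponse;
    import org.springframework.web.servlet.ModelAndView;

    // The controller is exercised directly with mock HTTP inputs, so no servlet
    // container or real HTTP traffic is required for MVC integration testing.
    public class SearchControllerTest {

        @Test
        public void searchRequestRendersResultsView() throws Exception {
            MockHttpServletRequest request = new MockHttpServletRequest("GET", "/search");
            request.setParameter("query", "adverse event");
            MockHttpServletResponse response = new MockHttpServletResponse();

            SearchController controller = new SearchController();   // hypothetical controller
            ModelAndView mav = controller.handleRequest(request, response);

            assertEquals("searchResults", mav.getViewName());
        }
    }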

Integration testing validates the integration of components within or across systems. These tests can and should be automated; they require no external code dependencies, though they may require non-code dependencies (such as a database). A typical integration test pattern is to automatically deploy components, initialize them, test them, and tear them down. We will apply integration tests specifically to the services, where we will follow the pattern above. For example, we will be able to build, configure, deploy, and cross-test the MDR, Model, and Knowledge Management services together in an automated way. This type of integration testing is necessary to ensure that these critical systems can work together appropriately.
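
A sketch of that deploy/initialize/test/tear-down pattern is shown below; ServiceDeployer and the service client class are hypothetical placeholders standing in for whatever deployment and client utilities are used, not an existing API.

    import static org.junit.Assert.assertNotNull;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    // Deploy, initialize, test, tear down: the services are stood up automatically,
    // cross-tested, and removed again at the end of the run.
    public class MdrKnowledgeManagementIntegrationTest {

        private ServiceDeployer deployer;   // hypothetical deployment helper

        @Before
        public void deployServices() throws Exception {
            deployer = new ServiceDeployer();
            deployer.deploy("mdr-service");
            deployer.deploy("knowledge-management-service");
        }

        @Test
        public void knowledgeManagementResolvesMetadataRegisteredInMdr() throws Exception {
            KnowledgeManagementClient client =
                    new KnowledgeManagementClient(deployer.urlOf("knowledge-management-service"));
            assertNotNull(client.lookupMetadata("example-data-element"));   // hypothetical client call
        }

        @After
        public void tearDownServices() throws Exception {
            deployer.undeployAll();
        }
    }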

System testing validates deployed systems. These tests can and should be automated, though they require that the systems (code) be manually deployed. These tests often run end-user workflows that can be automated on existing (sometimes production) systems. We can provide system tests for the deployed services and web applications. This will be especially important starting in iteration 7, when SemanticBits will begin deploying against the NCI tiers.
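
As one example of the kind of automated system test that can run against a deployed instance, the sketch below uses HttpUnit to exercise a simple end-user request; the URL is a placeholder for the tier deployment under test.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import com.meterware.httpunit.GetMethodWebRequest;
    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebRequest;
    import com.meterware.httpunit.WebResponse;

    // Runs against an already-deployed system; nothing is built or deployed by the test itself.
    public class DeployedWebApplicationSystemTest {

        @Test
        public void homePageIsServedByTheDeployedTier() throws Exception {
            WebConversation conversation = new WebConversation();
            WebRequest request = new GetMethodWebRequest("http://example-tier/kr/");   // placeholder URL
            WebResponse response = conversation.getResponse(request);
            assertEquals(200, response.getResponseCode());
        }
    }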

Non-functional Testing

SemanticBits has in-depth experience developing and applying non-functional tests, such as performance, load, scalability, and stress testing. These tests fall into the same category and are closely related to one another. Performance testing addresses a system's speed under a particular workload, which can be measured in terms of response time (the time between the initial request and the response).
Load is a measure of system usage: a server is said to experience high load when its supported application is heavily trafficked. Scalability testing checks whether an application's response time increases linearly as load increases, and whether it can process increasing volume by adding hardware resources in a linear (not exponential) fashion. This type of testing has two forms:

The purpose of load and scalability testing is to ensure that the application will have a good response time during peak usage. Stress testing is a subset of load testing that is used to determine the stability of a given system or entity; it involves testing beyond operational capacity, often to a breaking point, in order to observe the results. SemanticBits has extensive experience using tools such as InfraRED for profiling and diagnosing problems associated with the non-functional aspects of a system. We have successfully used this tool for C3PR and caAERS, as well as for understanding the performance implications of the COPPA NCI Enterprise Services.
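
A minimal sketch of a response-time measurement of the kind described above is given below; a real performance or load run would use a dedicated tool and many concurrent users, and the target URL and request count are placeholders.

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Issues repeated requests against a target URL and reports the average
    // response time (the time between the initial request and the response).
    public class ResponseTimeProbe {

        public static void main(String[] args) throws Exception {
            URL target = new URL("http://localhost:8080/kr/search?query=test");   // placeholder
            int requests = 50;
            long totalMillis = 0;

            for (int i = 0; i < requests; i++) {
                long start = System.currentTimeMillis();
                HttpURLConnection connection = (HttpURLConnection) target.openConnection();
                connection.getResponseCode();    // block until the response arrives
                connection.disconnect();
                totalMillis += System.currentTimeMillis() - start;
            }

            System.out.println("Average response time: " + (totalMillis / requests) + " ms");
        }
    }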

Description of Functionality

See the following documents:

Dependencies and Assumptions

Java Programming Language: the Knowledge Repository is developed in the Java programming language. The Java 6 SDK is being used for development. Integration tests and other tools and utilities will be written in Ruby, Groovy, or other languages appropriate to the testing environment. These languages provide some features that are not available in Java.

Application Server: The Knowledge Repository implementation requires a Java application server. Apache Tomcat and the Globus container will be used for development and testing.

Relational Database: The backend database targets both Postgres and Oracle relational databases. Unit tests will be run against both target databases.

Web Browser: User acceptance testing and integration testing will target the Internet Explorer 6.x/7.x and Firefox 2.x web browsers.

General Criteria for Success

Criteria for overall success are 100% success of all automated unit tests and satisfactory completion of the majority of manual tests. The focus in phase I will be on automated testing; the focus in phase II will be on manual user acceptance testing and performance testing.

Readiness Criteria

Tests will be ready to be written when the following criteria have been met:

Tests will be ready to be run when:

Pass/Fail Criteria

The following criteria will be employed for determining the success of individual tests:

Completion Criteria

The criterion for completion of the testing procedures is that the system produces the output desired by the user within the expected performance requirements. Testing is considered complete when:

Acceptance Criteria

For user acceptance testing, a range of severities will be employed so that a severity can be assigned to the outcome of each test case. For example, a tester could assign acceptable, acceptable with issues, or unacceptable. For unit, system, and integration testing, acceptance is determined by the automated test completing successfully.

When testing is complete, the software is acceptable when the Test Manager and Project Manager determine that any unresolved issues are documented and, in their judgment, tolerable. Both user acceptance testing and the automated system/integration/unit tests will be taken into consideration.

Software Test Environment - General Environment

The subsequent sections describe the software test environment at each intended test site.

The Test Environment: The Test Environment is a stable area for independent system and integration testing by the Test Team. This area consists of objects as they are completed by developers and meet the requirements for promotion. The environment ensures that objects are tested with the latest stable versions of other objects that may also be under development. The test environment is initially populated with the latest operational application and then updated with new or changed objects from the development environment.

The Acceptance Testing Environment: The acceptance-testing environment provides a near-production environment for the client acceptance testing. The release is delivered by the SCM group and managed by the client.

Software Items

Hardware and Firmware Items

Other Materials

None.

Participating Organizations

The testing group consists of the project's Test Manager and Tester(s). The groups listed below are responsible for the respective types of testing:

Test Schedules

The Test Manager will coordinate with the Project Manager and add the planned testing activities to the master project schedule. Refer to the project SDP and schedule for additional information.

Risks