NIH | National Cancer Institute | NCI Wiki  



Introduction

Background

Include Page
seminfra:Knowledge Repository Project Background

Scope

This Test Plan prescribes the scope, approach, resources, and schedule of the testing activities. It identifies the items being tested, features to be tested, testing tasks to be performed, personnel responsible for each task, and risks associated with this plan.

The scope of testing on the project is limited to the requirements identified in the project's Knowledge Repository Requirements Specification. The project is divided into four phases (Inception, Elaboration, Construction, and Transition), each consisting of one-month iterations. Requirements for separate functional areas are determined at the start of each iteration. Consequently, the specifics of this Test Plan and the test data will be determined as requirements are added to the Software Requirements Specification (SRS).

Resources

Include Page
seminfra:Knowledge Repository Team

Related Documents

Include Page
seminfra:Knowledge Repository Documentation Table

Software Test Strategy

Objectives

The Knowledge Repository project will deliver a production system that is fully functional with respect to the requirements. The overall objective of this test plan is to provide unit, integration, and quality assurance testing for the whole of the delivered software. Unit testing is done during code development to verify correct function of source code modules and to perform regression tests when code is refactored. Integration tests verify that the modules function together when combined in the production system. User acceptance testing verifies that software requirements and business value have been achieved.
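As an illustration of the unit-testing level described above, the sketch below shows a self-contained test for a hypothetical lookup module. The `TermLookup` class and its method names are assumptions made for this example, not actual Knowledge Repository code; the project's real tests would follow the same pattern (exercise a module, compare against expected results, rerun after refactoring) against its own modules.

```java
// Illustrative only: a minimal unit test for a hypothetical lookup module.
// Class and method names are assumptions for the sketch, not project code.
import java.util.HashMap;
import java.util.Map;

class TermLookup {
    private final Map<String, String> terms = new HashMap<String, String>();

    void add(String code, String label) {
        terms.put(code, label);
    }

    // Returns the label for a code, or "UNKNOWN" when the code is absent.
    String labelFor(String code) {
        String label = terms.get(code);
        return label != null ? label : "UNKNOWN";
    }
}

public class TermLookupTest {
    // Returns true when every check passes; a real suite would use a
    // framework such as JUnit rather than hand-rolled boolean checks.
    public static boolean runAll() {
        TermLookup lookup = new TermLookup();
        lookup.add("C1234", "Carcinoma");
        boolean ok = "Carcinoma".equals(lookup.labelFor("C1234"));
        // Regression-style check: a missing code must not throw.
        ok &= "UNKNOWN".equals(lookup.labelFor("C9999"));
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(runAll() ? "PASS" : "FAIL");
    }
}
```

Rerunning such tests after refactoring is what gives the regression coverage mentioned above.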

Approach

The testing approach is to convert the use cases described in the use case document into a number of automated unit and integration tests to ensure the software conforms to the requirements. The following proposes the approach for testing the Knowledge Repository:

...

The purpose of load and scalability testing is to ensure the application maintains acceptable response times during peak usage. Stress testing is a subset of load testing used to determine the stability of a given system; it involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. SemanticBits has extensive experience using tools such as InfraRED for profiling and diagnosing problems associated with the non-functional aspects of a system. We have successfully used this tool for C3PR and caAERS, as well as to understand the performance implications of the COPPA NCI Enterprise Services.
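The kind of load measurement described above can be sketched with nothing more than the JDK's concurrency utilities. The driver below is illustrative only (dedicated tools such as the InfraRED profiler mentioned above do far more); the operation under load is a stand-in `Runnable` rather than a real service call, and all names are assumptions for the sketch.

```java
// Illustrative load-test driver: fires `requests` concurrent invocations of
// an operation across a fixed thread pool and reports the worst latency.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoadDriver {
    // Runs the operation `requests` times on `threads` workers and returns
    // the maximum observed latency in nanoseconds.
    public static long maxLatencyNanos(final Runnable op, int threads, int requests) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> timings = new ArrayList<Future<Long>>();
        for (int i = 0; i < requests; i++) {
            timings.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long start = System.nanoTime();
                    op.run(); // the operation under load
                    return System.nanoTime() - start;
                }
            }));
        }
        long worst = 0;
        try {
            for (Future<Long> t : timings) {
                worst = Math.max(worst, t.get());
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return worst;
    }

    public static void main(String[] args) {
        // Stand-in operation: trivial computation instead of a service call.
        long worst = maxLatencyNanos(new Runnable() {
            public void run() { Math.sqrt(42.0); }
        }, 8, 100);
        System.out.println("worst latency (ns): " + worst);
    }
}
```

Raising the thread and request counts until latency degrades is the "testing beyond operational capacity" that stress testing calls for.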

Description of Functionality

See the following documents:

Dependencies and Assumptions

Java Programming Language: the Knowledge Repository is developed in the Java programming language, using the Java 6 SDK. Integration tests and other tools and utilities will be written in Ruby, Groovy, or other languages appropriate to the testing environment, as these languages provide features not available in Java.

...

Web Browser: User acceptance testing and integration testing will target the Internet Explorer 6.x/7.x and Firefox 2.x web browsers.

General Criteria for Success

Criteria for overall success are a 100% pass rate for all automated unit tests and satisfactory results for the majority of manual tests. The focus in Phase I will be on automated testing; the focus in Phase II will be on manual user acceptance testing and performance testing.

Readiness Criteria

Tests will be ready to be executed when the following criteria have been met:

...

  • Source code for individual modules is available and runnable
  • The tests are written
  • Dependent services are deployed

Pass/Fail Criteria

The following criteria will be employed for determining the success of individual tests:

  • Appropriate data returned: equality comparison of results to locally cached data
  • Performance: documentation of performance in time and subjective determination that performance is acceptable for the complexity of the operation
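The "appropriate data returned" criterion above can be sketched as a straightforward equality check between the system's results and a locally cached expectation. The map-based result shape and all names below are assumptions for the example; real tests would compare whatever result structures the queried service actually returns.

```java
// Illustrative pass/fail check: compare results from the system under test
// against locally cached expected data. A test passes only on exact match.
import java.util.Map;
import java.util.TreeMap;

public class ResultComparison {
    // Pass when the actual results exactly equal the cached expectation.
    public static boolean matchesCache(Map<String, String> actual,
                                       Map<String, String> cached) {
        return actual.equals(cached);
    }

    public static void main(String[] args) {
        Map<String, String> cached = new TreeMap<String, String>();
        cached.put("C1234", "Carcinoma");

        Map<String, String> actual = new TreeMap<String, String>();
        actual.put("C1234", "Carcinoma");

        System.out.println(matchesCache(actual, cached) ? "PASS" : "FAIL");
    }
}
```

In practice the cached data would be recorded fixtures checked into the test suite, so the comparison is repeatable across runs.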

Completion Criteria

The criteria for completion of the testing procedures are that the system produces the output desired by the user within expected performance requirements. Testing is considered complete when:

  • The assigned test scripts have been executed.
  • Defects and discrepancies are documented, resolved, verified, or designated as future changes.

Acceptance Criteria

For user acceptance testing, a range of bug severities will be employed such that a severity can be assigned to the outcome of each test case. For example, a tester could assign a rating of acceptable, acceptable with issues, or unacceptable. For unit, system, and integration testing, acceptance is determined by the automated test completing successfully.

When testing is complete, the software is acceptable when the test manager and project manager determine that existing unresolved issues are documented and within subjective tolerance. Both user acceptance testing and automated system/integration/unit tests will be taken into consideration.

Software Test Environment

Subsequent sections describe the software test environment at each intended test site.

General Environment

The Test Environment is a stable area for independent system and integration testing by the Test Team. This area consists of objects as they are completed by developers and meet the requirements for promotion. This environment ensures that objects are tested with the latest stable versions of other objects that may also be under development. The test environment is initially populated with the latest operational application and then updated with new and changed objects from the development environment.

...

  • Continuous build machine: TBD
  • Test deployment machine: TBD

Other Materials

None.

Participating Organizations

The testing group consists of the project's Test Manager and Tester(s). The groups listed below are responsible for the respective types of testing:

  • Unit Testing: Development team members from SemanticBits will be responsible for conducting the unit tests.
  • Integration Testing: Development team members from SemanticBits will be responsible for conducting the integration tests.
  • User Acceptance Testing: The QA team from NCI will perform User Acceptance Tests.

Test Schedules

The Test Manager will coordinate with the Project Manager and add the planned testing activities to the master project schedule. Refer to the project SDP and schedule for additional information.

Risks

Include Page
seminfra:Knowledge Repository Risk Matrix