Basic testing terminology

Posted on Mar 18 2010 - 9:38am by Raj


Acceptance Criteria

The definition of the results expected from the test cases used for acceptance testing.  The product must meet these criteria before implementation can be approved.

Acceptance Testing

(1) Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the client to determine whether or not to accept the system. (2) Formal testing conducted to enable a user, client, or other authorized entity to determine whether to accept a system or component. 

Acceptance Test Plan

Describes the steps the client will use to verify that the constructed system meets the acceptance criteria. It defines the approach to be taken for acceptance testing activities. The plan identifies the items to be tested, the test objectives, the acceptance criteria, the testing to be performed, test schedules, entry/exit criteria, staff requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.

Audit / Controls Testing

A functional type of test that verifies the adequacy and effectiveness of controls and completeness of data processing results.

Auditability

A test focus area defined as the ability to provide supporting evidence to trace processing of data.

Backup and Recovery Testing

A structural type of test that verifies the capability of the application to be restarted after a failure.

Black Box Testing

Evaluation techniques that are executed without knowledge of the program’s implementation.  The tests are based on an analysis of the specification of the component without reference to its internal workings.
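
For illustration, here is a minimal Python sketch: the test values are derived purely from the leap-year specification, with no reference to how the standard library's calendar.isleap is implemented.

```python
# Black box testing: cases come from the specification alone.
# Spec: a year is a leap year if divisible by 4, except centuries,
# unless the century is divisible by 400.
from calendar import isleap

assert isleap(2024)        # divisible by 4
assert not isleap(2023)    # not divisible by 4
assert not isleap(1900)    # century not divisible by 400
assert isleap(2000)        # century divisible by 400
```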

Branch Testing

A white box testing technique that requires each branch or decision point to be taken once.
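
As a minimal Python sketch (the classify function and its threshold are invented for the example), two test cases suffice to take each branch of the single decision once:

```python
def classify(amount):
    # One decision point with two branches.
    if amount >= 1000:
        return "large"
    return "small"

# Branch coverage: each branch of the decision is taken once.
assert classify(1500) == "large"   # true branch
assert classify(10) == "small"     # false branch
```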

Condition Testing

A white box test method that requires all decision conditions be executed once for true and once for false.
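
A minimal Python sketch (the free_shipping function is invented for the example); each atomic condition in the compound decision evaluates to both true and false across the tests:

```python
def free_shipping(total, is_member):
    # One decision containing two atomic conditions.
    return total >= 50 or is_member

# Condition coverage: each condition is true once and false once.
assert free_shipping(60, False) is True    # total >= 50 true,  is_member false
assert free_shipping(10, True) is True     # total >= 50 false, is_member true
assert free_shipping(10, False) is False   # both conditions false
```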

Conversion Testing

A functional type of test that verifies the compatibility of converted programs, data and procedures with the “old” ones that are being converted or replaced.

Data Flow Testing

Testing in which test cases are designed based on variable usage within the code.
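
A minimal Python sketch (the shipping_cost function is invented for the example); each definition of the variable cost is exercised through to its use at the return:

```python
def shipping_cost(weight, express):
    cost = weight * 2        # first definition of cost
    if express:
        cost = cost + 10     # redefinition on the express path
    return cost              # use of cost

# Data flow testing: cover each definition-use pair of cost.
assert shipping_cost(3, False) == 6    # first definition reaches the use
assert shipping_cost(3, True) == 16    # redefinition reaches the use
```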

Debugging

The process of locating, analyzing, and correcting suspected faults. Compare with testing.

Defect Management

A set of processes to manage the tracking and fixing of defects found during testing and to perform causal analysis.

Documentation and Procedures Testing

A functional type of test that verifies that the interface between the system and the people works and is usable.  It also verifies that the instruction guides are helpful and accurate.

Desk Check

Testing of software by the manual simulation of its execution.  It is one of the static testing techniques.

Detailed Test Plan

The detailed plan for a specific level of dynamic testing.  It defines what is to be tested and how it is to be tested. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. It also includes the testing tools and techniques, test environment set up, entry and exit criteria, and administrative procedures and controls.

Dynamic Testing

Testing that is carried out by executing the code.  Dynamic testing is a process of validation by exercising a work product and observing the behavior of its logic and its response to inputs.

Error

(1) A discrepancy between a computed, observed or measured value or condition and the true, specified, or theoretically correct value or condition.  (2) A human action that results in software containing a fault.  This includes omissions or misinterpretations, etc.  See Variance.

Error Guessing

A test case selection process that identifies test cases based on the knowledge and ability of the individual to anticipate probable errors.
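
For example, a tester might guess that an averaging routine was written without considering empty input.  A small Python sketch (the average function is invented for the example):

```python
def average(values):
    return sum(values) / len(values)

# Error guessing: probe the inputs experience says are likely to break.
try:
    average([])                        # empty list, a classic oversight
    raise AssertionError("expected ZeroDivisionError")
except ZeroDivisionError:
    pass

assert average([2, 4]) == 3            # nominal case for comparison
assert average([-1, 1]) == 0           # mixed signs
```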

Error Handling Testing

A functional type of test that verifies the system function for detecting and responding to exception conditions.  Completeness of error handling determines the usability of a system and ensures that incorrect transactions are properly handled.
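
A minimal Python sketch (the withdraw function and its rules are invented for the example); each exception condition is exercised and the response verified:

```python
def withdraw(balance, amount):
    # Exception conditions the system must detect and report.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Error handling tests: invalid transactions are rejected cleanly.
for bad_amount in (-5, 0, 500):
    try:
        withdraw(100, bad_amount)
        raise AssertionError("expected ValueError for %r" % bad_amount)
    except ValueError:
        pass

assert withdraw(100, 40) == 60   # a valid transaction still succeeds
```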

Execution Procedure

A sequence of manual or automated steps required to carry out part or all of a test design or execute a set of test cases.

Expected Results

Predicted output data and file conditions associated with a particular test case.  Expected results, if achieved, indicate that the test was successful.  Generated and documented with the test case prior to execution of the test.

Fault

(1) An accidental condition that causes a functional unit to fail to perform its required functions.  (2) A manifestation of an error in software.  A fault, if encountered, may cause a failure.  Synonymous with bug.

Full Lifecycle Testing

The process of verifying the consistency, completeness, and correctness of software and related work products (such as documents and processes) at each stage of the development life cycle.

Function Testing

A functional type of test that verifies that each business function operates according to the detailed requirements and the external and internal design specifications.

Functional Testing

Selecting and executing test cases based on specified function requirements without knowledge or regard of the program structure.  Also known as black box testing.  See “Black Box Testing”.

Functional Test Types

Those kinds of tests used to assure that the system meets the business requirements, including business functions, interfaces, usability, audit and controls, error handling, etc.  See also Structural Test Types.

Implementation

(1) A realization of an abstraction in more concrete terms; in particular, in terms of hardware, software, or both.  (2) The process by which a software release is installed in production and made available to end users.

Installation Testing

A functional type of test which verifies that the hardware, software and applications can be easily installed and run in the target environment.

Integration Testing

A level of dynamic testing that verifies the proper execution of application components; it does not require that the application under test interface with other applications.

Interface / Inter-system Testing

A functional type of test which verifies that the interconnection between applications and systems functions correctly.

JAD

An acronym for Joint Application Design.  Formal session(s) involving clients and developers used to develop and document consensus on work products, such as client requirements, design specifications, etc.

Level of Testing

Refers to the progression of software testing through static and dynamic testing.

Examples of static testing levels are Project Objectives Review, Requirements Walkthrough, Design (External and Internal) Review, and Code Inspection.

Examples of dynamic testing levels are: Unit Testing, Integration Testing, System Testing, Acceptance Testing, Systems Integration Testing and Operability Testing. 

Also known as a test level.

Lifecycle

The stages of the software development process: Requirements, Design, Construction (Code/Program, Test), and Implementation.

Logical Path

A path that begins at an entry or decision statement and ends at a decision statement or exit.

Maintainability

A test focus area defined as the ability to locate and fix an error in the system. Can also be the ability to make dynamic changes to the system environment without making system changes.

Master Test Plan

A plan that addresses testing from a high-level system viewpoint. It ties together all levels of testing (unit test, integration test, system test, acceptance test, systems integration, and operability).  It includes test objectives, test team organization and responsibilities, high-level schedule, test scope, test focus, test levels and types, test facility requirements, and test management procedures and controls.

Operability

A test focus area defined as the effort required (of support personnel) to learn and operate a manual or automated system. Contrast with Usability.

Operability Testing

A level of dynamic testing in which the operations of the system are validated in the real or closely simulated production environment.  This includes verification of production JCL, installation procedures and operations procedures.  Operability Testing considers such factors as performance, resource consumption, adherence to standards, etc.  Operability Testing is normally performed by Operations to assess the readiness of the system for implementation in the production environment.

Operational Testing

A structural type of test that verifies the ability of the application to operate at an acceptable level of service in a production-like environment.

Parallel Testing

A functional type of test that verifies that the same input produces the same results on the “old” and “new” systems.  It is more of an implementation strategy than a testing strategy.
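
A minimal Python sketch of the idea (both totalling routines are invented for the example, standing in for the “old” and “new” systems):

```python
def legacy_total(cents):
    total = 0
    for c in cents:            # "old" system: explicit accumulation
        total += c
    return total

def new_total(cents):
    return sum(cents)          # "new" system being phased in

# Parallel testing: identical inputs run through both systems;
# any difference in output flags a defect in the replacement.
for case in ([], [199], [100, 250, 55]):
    assert legacy_total(case) == new_total(case), case
```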

Path Testing

A white box testing technique that requires all code or logic paths to be executed once. Complete path testing is usually impractical and often uneconomical. 

Performance

A test focus area defined as the ability of the system to perform certain functions within a prescribed time.

Performance Testing

A structural type of test which verifies that the application meets the expected level of performance in a production-like environment.
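
A crude Python sketch of the idea (the workload and the half-second budget are illustrative assumptions, not figures from any real specification):

```python
import time

# Build a lookup table, then check the operation stays within budget.
table = {str(i): i for i in range(100_000)}

start = time.perf_counter()
for i in range(10_000):
    _ = table.get(str(i))
elapsed = time.perf_counter() - start

assert elapsed < 0.5, "lookups took %.3fs, over budget" % elapsed
```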

Portability

A test focus area defined as the ability of a system to operate in multiple operating environments.

Quality Plan

A document which describes the organization, activities, and project factors that have been put in place to achieve the target level of quality for all work products in the application domain.  It defines the approach to be taken when planning and tracking the quality of the application development work products to ensure conformance to specified requirements and to ensure the client’s expectations are met.

Regression Testing

A functional type of test that verifies that changes to one part of the system have not caused unintended adverse effects in other parts.
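
A minimal Python sketch (the parse_flag function and the earlier defect described in the comment are invented for the example); the test is kept in the suite and re-run after every change:

```python
def parse_flag(text):
    # Earlier versions treated "FALSE" (uppercase) as true.
    return text.strip().lower() == "true"

# Regression tests: re-run after each change so the fixed defect
# cannot silently reappear.
assert parse_flag("true") is True
assert parse_flag("FALSE") is False
assert parse_flag(" TRUE ") is True
```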

Reliability

A test focus area defined as the extent to which the system will provide the intended function without failing.

Requirement

(1) A condition or capability needed by the user to solve a problem or achieve an objective.  (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. The set of all requirements forms the basis for subsequent development of the system or system component.

Review

A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval.  

Security

A test focus area defined as the assurance that the system/data resources will be protected against accidental and/or intentional modification or misuse.

Security Testing

A structural type of test which verifies that the application provides an adequate level of protection for confidential information and data belonging to other systems.

Software Quality

(1) The totality of features and characteristics of a software product that bear on its ability to satisfy given needs; for example, conform to specifications.  (2) The degree to which software possesses a desired combination of attributes.  (3) The degree to which a customer or user perceives that software meets his or her composite expectations.  (4) The composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer.

Software Reliability

(1) The probability that software will not cause the failure of a system for a specified time under specified conditions. The probability is a function of the inputs to and use of the system as well as a function of the existence of faults in the software. The inputs to the system determine whether existing faults, if any, are encountered. (2) The ability of a program to perform a required function under stated conditions for a stated period of time.

Statement Testing

A white box testing technique that requires all code or logic statements to be executed at least once.
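
A minimal Python sketch (the apply_discount function is invented for the example); note how statement coverage can be satisfied with fewer tests than branch coverage:

```python
def apply_discount(price_cents, code):
    total = price_cents
    if code == "SAVE10":
        total = price_cents * 9 // 10   # 10% off, in integer cents
    return total

# Statement coverage: this single input executes every statement,
# including the assignment inside the if.
assert apply_discount(10_000, "SAVE10") == 9_000

# Branch testing would additionally require the decision's false outcome:
assert apply_discount(10_000, "") == 10_000
```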

Static Testing

(1) The detailed examination of a work product’s characteristics against an expected set of attributes, experiences and standards.  The product under scrutiny is static and not exercised, and therefore its behaviour to changing inputs and environments cannot be assessed.  (2) The process of evaluating a program without executing the program.  See also desk checking, inspection, walk-through.

Stress / Volume Testing

A structural type of test that verifies that the application has acceptable performance characteristics under peak load conditions.

Structural Function

Structural functions describe the technical attributes of a system.

Structural Test Types

Those kinds of tests that may be used to assure that the system is technically sound.

Systems Integration Testing

A dynamic level of testing which ensures that the systems integration activities appropriately address the integration of application subsystems, integration of applications with the infrastructure, and impact of change on the current live environment.

System Testing

A dynamic level of testing in which all the components that comprise a system are tested to verify that the system functions together as a whole.

Test Bed

(1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component.  (2) A set of test files (including databases and reference files), in a known state, used with input test data to test one or more test conditions, measuring against expected results.

Test Case

(1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.  (2) The detailed objectives, data, procedures and expected results to conduct a test or part of a test.
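
As an illustration, a test case can be recorded as data, with its objective, inputs, and expected result documented before execution (the gross_pay function and the figures are invented for the example):

```python
def gross_pay(hours, rate):
    regular = min(hours, 40) * rate
    overtime = max(hours - 40, 0) * rate * 1.5
    return regular + overtime

# A test case: objective, inputs, and expected result, fixed in advance.
test_case = {
    "id": "TC-042",
    "objective": "verify overtime pay for hours beyond 40",
    "inputs": {"hours": 45, "rate": 10},
    "expected": 475,   # 40 * 10 regular + 5 * 15 overtime
}

actual = gross_pay(**test_case["inputs"])
assert actual == test_case["expected"], (actual, test_case["expected"])
```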

Test Condition

A functional or structural attribute of an application, system, network, or component thereof to be tested. 

Test Data

The input data and file conditions associated with a specific test case.

Test Environment

The external conditions or factors that can directly or indirectly influence the execution and results of a test.  This includes the physical as well as the operational environments.  Examples of what is included in a test environment are: I/O and storage devices, data files, programs, JCL, communication lines, access control and security, databases, reference tables and files (version controlled), etc.

Test Focus Areas

Those attributes of an application that must be tested in order to assure that the business and structural requirements are satisfied.

Test Level

See Level of Testing.

Test Log

A chronological record of all relevant details of a testing activity.

Test Matrices

A collection of tables and matrices used to relate the functions to be tested to the test cases that test them.  Worksheets used to assist in the design and verification of test cases.

Test Objectives

The tangible goals for assuring that the Test Focus areas previously selected as being relevant to a particular Business or Structural Function are being validated by the test.

Test Plan

A document prescribing the approach to be taken for intended testing activities. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, entry/exit criteria, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.

Test Procedure

Detailed instructions for the setup, operation, and evaluation of results for a given test. A set of associated procedures is often combined to form a test procedures document.

Test Report

A document describing the conduct and results of the testing carried out for a system or system component.

Test Run

A dated, time-stamped execution of a set of test cases.

Test Scenario

A high-level description of how a given business or technical requirement will be tested, including the expected outcome; it is later decomposed into sets of test conditions, each of which in turn contains test cases.

Test Script

A sequence of actions that executes a test case.  Test scripts include detailed instructions for set up, execution, and evaluation of results for a given test case.

Test Set

A collection of test conditions.  Test sets are created for purposes of test execution only.  A test set is created such that its size is manageable to run and its grouping of test conditions facilitates testing.  The grouping reflects the application build strategy. 

Test Sets Matrix

A worksheet that relates the test conditions to the test set in which the condition is to be tested.  Rows list the test conditions and columns list the test sets.  A checkmark in a cell indicates the test set will be used for the corresponding test condition.

Test Specification

A set of documents that define and describe the actual test architecture, elements, approach, data and expected results.  Test Specification uses the various functional and non-functional requirement documents along with the quality and test plans.  It provides the complete set of test cases and all supporting detail to achieve the objectives documented in the detailed test plan.

Test Strategy

A high level description of major system-wide activities which collectively achieve the overall desired result as expressed by the testing objectives, given the constraints of time and money and the target level of quality.  It outlines the approach to be used to ensure that the critical attributes of the system are tested adequately. 

Test Type

See Type of Testing.

Testability

(1) The extent to which software facilitates both the establishment of test criteria and the evaluation of the software with respect to those criteria. (2) The extent to which the definition of requirements facilitates analysis of the requirements to establish test criteria.

Testing

The process of exercising or evaluating a program, product, or system, by manual or automated means, to verify that it satisfies specified requirements and to identify differences between expected and actual results.

Testware

The elements that are produced as part of the testing process.  Testware includes plans, designs, test cases, test logs, test reports, etc. 

Top-down

An approach to integration testing in which the component at the top of the component hierarchy is tested first, with lower-level components simulated by stubs.  Tested components are then used to test lower-level components.  The process is repeated until the lowest-level components have been tested.
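
A minimal Python sketch (invoice_total and the stubbed tax service are invented for the example); the top-level component is tested first while its dependency is simulated:

```python
class TaxServiceStub:
    """Stands in for a lower-level component not yet integrated."""
    def rate_for(self, region):
        return 0.25          # canned response

def invoice_total(subtotal, region, tax_service):
    # Top-level component under test.
    return subtotal * (1 + tax_service.rate_for(region))

assert invoice_total(100, "CA", TaxServiceStub()) == 125.0

# Once verified, the real tax service replaces the stub and the next
# lower level is integrated the same way.
```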

Transaction Flow Testing

A functional type of test that verifies the proper and complete processing of a transaction from the time it enters the system to the time of its completion or exit from the system.

Type of Testing

A test of a functional or structural attribute of the system, e.g., Error Handling or Usability.  (Also known as test type.)

Unit Testing

The first level of dynamic testing: the verification of new or changed code in a module to determine whether all new or modified paths function correctly.
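
A minimal Python sketch using the standard unittest module (the add_item function is invented for the example):

```python
import unittest

def add_item(cart, item):
    # The new or changed code under unit test.
    cart.append(item)
    return cart

class AddItemTests(unittest.TestCase):
    def test_adds_to_empty_cart(self):
        self.assertEqual(add_item([], "book"), ["book"])

    def test_preserves_existing_items(self):
        self.assertEqual(add_item(["pen"], "book"), ["pen", "book"])

if __name__ == "__main__":
    unittest.main()
```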

Usability

A test focus area defined as the end-user effort required to learn and use the system. Contrast with Operability.

Usability Testing

A functional type of test which verifies that the final product is user-friendly and easy to use.

User Acceptance Testing

See Acceptance Testing.

White Box Testing

Evaluation techniques that are executed with knowledge of the program’s implementation.  The objective of white box testing is to test the program’s statements, code paths, conditions, or data flow paths.
