Monday, July 26, 2010

Testing Glossary

A
Acceptance Testing
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Acceptance Criteria
The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE]

Accessibility Testing
Verifying that a product is accessible to people with disabilities (e.g. visual, hearing, motor, or cognitive impairments).

Ad Hoc Testing
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Alpha Testing
Testing conducted internally by the manufacturer: alpha testing takes a new product through a protocol of testing procedures to verify product functionality and capability. It is in-house testing performed by the test team, in the period before Beta Testing.

Anomaly
Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE]
Application Binary Interface (ABI)
A specification defining requirements for portability of applications in binary form across different system platforms and environments.

Application Programming Interface (API)
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Audit
An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured [IEEE]

Audit Trail
A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [ISTQB]

Automated Software Quality (ASQ)
The use of software tools, such as automated testing tools, to improve software quality.

Automated Testing
• Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
• The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Availability
The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE]

B
Backus-Naur Form
Backus–Naur Form (BNF) is a metasyntax used to express context-free grammars: that is, a formal way to describe formal languages. [wikipedia]
 
Back-to-Back Testing
Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE]

Baseline
A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [IEEE]

Basic Block
A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing
A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set
The set of tests derived using basis path testing.

Behavioral Testing
When you do behavioral testing, you specify your tests in terms of externally visible inputs, outputs, and events.

Benchmark Testing
(1) A standard against which measurements or comparisons can be made. (2) A test that is to be used to compare components or systems to each other or to a standard as in (1). [IEEE]

Beta Testing
Testing of a pre-release version of a software product, conducted by customers. Testing conducted at one or more customer sites by the end-user of a software product or system. This is usually a "friendly" user, and the testing is conducted before the system is made generally available.

Binary Portability Testing
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Blink Testing
Looking for overall patterns of unexpected changes, rather than focusing on the specifics. For example, rapidly flipping between two web pages which are expected to be the same. If they are not the same, differences stand out visibly. Or, rapidly scrolling through a large log file, looking for unusual patterns of log messages.

Bottom Up Testing
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis
In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
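
To make this concrete, here is a minimal sketch in Python; the validator accepts_age and the range 1–100 are hypothetical examples, not from any standard:

```python
# Hypothetical function under test: accepts ages in the range 1..100 inclusive.
def accepts_age(age):
    return 1 <= age <= 100

# Boundary value analysis for the valid range [1, 100]: values just outside,
# on, and just inside each boundary, plus a typical value.
boundary_cases = [
    (0, False),    # just below the lower boundary
    (1, True),     # on the lower boundary
    (2, True),     # just above the lower boundary
    (50, True),    # typical value
    (99, True),    # just below the upper boundary
    (100, True),   # on the upper boundary
    (101, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert accepts_age(value) == expected, f"failed at boundary value {value}"
print("all boundary cases passed")
```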

Branch Coverage
The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage. [ISTQB]

Breadth Testing
A test suite that exercises the full functionality of a product but does not test features in detail.

Bug
A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Bug Triage or Bug Crawl or Bug Scrub
A meeting or discussion focused on an item-by-item review of every active bug reported against the system under test. During this review, fix dates can be assigned and insignificant bugs can be deferred.

Build Verification Test (BVT) or Build Acceptance Test (BAT)
A set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. This test is generally a short set of tests, which exercise the mainstream functionality of the application software.

C
CAST
Computer Aided Software Testing.

Cause-And-Effect Diagram
A diagram used to depict and help analyze factors causing an overall effect.

Capture/Replay Tool
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph
A graphical representation of inputs (causes) and their associated outputs (effects), which can be used to design test cases.

Code Complete
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage. [ISTQB]

Code Freeze
The point in time in the development process in which no changes whatsoever are permitted to a portion or the entirety of the program's source code. [wikipedia]

Command Line Interface (CLI)
In Command line interfaces, the user provides the input by typing a command string with the computer keyboard and the system provides output by printing text on the computer monitor. [wikipedia]

Commercial Off-The-Shelf Software (COTS)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format. [ISTQB]

Compliance
The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISTQB]

Compliance Testing
The process of testing to determine the compliance of the component or system. [ISTQB]

Concurrency Testing
• Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [IEEE]
• Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Configuration Management
A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE]

Consistency
The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component. [IEEE]

Code Inspection
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding
The generation of source code.

Compatibility Testing
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component
A minimal software item for which a separate specification is available.

Component Testing
See Unit Testing.

Conformance Testing
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cross-Site Scripting
Cross site scripting (XSS) is a type of computer security exploit where information from one context, where it is not trusted, can be inserted into another context, where it is. From the trusted context, an attack can be launched. [wikipedia]

Custom Software
Software developed specifically for a set of users or customers. The opposite is Commercial Off-the-shelf Software. [ISTQB]

Cyclomatic Complexity
A measure of the logical complexity of an algorithm, used in white-box testing.
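
As a rough illustration (a sketch; the function below is hypothetical), cyclomatic complexity can be estimated as the number of binary decision points plus one:

```python
# Cyclomatic complexity ~= number of binary decision points + 1.
# This function has three decision points (the loop test, the if, and the
# elif), so its cyclomatic complexity is 4: there are four linearly
# independent paths for white-box tests to cover.
def classify_total(values):
    total = 0
    for v in values:      # decision 1: continue or exit the loop
        if v < 0:         # decision 2
            total -= v
        elif v > 100:     # decision 3
            total += 100
        else:
            total += v
    return total
```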

D
Daily Build
A development activity where a complete system is compiled and linked every day (usually overnight) so that a consistent system is available at any time including all latest changes. [ISTQB]

Data Dictionary
A database that contains definitions of all data items defined during analysis.

Data Flow Diagram
A modeling notation that represents a functional decomposition of a system.

Data Driven Testing
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
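
A minimal sketch using pytest's parametrize marker (the add function and the inlined rows are illustrative; as the definition notes, the rows would typically be maintained in an external file or spreadsheet):

```python
import pytest

# In practice these rows would be loaded from a CSV file or spreadsheet;
# they are inlined here to keep the sketch self-contained.
rows = [
    ("3", "4", 7),
    ("0", "0", 0),
    ("-2", "5", 3),
]

def add(a, b):  # hypothetical function under test
    return int(a) + int(b)

@pytest.mark.parametrize("a,b,expected", rows)
def test_add(a, b, expected):
    assert add(a, b) == expected
```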

Debugging
The process of finding and removing the causes of software failures.

Decision Coverage
The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage. [ISTQB]

Defect
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system. [ISTQB]

Defect Density
The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points). [ISTQB]

Defect Leakage Ratio (DLR)
The ratio of the number of defects which made their way undetected ("leaked") into production divided by the total number of defects.

Defect Masking
An occurrence in which one defect prevents the detection of another. [IEEE]

Defect Prevention
The activities involved in identifying defects or potential defects and preventing them from being introduced into a product. [SEI]

Defect Rejection Ratio (DRR)
The ratio of the number of defect reports which were rejected (perhaps because they were not actually bugs) divided by the total number of defects.

Defect Removal Efficiency (DRE)
The ratio of defects found during development to total defects.

Deviation
A noticeable or marked departure from the appropriate norm, plan, standard, procedure, or variable being reviewed. [SEI]

Dependency Testing
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing
A test that exercises a feature of a product in full detail.

Direct Metric
A metric that does not depend upon a measure of any other attribute. [IEEE]

Domain
The set from which valid input and/or output values can be selected. [ISTQB]

Driver
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [ISTQB]

Dynamic Testing
Testing software through executing it. See also Static Testing.

E
End User
The individual or group who will use the system for its intended operational use when it is deployed in its environment. [SEI]
 
Emulator
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Endurance Testing
Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class
A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

Equivalence Partition
A portion of an input or output for which the behavior of a component or system is assumed to be the same, based on the specification. [ISTQB]
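
For example (an illustrative sketch, with a hypothetical validator): a field accepting integers 1–100 yields three partitions, and one representative value is assumed to stand for each whole partition:

```python
# Hypothetical validator: accepts integers in the range 1..100.
def is_valid_age(age):
    return 1 <= age <= 100

# One representative value per equivalence partition.
partitions = [
    ("below range (invalid)", -5, False),
    ("within range (valid)", 42, True),
    ("above range (invalid)", 150, False),
]

for name, value, expected in partitions:
    assert is_valid_age(value) == expected, f"partition failed: {name}"
print("one representative checked per partition")
```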

Error
The occurrence of an incorrect result produced by a computer. A human action that produces an incorrect result. [IEEE]

Error Guessing
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them. [ISTQB]

Error Seeding
The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [ISTQB]

Exhaustive Testing
A test approach in which the test suite comprises all combinations of input values and preconditions. [ISTQB]

Extreme Programming (XP)
Extreme Programming (XP) is a method or approach to software engineering and the most popular of several agile software development methodologies. [wikipedia]

F
Failure
Deviation of the component or system from its expected delivery, service or result. [ISTQB]

Failure Mode
A particular way, in terms of symptoms, behaviors, or internal state changes, in which a failure manifests itself.

Failure Mode and Effect Analysis (FMEA)
A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence. [ISTQB]

Fault
An incorrect step, process, or data definition in a computer program. [IEEE]

Fault Tolerance
The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126]

Feature Freeze
The point in time in the development process in which all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience. [wikipedia]

Fishbone Diagram
A diagram used to depict and help analyze factors causing an overall effect. Also called a Cause-And-Effect Diagram or Ishikawa Diagram.

Fuzz Testing
A software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example by crashing or by failing built-in code assertions), the defects can be noted. [wikipedia]
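
A toy sketch of the idea in Python (the parse_pair function is hypothetical; real fuzzers are far more sophisticated about input generation and crash triage):

```python
import random
import string

# Hypothetical function under test: parses a "key=value" string.
def parse_pair(text):
    key, value = text.split("=", 1)  # ValueError if "=" is absent
    return key.strip(), value.strip()

# Throw random strings ("fuzz") at the parser and note unexpected failures.
random.seed(0)
for _ in range(1000):
    fuzz = "".join(random.choice(string.printable) for _ in range(20))
    try:
        parse_pair(fuzz)
    except ValueError:
        pass  # a documented failure mode, not a defect
    except Exception as exc:  # anything else would be worth a bug report
        print(f"unexpected {type(exc).__name__} on input {fuzz!r}")
```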
 
Functional Decomposition
A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification
A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing
See also Black Box Testing.
• Testing the features and operational behavior of a product to ensure they correspond to its specifications.
• Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

G
Gap Analysis
An assessment of the difference between what is required or desired, and what actually exists.

General Availability (GA)
The phase in which the product is complete, and has been manufactured in sufficient quantity such that it is ready to be purchased by all the anticipated customers.

Glass Box Testing
A synonym for White Box Testing.

Gorilla Testing
Heavily testing one particular module or piece of functionality.

Graphical User Interface (GUI)
Graphical user interfaces (GUIs) accept input via devices such as computer keyboard and mouse and provide graphical output on the computer monitor. [wikipedia]

Gray Box Testing
A combination of Black Box and White Box testing. Testing software against its specification but also using some knowledge of its internal workings.

H
Happy Path
A default scenario that features no exceptional or error conditions. A well-defined test case that uses known input, that executes without exception and that produces an expected output. [wikipedia]
 
High Order Tests
Black-box tests conducted once the software has been integrated.

I
Independent Test Group (ITG)
A group of people whose primary responsibility is software testing.

Incident
An operational event that is not part of the normal operation of a system. It will have an impact on the system, although this may be slight or transparent to the users. [wikipedia]

Incident Report
A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [IEEE 829]

Incremental Testing
A disciplined method of testing the interfaces between unit-tested programs as well as between system components. Two types of incremental testing are often mentioned: Top-down and Bottom up.

Inspection
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

Integration Testing
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing
Confirms that the application under test installs, upgrades, and uninstalls correctly on its target environments, including under adverse conditions such as insufficient disk space, and that it functions properly after installation.

L
Link Rot
The process by which links on a website gradually become irrelevant or broken as time goes on, because websites that they link to disappear, change their content or redirect to new locations. [Wikipedia]

Load Testing
See Performance Testing.

Localization Testing
Testing that verifies a product has been correctly adapted for a specific locality or locale, e.g. translated text, date and number formats, and other regional conventions.

Loop Testing
A white box testing technique that exercises program loops.

M
Metric
• A quantitative measure of the degree to which a system, component, or process possesses a given attribute. [IEEE]
• A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

Memory Leak
A particular type of unintentional memory consumption by a computer program where the program fails to release memory when no longer needed. This condition is normally the result of a bug in a program that prevents it from freeing up memory that it no longer needs. [Wikipedia]
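
A Python-flavored sketch of a leak-like pattern (illustrative only): a module-level cache that only ever grows retains memory for the life of the process.

```python
# A cache with no eviction policy: entries are added but never released,
# so memory use grows for the life of the process -- a leak in effect.
_cache = {}

def render_page(page_id, user):
    key = (page_id, user)
    if key not in _cache:
        _cache[key] = f"<html>page {page_id} for {user}</html>"
    return _cache[key]

for i in range(1000):
    render_page(1, f"user{i}")
print(len(_cache))  # 1000 entries and counting, none ever freed
```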

Method
A reasonably complete set of rules and criteria that establish a precise and repeatable way of performing a task and arriving at a desired result. [SEI]

Methodology
A collection of methods, procedures, and standards that defines an integrated synthesis of engineering approaches to the development of a product. [SEI]

Monkey Testing
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

Milestone
A scheduled event for which some individual is accountable and that is used to measure progress. [SEI]

Mutation Testing
Testing in which defects ("mutations") are purposely added to the application, in order to assess whether the existing test suite detects them.
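
A hand-rolled sketch of the idea (real mutation testing tools automate this): flip one operator and check whether the test suite notices.

```python
# Function under test and a deliberately small test suite.
def max_of(a, b):
    return a if a > b else b

def suite_passes(fn):
    return fn(2, 1) == 2 and fn(0, 0) == 0

# Mutant: ">" flipped to "<". If the suite still passed against the
# mutant, the mutant would "survive", revealing a gap in the tests.
def max_of_mutant(a, b):
    return a if a < b else b

print("original passes:", suite_passes(max_of))           # True
print("mutant killed:", not suite_passes(max_of_mutant))  # True
```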

N
Negative Testing
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

N+1 Testing
A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Non-functional Testing
Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability. [ISTQB]

O
Oracle
A source to determine expected results to compare with the actual result of the software under test. [ISTQB]

P
Path Testing
Testing in which all paths in the program source code are tested at least once.

Pareto Analysis
The analysis of defects by ranking causes from most significant to least significant. Pareto analysis is based on the principle, named after the 19th-century economist Vilfredo Pareto, that most effects come from relatively few causes, i.e. 80% of the effects come from 20% of the possible causes. [SEI]

Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]

Positive Testing
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

Q
Quality Assurance
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control
The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Factor
A management-oriented attribute of software that contributes to its quality. [IEEE]

Quality Management
That aspect of the overall management function that determines and implements the quality policy.

Quality Policy
The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

R
Race Condition
A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism to moderate simultaneous access.
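
A classic sketch in Python: two threads perform an unsynchronized read-modify-write on a shared counter, so increments can be lost on some runs.

```python
import threading

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write with no lock: not atomic

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; without a lock, some runs may print less.
print(counter)
```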

Ramp Testing
Continuously raising an input signal until the system breaks down.

Recovery Testing
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Recoverability
The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISTQB]

Regression Testing
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Reliability
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISTQB]

Re-testing
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions. [ISTQB]

Risk
Possibility of suffering loss. [SEI]

Robustness
The degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE]

S
Sanity Testing
Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

Scalability Testing
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing
• A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
• A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices. [ISTQB]

Soak Testing
• Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
• Testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. [wikipedia]

Software Requirements Specification
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing
A set of activities conducted with the intent of finding errors in software.

Static Analysis
• Analysis of a program carried out without executing the program.
• Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts. [ISTQB]

Static Analyzer
A tool that carries out static analysis.

Static Testing
Testing of a component or system at specification or implementation level without execution of the software, e.g. reviews or static analysis. See also Dynamic Testing.

Storage Testing
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing
Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

Stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [IEEE]
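
A minimal sketch (all names hypothetical): a stub stands in for a payment gateway so the component that calls it can be tested in isolation.

```python
# Component under test: depends on a gateway object passed in.
def checkout(cart_total, gateway):
    if gateway.charge(cart_total):
        return "order confirmed"
    return "payment declined"

# Stub: a special-purpose replacement for the real payment gateway,
# returning canned answers instead of calling a live service.
class GatewayStub:
    def __init__(self, will_succeed):
        self.will_succeed = will_succeed

    def charge(self, amount):
        return self.will_succeed

assert checkout(9.99, GatewayStub(True)) == "order confirmed"
assert checkout(9.99, GatewayStub(False)) == "payment declined"
```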

System Testing
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Sunny-Day Testing
Positive tests: tests used to demonstrate that the system works correctly. See also Positive Testing.

T
Test
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. [IEEE]

Testing
• The process of exercising software to verify that it satisfies specified requirements and to detect errors.
• The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
• The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Bed
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case
• Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement(s) being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
• A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Design Specification
A document detailing test conditions and the expected results as well as test pass criteria. [IEEE]

Test-Driven Development
Test-driven development (TDD) is a programming technique heavily emphasized in Extreme Programming. Essentially the technique involves writing your tests first then implementing the code to make them pass. The goal of TDD is to achieve rapid feedback and implements the "illustrate the main line" approach to constructing a program. [wikipedia]
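
The rhythm in miniature (a sketch; slugify is a made-up example): the test is written first and fails, then just enough code is written to make it pass.

```python
import unittest

# Written second: just enough implementation to turn the test green.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Written first: this test failed until slugify was implemented.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```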

Test Driver
A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness
A test environment comprised of stubs and drivers needed to execute a test. [ISTQB]

Test Incident Report
A document detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. [IEEE]

Test Item Transmittal Report
A document reporting on when tested software components have progressed from one stage of testing to the next. [IEEE]

Test Log
A document recording which test cases were run, who ran them, in what order, and whether each test passed or failed. [IEEE]

Test Procedure Specification
A document detailing how to run each test, including any set-up preconditions and the steps that need to be followed. [IEEE]

Test Summary Report
A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from the Incident Reports. [IEEE]

Test Tool
A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [ISTQB]

Test Validity
The degree to which a test accomplishes its specified goal.

Testability
• The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
• Normally the term "testability" refers to the ease or cost of testing, or the ease of testing with the tools and processes currently in use. A feature might be more testable if you have all the right systems in place and plenty of time; it might be less testable because you have reached a deadline and have run out of time and/or money. Sometimes the term refers to requirements, where it is used as a measure of clarity, so that you can know whether the test of a requirement passes or fails. "The UI must be intuitive and fast" may not be very testable without knowing what is meant by "intuitive" and how you would measure "fast enough".
• The degree to which a software artifact (i.e. a software system, software module, requirements or design document) supports testing in a given test context. Testability is not an intrinsic property of a software artifact and cannot be measured directly (such as software size). Instead, testability is an extrinsic property which results from the interdependency of the software to be tested and the test goals, test methods used, and test resources (i.e., the test context). A lower degree of testability results in increased test effort; in extreme cases a lack of testability may prevent parts of the software or software requirements from being tested at all. [wikipedia]

Tester
A skilled professional who is involved in the testing of a component or system. [ISTQB]

Test Plan
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure
A document providing detailed instructions for the execution of one or more test cases.

Test Scenario
Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Script
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Thread Testing
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management
A company commitment to develop a process that achieves a high-quality product and customer satisfaction.

Traceability Matrix
A document showing the relationship between Test Requirements and Test Cases.

U
Usability Testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions. [ISTQB]

Use Case
In software engineering, a use case is a technique for capturing the potential requirements of a new system or software change. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by software developers and end users. [wikipedia]

User Acceptance Testing (UAT)
A formal product evaluation performed by a customer as a condition of purchase. Formal testing of a new computer system by prospective users. This is carried out to determine whether the software satisfies its acceptance criteria and should be accepted by the customer. User acceptance testing (UAT) is one of the final stages of a software project and will often be performed before a new system is accepted by the customer. [wikipedia]

User Interface (UI)
The user interface (also known as Human Computer Interface or Man-Machine Interface (MMI)) is the aggregate of means by which people interact with the system. The user interface provides means of input and output. [wikipedia]

User Interface Freeze
The point in time in the development process in which no changes whatsoever are permitted to the user interface. Stability of the UI is often necessary for creating Help documents, screenshots, marketing materials, etc.

Unit Testing
Testing of individual software components.
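
A minimal sketch using Python's built-in unittest module, exercising one small component (the word_count function is illustrative) in isolation:

```python
import unittest

# The unit under test: one small, isolated function.
def word_count(text):
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```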

V
Validation
The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

W
Walkthrough
A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing
Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Z
Zero Bug Bounce (ZBB)
The first moment in a project where all features are complete and every work item is resolved. This moment rarely lasts very long. Often within an hour, a new issue arises.
