Sunday, 30 August 2015

V-MODEL

The V Model, while admittedly obscure, gives equal weight to testing rather than treating it as an afterthought.

Initially defined by the late Paul Rook in the late 1980s, the V was included in the U.K.'s National Computing Centre publications in the 1990s with the aim of improving the efficiency and effectiveness of software development. It's accepted in Europe and the U.K. as a superior alternative to the waterfall model; yet in the U.S., the V Model is often mistaken for the waterfall.

The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side.
Several testing strategies are available and lead to the following generic characteristics:
1) Testing begins at the unit level and works "outward" toward the integration of the entire system.
2) Different testing techniques are appropriate at different points in the software development cycle.
Testing is divided into five phases as follows:
a) Unit Testing
b) Integration Testing
c) Regression Testing
d) System Testing
e) Acceptance Testing
The context of unit and integration testing changes significantly in Object Oriented (OO) projects. For OO projects, class integration testing based on sequence diagrams, state-transition diagrams, class specifications, and collaboration diagrams forms the unit and integration testing phase. For Web applications, class integration testing verifies the integration of the classes that implement a given piece of functionality.

The meaning of system testing and acceptance testing, however, remains the same in the OO and Web application context. The test case design for system and acceptance testing nevertheless needs to handle the OO-specific intricacies.


Relation Between Development and Testing Phases
Testing is planned right from the URD stage of the SDLC. The following table indicates the planning of testing at the respective stages. For projects with a tailored SDLC, the testing activities are likewise tailored according to the requirements and applicability.


The "V" Diagram indicating this relationship is as follows

[Figure: V-Model diagram, pairing each development phase on the left with its corresponding test phase on the right]
DRE (Defect Removal Efficiency) = A / (A + B), where A is the number of defects found by the testing team and B is the number of defects found by the customer during maintenance. For example, if testing finds 90 defects and the customer later finds 10 more, DRE = 90 / (90 + 10) = 0.90, i.e., 90%.
Refinement of the V-Model
To decrease cost and time complexity in the development process, small and medium-sized companies follow a refined form of the V-Model.

[Figure: refined V-Model]
Software Testing Phases
1. Unit Testing
As per the "V" diagram of SDLC, testing begins with Unit testing. Unit testing makes heavy use of White Box testing techniques, exercising specific paths in a unit�s control structure to ensure complete coverage and maximum error detection.
Unit testing focuses verification effort on the smallest unit of software design - the unit. The units are identified at the detailed design phase of the software development life cycle, and the unit testing can be conducted parallel for multiple units. Five aspects are tested under Unit testing considerations:
  • The module interface is tested to ensure that information properly flows into and out of the program unit under test. 
  • The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.

  • Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

  • All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.

  • And finally, all error-handling paths are tested.
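
A minimal sketch of the boundary and error-handling checks in practice (the function under test, clamp_percent, and its limits are invented for illustration):

    # Hypothetical unit under test: clamps a percentage into [0, 100].
    def clamp_percent(value):
        if not isinstance(value, (int, float)):
            raise TypeError("value must be numeric")  # error-handling path
        return max(0, min(100, value))

    # Boundary conditions: exactly at and just beyond the limits.
    assert clamp_percent(0) == 0
    assert clamp_percent(100) == 100
    assert clamp_percent(-1) == 0
    assert clamp_percent(101) == 100

    # Error-handling path: invalid input must raise, not pass silently.
    try:
        clamp_percent("high")
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError for non-numeric input")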


  • Unit Test Coverage Goals:
    Path Coverage:
    The path coverage technique verifies that each possible path through each function has executed properly. A path is a sequence of branches along one possible flow of control. Since loops introduce an unbounded number of paths, the path coverage technique employs tests that consider only a limited number of looping possibilities.

    Statement Coverage
    The statement coverage technique requires that every statement in the program be invoked at least once. It verifies coverage at a coarser level than decision execution or Boolean expressions. The advantage is that this measure can be applied directly to object code and does not require processing the source code.

    Decision (Logic/Branch) Coverage
    The decision coverage technique seeks to identify the percentage of all possible decision outcomes that have been exercised by a suite of test procedures. It requires that every point of entry and exit in the program be invoked at least once, and that every possible outcome of each decision in the program be exercised at least once.

    Condition Coverage
    This technique seeks to verify the true and false outcomes of each Boolean sub-expression. It employs tests that exercise the sub-expressions independently.

    Multiple-Condition Coverage:
    This technique covers the interrelated conditions within a decision: every combination of the individual condition outcomes is exercised at least once.
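
    The differences between these criteria can be seen on a small example (the function below is invented purely for illustration):

        # Hypothetical function with one decision built from two conditions.
        def classify(a, b):
            result = "none"
            if a > 0 and b > 0:  # decision with two sub-conditions
                result = "both positive"
            return result

        # Statement coverage: a single test that executes every statement.
        assert classify(1, 1) == "both positive"

        # Decision (branch) coverage also needs the false outcome of the if.
        assert classify(-1, 1) == "none"

        # Condition coverage needs each sub-condition seen both true and false:
        # (1, 1) and (1, -1) flip b > 0; (1, 1) and (-1, 1) flip a > 0.
        assert classify(1, -1) == "none"

        # Multiple-condition coverage needs all four (a > 0, b > 0) combinations.
        assert classify(-1, -1) == "none"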
    Unit Testing (COM/DCOM Technology):
    The integral parts covered under unit testing will be:
    An Active Server Page (ASP) that invokes the ATL component (which in turn can use C++ classes); the component itself; the interaction of the component with the persistent store or database; and the database tables. The driver for unit testing a unit belonging to a particular component or subsystem depends on that component alone. Wherever a user interface is available, the UI called from a web browser will initiate the testing process. If no UI is available, appropriate drivers (for example, code written in C++) will be developed for testing.

    Unit testing would also include testing inter-unit functionality within a component. This consists of two different units belonging to the same component interacting with each other. The functionality of such units will be tested with separate unit tests.
    Each unit of functionality will be tested for the following considerations:
    Type: Type validation; for example, a field expecting alphanumeric characters should not allow user input of anything else.
    Presence: This validation ensures that all mandatory fields are present; they should also be enforced by the database by declaring the column NOT NULL (this can be verified from the low-level design document).
    Size: This validation ensures that the size of a float or variable-length character string input by the user does not exceed the size allowed by the database for the respective column.
    Validation: Any other business validation that applies to a specific field, or to a field that depends on another field (e.g., range validation: body temperature should not exceed 106 degrees Fahrenheit), duplicate checks, etc.
    GUI based: If the unit is UI based, GUI-related consistency checks such as font sizes, background color, window sizes, and message and error boxes will be performed.
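
    A minimal sketch of these field-level checks as unit tests (the validate_temperature helper and its limits are invented for illustration):

        # Hypothetical validator combining presence, type, and range checks.
        def validate_temperature(value):
            if value is None:                        # presence check
                raise ValueError("temperature is mandatory")
            if not isinstance(value, (int, float)):  # type check
                raise TypeError("temperature must be numeric")
            if not 90 <= value <= 106:               # range (business) validation
                raise ValueError("temperature out of range")
            return value

        def expect_error(func, *args):
            # Helper: True when the call raises a validation error.
            try:
                func(*args)
            except (TypeError, ValueError):
                return True
            return False

        assert validate_temperature(98.6) == 98.6         # valid input passes
        assert expect_error(validate_temperature, None)   # presence violation
        assert expect_error(validate_temperature, "hot")  # type violation
        assert expect_error(validate_temperature, 200)    # range violation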
2. Integration Testing
After unit testing, modules shall be assembled or integrated to form the complete software package as indicated by the high-level design. Integration testing is a systematic technique for verifying the software structure and sequence of execution while conducting tests to uncover errors associated with interfacing.
Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. Integration testing is sub-divided as follows:
i) Top-Down Integration Testing: Top-down integration is an incremental approach to the construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner, with stubs standing in for subordinate modules that have not yet been integrated (a brief sketch of stubs and drivers appears at the end of this subsection).
ii) Bottom-Up Integration Testing: Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Since modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available and the need for stubs is eliminated; drivers, however, are needed to invoke the modules under test.
iii) Integration Testing for OO projects:
Thread-Based Testing
Thread-based testing follows an execution thread through objects to ensure that classes collaborate correctly.
In thread-based testing:
• the set of classes required to respond to one input or event to the system is identified;
• each thread is integrated and tested individually;
• regression testing is applied to ensure that no side effects occur.
Use-Based Testing
Use-based testing evaluates the system in layers. The common practice is to employ the use cases to drive the validation process.
In use-based testing:
• independent classes (i.e., classes that use very few other classes) are integrated and tested first;
• the dependent classes that use the independent classes are then integrated and tested, layer by layer;
• this sequence is repeated, adding and testing the next layer of dependent classes, until the entire system is tested.
Integration Testing for Web applications:
Collaboration diagrams, screens, and report layouts are matched against the OOAD models, and the associated class integration test case report is generated.
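
As a sketch of the stub and driver roles mentioned above (all module and function names are invented; the real modules would be the system's own):

    # Top-down step: a stub stands in for a not-yet-integrated subordinate
    # module and returns fixed, predictable values.
    def tax_service_stub(amount):
        return 0.10 * amount

    def compute_invoice_total(amount, tax_service):
        # Higher-level module under test; its subordinate is injected.
        return amount + tax_service(amount)

    assert compute_invoice_total(100.0, tax_service_stub) == 110.0

    # Bottom-up step: a driver exercises the real atomic module directly.
    def real_tax_service(amount):
        return 0.10 * amount if amount < 1000 else 0.15 * amount

    def driver():
        assert real_tax_service(100.0) == 10.0
        assert real_tax_service(2000.0) == 300.0

    driver()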
3. Regression Testing
Each time a new module is added as part of integration testing, new data flow paths may be established, new I/O may occur, and new control logic may be invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
Regression testing may be conducted manually, by re-executing a subset of all test cases. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the change.
• Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite shall be designed to include only those tests that address one or more classes of errors in each of the major program functions; it is impractical and inefficient to re-execute every test for every program function once a change has occurred. One possible way of organising such a suite is sketched below.
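
A minimal sketch of a regression suite organised around the three classes of test cases (the individual tests are trivial placeholders, not a real system):

    import unittest

    class RepresentativeTests(unittest.TestCase):
        # Class 1: representative sample exercising all software functions.
        def test_core_workflow(self):
            self.assertEqual(1 + 1, 2)  # stands in for an end-to-end check

    class AffectedFunctionTests(unittest.TestCase):
        # Class 2: functions likely to be affected by the change.
        def test_dependent_feature(self):
            self.assertTrue(True)

    class ChangedComponentTests(unittest.TestCase):
        # Class 3: the components that were actually changed.
        def test_modified_module(self):
            self.assertTrue(True)

    def regression_suite():
        # Build the regression subset from the three classes of tests.
        suite = unittest.TestSuite()
        for case in (RepresentativeTests, AffectedFunctionTests,
                     ChangedComponentTests):
            suite.addTests(
                unittest.defaultTestLoader.loadTestsFromTestCase(case))
        return suite

    if __name__ == "__main__":
        unittest.TextTestRunner().run(regression_suite())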
4. System Testing
After the software has been integrated (constructed), sets of high-order tests shall be conducted. System testing verifies that all elements mesh properly and that the overall system function and performance are achieved.
The purpose of system testing is to fully exercise the computer-based system: to verify all system elements and to validate conformance against the SRS. System testing is categorized into a number of types, and the type(s) of testing shall be chosen depending on the customer and system requirements.
The different types of tests that come under system testing are listed below:
• Compatibility / Conversion Testing: In cases where the software developed is a plug-in into an existing system, the compatibility of the developed software with the existing system has to be tested. Likewise, the conversion procedures from the existing system to the new software are to be tested.

• Configuration Testing: Configuration testing includes either or both of the following:
  • testing the software with the different possible hardware configurations
  • testing each possible configuration of the software

  If the software itself can be configured (e.g., components of the program can be omitted or placed in separate processors), each possible configuration of the software should be tested.
  If the software supports a variety of hardware configurations (e.g., different types of I/O devices, communication lines, memory sizes), then the software should be tested with each type of hardware device and with the minimum and maximum configurations.
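
A sketch of driving a smoke test through each software configuration, assuming the pytest library (the configuration axes are invented):

    import itertools
    import pytest

    # Hypothetical configuration axes for the product under test.
    DATABASES = ["postgres", "sqlite"]
    CACHE_ENABLED = [True, False]

    @pytest.mark.parametrize(
        "db,cache", list(itertools.product(DATABASES, CACHE_ENABLED))
    )
    def test_starts_in_each_configuration(db, cache):
        # Stand-in for booting the system under the given configuration
        # and running a basic smoke check against it.
        config = {"db": db, "cache_enabled": cache}
        assert config["db"] in DATABASES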

• Documentation Testing: Documentation testing is concerned with the accuracy of the user documentation. This involves:
  i) review of the user documentation for accuracy and clarity;
  ii) testing the examples illustrated in the user documentation by preparing test cases on the basis of these examples and testing the system.
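
In Python projects, point (ii) can even be automated: the standard doctest module runs the examples embedded in the documentation and fails when the documented output no longer matches (a minimal sketch; the function is invented):

    def fahrenheit_to_celsius(f):
        """Convert a temperature, as documented for end users.

        >>> fahrenheit_to_celsius(212)
        100.0
        >>> fahrenheit_to_celsius(32)
        0.0
        """
        return (f - 32) * 5.0 / 9.0

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # executes the documented examples as tests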

• Facility Testing: Facility testing is the determination of whether each facility (or functionality) mentioned in the SRS is actually implemented. The objective is to ensure that all the functional requirements documented in the SRS are accomplished.

• Installability Testing: Certain software systems have complicated procedures for installing the system; for instance, the system generation (sysgen) process on IBM mainframes. The testing of these installation procedures is part of system testing.
  Proper packaging of the application, configuration of various third-party software, and database parameter settings are some of the issues important for easy installation.

• Performance Testing: Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all phases of testing. Even at the unit level, the performance of an individual module is assessed as white-box tests are conducted. However, performance testing is complete only when all system elements are fully integrated and the true performance of the system is ascertained against the customer requirements.

• Performance Testing for Web Applications: The most realistic strategy for rolling out a Web application is to do so in phases. Performance testing must be an integral part of designing, building, and maintaining Web applications.
  Automated testing tools play a critical role in measuring, predicting, and controlling application performance. There is a paragraph on automated tools available for testing Web Applications at the end of this document.
  In the most basic terms, the final goal for any Web application set for high-volume use is for users to consistently have:
  i) continuous availability
  ii) consistent response times, even during peak usage times.
  Performance testing has five manageable phases:
  i) architecture validation
  ii) performance benchmarking
  iii) performance regression
  iv) performance tuning and acceptance
  v) the continuous performance monitoring necessary to control performance and manage growth.
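
A deliberately simplified sketch of a response-time check (the URL, sample count, and threshold are invented; real performance testing would use a dedicated load-testing tool):

    import statistics
    import time
    from urllib.request import urlopen

    URL = "http://localhost:8080/health"  # hypothetical endpoint
    SAMPLES = 20
    THRESHOLD_SECONDS = 0.5               # invented response-time target

    def measure_once(url):
        # Time a single request round-trip.
        start = time.perf_counter()
        with urlopen(url) as response:
            response.read()
        return time.perf_counter() - start

    def run_check():
        timings = sorted(measure_once(URL) for _ in range(SAMPLES))
        p95 = timings[int(0.95 * len(timings)) - 1]  # rough 95th percentile
        print(f"median={statistics.median(timings):.3f}s p95={p95:.3f}s")
        assert p95 <= THRESHOLD_SECONDS, "95th percentile exceeds target"

    if __name__ == "__main__":
        run_check()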

• Procedure Testing: If the software forms part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested. These may include procedures to be followed by:
  i) the human operator
  ii) the database administrator
  iii) the terminal user
  These procedures are to be tested as part of system testing.

• Recovery Testing: Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialisation, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the time required to repair is evaluated to determine whether it is within acceptable limits.
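
A toy sketch of an automatic-recovery check (the checkpoint format and recovery routine are invented for illustration):

    import json
    import os
    import tempfile

    def checkpoint(state, path):
        # Persist state so it can be recovered after a forced failure.
        with open(path, "w") as f:
            json.dump(state, f)

    def recover(path):
        # Restart path: reload the last checkpoint from disk.
        with open(path) as f:
            return json.load(f)

    def test_recovery_after_forced_failure():
        path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
        checkpoint({"processed": 42}, path)
        # Simulate a crash by discarding all in-memory state, then verify
        # that the restart path restores it from the checkpoint.
        restored = recover(path)
        assert restored == {"processed": 42}

    test_recovery_after_forced_failure()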

• Reliability Testing: All of the software-testing processes have the goal of testing software reliability. The reliability testing that forms part of system testing encompasses the testing of any specific reliability factors stated explicitly in the SRS. It may not be practical to devise test cases for certain reliability factors; for example, if a system has a downtime objective of two hours or less per forty years of operation, there is no known way of testing that factor.
  However, if a reliability factor is stated as, say, a mean time to failure (MTTF) of 20 hours, it is possible to devise test cases using mathematical models.
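
For instance, under the common assumption of exponentially distributed failures, an MTTF of 20 hours implies that the probability of surviving a run of t hours is R(t) = e^(-t/20); a short sketch:

    import math

    MTTF_HOURS = 20.0  # the stated reliability requirement from the example

    def reliability(t_hours, mttf=MTTF_HOURS):
        # Exponential failure model: R(t) = exp(-t / MTTF).
        return math.exp(-t_hours / mttf)

    # Probability of running 10 hours without failure under this model.
    print(f"R(10h) = {reliability(10):.3f}")  # about 0.607

    # A test campaign could run N instances for t hours each and compare the
    # observed survival fraction with R(t) within a statistical tolerance.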
