Sunday, 30 August 2015

Project Management

Project Management: phases of software project

Answer: The different phases of a software project are as follows:
Initiation

Preparing the scope document, requirements documents, estimation chart, project plan, and proposal documents

Planning

Designing test plan and coding standards.

Execution

Writing code 

Controlling

Testing and establishing change management procedure

Closure

Closing the project and moving it to maintenance mode

WHAT IS FUNCTIONAL TESTING

What is functional testing?


In one sense, functional testing is a segment of security testing: the security mechanisms of the system are tested, under operational conditions, for correct operation.
(OR)
Functional testing verifies that the end user gets what he wants from the application. It involves testing to ensure that the tasks or steps required to complete a piece of functionality work well. Functional testing exercises the functional requirements as given in the specification.
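For instance, a functional test exercises observable behaviour against the specification rather than internal structure. The discount rule below is a hypothetical example, not taken from any real system:

```python
def calculate_discount(order_total):
    """Hypothetical business rule: 10% discount on orders of 100 or more."""
    if order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0

# Functional tests check outputs against the stated requirement,
# not how the function is implemented internally.
assert calculate_discount(100) == 10.0   # at the threshold, discount applies
assert calculate_discount(250) == 25.0   # well above the threshold
assert calculate_discount(99.99) == 0.0  # just below: no discount
```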

DIFFERENCE BETWEEN MANUAL AND AUTOMATION TESTING

Difference Between Manual Testing and Automation Testing

Manual testing is used when a test case only needs to run once or twice. Automation testing is used when a set of test cases must be executed repeatedly.
Manual testing is very useful when executing test cases for the first time, but it may not reliably catch regression defects under frequently changing requirements. Automation testing is very useful for catching regressions in a timely manner when the code changes frequently.
Manual testing is less reliable when the same test cases are executed again and again, since a human may not perform each execution with the same precision. Automated tests perform the same operations precisely each time.
Testing simultaneously on different machines with different OS/platform combinations is not possible with manual testing; separate testers would be required for such a task. Automation testing can be carried out simultaneously on different machines with different OS/platform combinations.
With manual testing, a tester needs the same amount of time for every execution of the test cases. Once automation test suites are ready, fewer testers are required to execute them.
With manual testing, no programming can be done to write sophisticated tests that fetch hidden information. With automation, testers can program complex tests that bring hidden information to light.
Manual testing is slower than automation; running tests manually can be very time-consuming. Automation runs test cases significantly faster than human testers.
Manual testing costs less than automating; the initial cost of automation is higher, but automated tests can be reused repeatedly.
It is often preferable to execute UI test cases manually; some UI test cases cannot be automated at all.
Executing Build Verification Testing (BVT) manually is very mundane and tiresome. Automation is very useful for BVT and removes that tedium.
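As a sketch of the automation side, a small data-driven suite like the one below can be re-run unchanged after every build (for example as part of Build Verification Testing); the login rule and the test data are hypothetical:

```python
# Hypothetical unit under test.
def is_valid_login(username, password):
    return bool(username) and len(password) >= 8

# Data-driven cases: (username, password, expected result).
CASES = [
    ("alice", "s3cretpass", True),
    ("alice", "short", False),
    ("", "s3cretpass", False),
]

def run_suite():
    """Execute every case identically on each run; True if all pass."""
    return all(is_valid_login(u, p) == expected for u, p, expected in CASES)
```

Because the suite is just code, it can run on any number of machines and OS combinations in parallel, with identical precision each time.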

Software Testing-Testing Requirements

Software Testing-Testing Requirements

  • Operability in Software Testing: 
    1. The better the software works, the more efficiently it can be tested.
    2. The system has few bugs (bugs add analysis and reporting overhead to the test process)
    3. No bugs block the execution of tests.
    4. The product evolves in functional stages (allows simultaneous development & testing)
  • Observability in Software Testing:
    1. What is seen is what is tested
    2. Distinct output is generated for each input
    3. System states and variables are visible or queryable during execution
    4. Past system states and variables are visible or queryable (e.g., transaction logs)
    5. All factors affecting the output are visible
    6. Incorrect output is easily identified
    7. Incorrect input is easily identified
    8. Internal errors are automatically detected through self-testing mechanism
    9. Internal errors are automatically reported
    10. Source code is accessible
  • Controllability in Software Testing:
    1. The better the software is controlled, the more the testing can be automated and optimised.
    2. All possible outputs can be generated through some combination of input in Software Testing
    3. All code is executable through some combination of input in Software Testing
    4. Software and hardware states can be controlled directly by testing
    5. Input and output formats are consistent and structured in Software Testing
    6. Tests can be conveniently specified, automated, and reproduced.
  • Decomposability in Software Testing:
    1. By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be performed.
    2. The software system is built from independent modules
    3. Software modules can be tested independently in Software Testing
  • Simplicity in Software Testing:
    1. The less there is to test, the more quickly it can be tested in Software Testing
    2. Functional simplicity
    3. Structural simplicity
    4. Code simplicity
  • Stability in Software Testing:
    1. The fewer the changes, the fewer the disruptions to testing
    2. Changes to the software are infrequent
    3. Changes to the software are controlled in Software Testing
    4. Changes to the software do not invalidate existing tests in Software Testing
    5. The software recovers well from failures in Software Testing
  • Understandability in Software Testing:
    1. The more information we have, the smarter we will test
    2. The design is well understood in Software Testing
    3. Dependencies between internal, external, and shared components are well understood.
    4. Changes to the design are communicated.
    5. Technical documentation is instantly accessible
    6. Technical documentation is well organized in Software Testing
    7. Technical documentation is specific and detailed
    8. Technical documentation is accurate

SOFTWARE TESTING LIFE CYCLE

Software Testing-Testing Life Cycles

Before going into the Testing Life Cycle, we need to know how the software is developed, i.e. its development life cycle. The Software Development Life Cycle (SDLC) is also called the Product Development Life Cycle (PDLC).

The SDLC has 6 phases:
a) Initial Phase
b) Analysis Phase
c) Design Phase
d) Coding Phase
e) Testing Phase
f) Delivery & Maintenance Phase

Now let us discuss each phase in detail:

a) Initial Phase:
(i) Gathering the Requirements:
The Business Analyst (BA) gathers information from the client using a predefined template, collecting details of what has to be developed, in how many days, and all the basic requirements of the business.
As proof of the collected information, the BA prepares a document called one of the following:
BDD - Business Development Design
BRS - Business Requirement Specification
URS - User Requirement Specification
CRS - Customer Requirement Specification
All of these are the same document under different names.

(ii) Discussing the Financial Terms and Conditions:
The Engagement Manager (EM) discusses all the financial matters.

b) Analysis Phase:
In this phase the BDD document is taken as the input, and four steps are carried out:
(i) Analysing the requirements: all the requirements are analysed and studied.
(ii) Feasibility study: assessing whether developing the project is actually possible.
(iii) Deciding the technology: deciding which technology is to be used, for example Sun or Microsoft technologies.
(iv) Estimation: estimating the resources required, for example time and number of people.
During the Analysis Phase the Project Manager prepares the PROJECT PLAN.
The output document of this phase is the Software Requirements Specification (SRS).


c) Design Phase:
Design is done at two levels:
(i) High-Level Design: the project is divided into a number of modules. High-level design is done by the Technical Manager (TM) or Chief Architect (CA).
(ii) Low-Level Design: the modules are further divided into a number of submodules.
d) Coding Phase:
In this phase the developers write the programs for the project, following the coding standards, and prepare the source code.

e) Testing Phase:
1. When the BDD is first prepared, the Test Engineer studies the document and sends a Review Report to the Business Analyst (BA).
2. A Review Report is a document the Test Engineer prepares while studying the BDD; the points he cannot understand or finds unclear are written in this report and sent to the BA.
3. The Test Engineer then writes the test cases for the application.
4. With manual testing the product can be up to about 50% defect free, while with automation testing it can be up to about 93% defect free.
5. In this phase the testing team prepares a document called the Defect Profile Document.

f) Delivery & Maintenance Phase:
1. After the project is done, a mail announcing the completion of the project is sent to the client.
2. This is called the Software Delivery Note.
3. The client then tests the project; this is called User Acceptance Testing.
4. The project is installed in the client environment and tested there; this is called Port Testing. If any problem occurs during installation, the maintenance people write a Deployment Document (DD) to the Project Manager (PM).
5. Later, if the client wants changes in the software, those changes are made by the maintenance team.

SOFTWARE TESTING METHODS

Software Testing -Testing Methods

Software testing can be performed in either of two ways:
1. Conventional: testing starts after coding.
2. Unconventional: testing is done from the initial phase onward.

Test case design for software testing is as important as the design of the software itself. All test cases shall be designed to find the maximum number of errors through their execution.
Testing methodologies are used for designing test cases. These methodologies provide the developer with a systematic approach for testing. 
Any software product can be tested in one of the two ways:
1) Knowing the specific function the product has been designed to perform, tests can be planned and conducted to demonstrate that each function is fully operational, and to find and correct the errors in it.
2) Knowing the internal working of a product, tests can be conducted to ensure that the internal operation performs according to specification and all internal components are being adequately exercised and in the process, errors if any are eliminated. 
The first test approach is called a) Black-box testing, and the second is called b) White-box testing.

The attributes of both black-box and white-box testing can be combined to provide an approach that validates the software interface and also selectively assures that internal structures of software are correct.
The black-box and white-box testing methods are applicable across all environments, architectures and applications but unique guidelines and approaches to testing are warranted in some cases. This document covers Testing GUIs, and Client/Server Architectures.

The testing methodologies applicable to test case design in different testing phases are as given below:
---------------------------------------------------------------
Type of Testing        White-box Testing    Black-box Testing
---------------------------------------------------------------
Unit Testing           Yes                  -
Integration Testing    Yes                  Yes
System Testing         -                    Yes
Acceptance Testing     -                    Yes
---------------------------------------------------------------

AGILE SCRUM METHODOLOGY

SCRUM Framework

Today, SCRUM is considered the most practical and valued Agile methodology. It is easy to use and delivers incrementally high-quality software on time and on budget. In this article we will take a quick look at the steps involved in the SCRUM framework: the Product Vision, Product Backlog, Sprint Backlog, Daily SCRUM, Sprint Burndown, and the Sprint Retrospective meeting.

Project Vision:

The goal of the project vision is to align the team around a central purpose. It is very important for the Agile SCRUM team to know what they are aiming for. Based on the vision, the Product Owner creates an ordered, prioritized wish list and breaks the requirements down into User Stories.

Product Backlog:

The Product Backlog is an ordered, prioritized list of the items that need to be included in the product. It is dynamic: during the project, items may be added to or deleted from the list. The highest-priority items are completed first.
SCRUM Framework

Sprint Backlog:

In the Sprint Planning meeting the team picks a list of user stories from the Product Backlog; the selected items move from the Product Backlog to the Sprint Backlog. The selected items are discussed, and each team member commits to completing the assigned tasks within the sprint timebox. Each user story is divided into smaller, detailed tasks. During the sprint, the team works together collaboratively to complete them.

Daily SCRUM:

The Daily SCRUM is not used as a problem-solving or issue resolution meeting. In the Daily SCRUM each of the team members should answer three questions:
  • What did you do yesterday?
  • What will you do today?
  • Are there any obstacles in your way?
In the Daily SCRUM the team shares the conflicts, obstacles, or impediments faced in their tasks, along with any possible solutions. The meeting is held daily by the Agile SCRUM team at the same time and in the same place. Ideally the Daily SCRUM is conducted in the morning, which helps plan the tasks for the whole day. Just as the sprint is time-boxed, the Daily SCRUM should be time-boxed to at most 15 minutes, and the discussion should be quick and relevant. The SCRUMMaster helps keep the team focused on its sprint goal.

A Sprint Burndown:

The Sprint Burndown measures the remaining Sprint Backlog items over the course of a sprint. It is a very effective visual indicator of the correlation between the work remaining at any point in time and the actual progress of the team. The Sprint Burndown chart should be updated before each Daily SCRUM meeting; this helps the team understand the sprint's daily progress and make adjustments if needed.
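The arithmetic behind a burndown chart is simple: start from the sprint's total estimated work and subtract what was completed each day. A minimal sketch with illustrative numbers:

```python
def burndown(total_points, completed_per_day):
    """Remaining-work series for a sprint burndown chart."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(max(0, remaining[-1] - done))
    return remaining

# A 40-point sprint over 5 days; the daily completions are made up.
chart = burndown(40, [8, 10, 6, 9, 7])
# chart -> [40, 32, 22, 16, 7, 0]: plotted against the ideal straight line,
# this shows whether the team is ahead of or behind schedule.
```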

Shippable Product:

The end of the sprint yields an increment of potentially shippable functionality that can be handed over to the customer. This functionality should consist of well-structured, well-written, thoroughly tested code, with the user operation of the functionality documented. At the end of the sprint, the features committed in the sprint are demonstrated to all stakeholders, who provide valuable feedback for moving the product in the right direction.

Sprint Retrospective:

At the end of each sprint, a Sprint Review and a Retrospective meeting should be conducted to learn what went well and what went badly in the sprint. The participants in this meeting are the team, the SCRUMMaster, and the Product Owner (as listener). The meeting is time-boxed to 2-3 hours. Each team member is asked to identify specific things that the team should:
  • Start doing
  • Stop doing
  • Continue doing
Iterations are a key feature of the SCRUM process. In the next sprint, the team again chooses a chunk of user stories from the Product Backlog, and the sprint cycle starts again with new sprint goals. These cycles continue until the Product Backlog is finished, the deadline is reached, or the budget is used up. Agile SCRUM ensures that the highest-priority tasks sit at the top of the Product Backlog, so they are completed first, before the project ends.

EQUIVALENCE CLASS PARTITIONING

Equivalence Class Partitioning

WHAT IS EQUIVALENCE PARTITIONING?
Concepts:   Equivalence partitioning is a method for deriving test cases.  In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur. 
In this method, the tester identifies various equivalence classes for partitioning.  A class is a set of input conditions that is likely to be handled the same way by the system.  If the system were to handle one case in the class erroneously, it would handle all cases in the class erroneously. 
WHY LEARN EQUIVALENCE PARTITIONING?
Equivalence partitioning significantly reduces the number of test cases required to test a system reasonably.  It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases. 
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING
To use equivalence partitioning, you will need to perform two steps 
1. Identify the equivalence classes 
2. Design test cases 
STEP 1: IDENTIFY EQUIVALENCE CLASSES
Take each input condition described in the specification and derive at least two equivalence classes for it.  One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class). 
Following are some general guidelines for identifying equivalence classes: 
a) If the requirements state that a numeric value input to the system must be within a range of values, identify one valid class (inputs within the valid range) and two invalid equivalence classes (inputs that are too low and inputs that are too high).
For example, if an item in inventory (a numeric field) can have a quantity of +1 to +999, identify the following classes:
1. One valid class: QTY is greater than or equal to +1 and less than or equal to +999, written as (1 <= QTY <= 999).
2. One invalid class: QTY is less than 1, written as (QTY < 1), i.e. 0, -1, -2, and so on.
3. One invalid class: QTY is greater than 999, written as (QTY > 999), i.e. 1000, 1001, 1002, 1003, and so on.
Invalid class        Valid class          Invalid class
0                    1                    1000
-1                   2                    1001
-2                   3                    1002
-3                   4                    1003
-4 ... and so on     5 ... up to 999      1004 ... and so on
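A minimal sketch of the QTY partition above; picking one representative value per class is enough to cover all three partitions:

```python
def qty_class(qty):
    """Classify an inventory quantity against the valid range 1..999."""
    if qty < 1:
        return "invalid-low"
    if qty > 999:
        return "invalid-high"
    return "valid"

# One representative per equivalence class covers the whole partition.
assert qty_class(0) == "invalid-low"
assert qty_class(500) == "valid"
assert qty_class(1000) == "invalid-high"
```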
b) If the requirements state that the number of items input by the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs and one invalid class where there are too many inputs. 
For example, the specification states that a maximum of 4 purchase orders can be registered against any one product. The equivalence classes are: the valid class (1 <= number of purchase orders <= 4), the invalid class (number of purchase orders > 4), and the invalid class (number of purchase orders < 1).
c) If the requirements state that a particular input item must match one of a set of values, and each case will be dealt with in the same way, identify one valid class for values in the set and one invalid class representing values outside the set.
For example, say the code accepts between 4 and 24 inputs, each a 3-digit integer:
- One partition: the number of inputs
- Classes: "x < 4", "4 <= x <= 24", "24 < x"
- Chosen values: 3, 4, 5, 14, 23, 24, 25
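The three classes and the chosen values above can be checked mechanically, as a sketch:

```python
def input_count_class(x):
    """Partition the number of inputs per the classes x<4, 4<=x<=24, 24<x."""
    if x < 4:
        return "too-few"
    if x <= 24:
        return "in-range"
    return "too-many"

chosen = [3, 4, 5, 14, 23, 24, 25]
classes = [input_count_class(x) for x in chosen]
# The boundary values 3/4 and 24/25 sit on either side of each class edge.
```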

V-MODEL

Software Testing-Testing Models-V Model

V-model
The V Model, while admittedly obscure, gives equal weight to testing rather than treating it as an afterthought.

Initially defined by the late Paul Rook in the late 1980s, the V was included in the U.K.'s National Computing Centre publications in the 1990s with the aim of improving the efficiency and effectiveness of software development. It's accepted in Europe and the U.K. as a superior alternative to the waterfall model; yet in the U.S., the V Model is often mistaken for the waterfall.

The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side.
Several testing strategies are available and lead to the following generic characteristics:
1) Testing begins at the unit level and works "outward" toward the integration of the entire system
2) Different testing techniques are appropriate at different points of S/W development cycle.
Testing is divided into five phases as follows:
a) Unit Testing
b) Integration Testing
c) Regression Testing
d) System Testing
e) Acceptance Testing
The context of Unit and Integration testing changes significantly in the Object Oriented (OO) projects. Class Integration testing based on sequence diagrams, state-transition diagrams, class specifications and collaboration diagrams forms the unit and Integration testing phase for OO projects. For Web Applications, Class integration testing identifies the integration of classes to implement certain functionality.

The meaning of system testing and acceptance testing however remains the same in the OO and Web based Applications context also. The test case design for system and acceptance testing however need to handle the OO specific intricacies.


Relation Between Development and Testing Phases
Testing is planned right from the URD stage of the SDLC. The following table indicates the planning of testing at respective stages. For projects of tailored SDLC, the testing activities are also tailored according to the requirements and applicability.


The "V" Diagram indicating this relationship is as follows

[Figure: the V-model diagram]
DRE (Defect Removal Efficiency) = A / (A + B), where A is the number of defects found by the testing team and B is the number of defects found by the customer's people during maintenance.
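As a quick numeric sketch of the DRE formula (the defect counts are invented):

```python
def defect_removal_efficiency(found_in_testing, found_by_customer):
    """DRE = A / (A + B), returned as a percentage."""
    a, b = found_in_testing, found_by_customer
    return round(100.0 * a / (a + b), 2)

# 95 defects caught by the testing team, 5 escaped to the customer:
dre = defect_removal_efficiency(95, 5)  # -> 95.0 (%)
```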
Refinement of the V-model:
To decrease cost and time complexity in the development process, small- and medium-scale companies follow a refined form of the V-model.

[Figure: the refined V-model diagram]
Software Testing Phases
1. Unit Testing
As per the "V" diagram of the SDLC, testing begins with unit testing. Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a unit's control structure to ensure complete coverage and maximum error detection.
Unit testing focuses verification effort on the smallest unit of software design - the unit. The units are identified in the detailed design phase of the software development life cycle, and unit testing can be conducted in parallel for multiple units. Five aspects are tested under unit testing considerations:
  • The module interface is tested to ensure that information properly flows into and out of the program unit under test. 
  • The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.

  • Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

  • All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.

  • And finally, all error-handling paths are tested.
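A minimal sketch of two of the aspects above, boundary conditions and error-handling paths, for a hypothetical clamp() unit:

```python
def clamp(value, low, high):
    """Hypothetical unit under test: confine value to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")  # error-handling path
    return max(low, min(value, high))

# Boundary conditions: exactly at, and just outside, the limits.
assert clamp(0, 0, 10) == 0    # at the lower boundary
assert clamp(10, 0, 10) == 10  # at the upper boundary
assert clamp(-1, 0, 10) == 0   # just below
assert clamp(11, 0, 10) == 10  # just above

# Error-handling path: an inverted range must be rejected.
try:
    clamp(5, 10, 0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for inverted range")
```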


  • Unit Test Coverage Goals:
    Path Coverage:
    Path coverage verifies whether each of the possible paths in each function has executed properly. A path is a set of branches along a possible flow of control. Since loops introduce an unbounded number of paths, the path coverage technique employs tests that consider only a limited number of looping possibilities.

    Statement Coverage
    The statement coverage technique requires that every statement in the program be invoked at least once. It verifies coverage at a high level rather than the execution of decisions or Boolean expressions. The advantage is that this measure can be applied directly to object code and does not require processing the source code.

    Decision (Logic/Branch) Coverage
    The decision coverage test technique seeks to identify the percentage of all possible decision outcomes that have been considered by a suite of test procedures. It requires that every point of entry & exit in the software program be invoked at least once. It also requires that all possible conditions for a decision in the program be exercised at least once.

    Condition Coverage
    This technique seeks to verify the accuracy of true or false outcome of each Boolean sub expression. This technique employs tests that measure the sub expressions independently.

    Multiple-Condition Coverage:
    It takes care of covering different conditions, which are interrelated.
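The difference between statement and decision coverage can be seen on a tiny example (the function is illustrative):

```python
def absolute(n):
    """Return |n|; used to contrast statement vs. decision coverage."""
    result = n
    if n < 0:
        result = -n
    return result

# A single test with n = -5 executes every statement (100% statement
# coverage), yet the false outcome of the `if` is never exercised.
assert absolute(-5) == 5
# Decision (branch) coverage additionally requires a case where the
# condition is false:
assert absolute(5) == 5
```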
    Unit Testing (COM/DCOM Technology):
    The integral parts covered under unit testing will be:
    The integral parts covered under unit testing are: the Active Server Page (ASP) that invokes the ATL component (which in turn can use C++ classes); the actual component; the interaction of the component with the persistent store or database; and the database tables. The driver for unit testing a unit belonging to a particular component or subsystem depends on that component alone. Wherever a user interface is available, the UI, called from a web browser, initiates the testing process. If no UI is available, appropriate drivers (for example, code written in C++) are developed for testing.

    Unit testing would also include testing inter-unit functionality within a component. This will consist of two different units belonging to same component interacting with each other. The functionality of such units will be tested with separate unit test(s).
    Each unit of functionality will be tested for the following considerations:
    Type: Type validation that takes into account things such as a field expecting alphanumeric characters should not allow user input of anything other than that.
    Presence: This validation ensures all mandatory fields are present; they should also be mandated by the database by making the column NOT NULL (this can be verified from the low-level design document).
    Size: This validation ensures the size limit for a float or variable character string input from the user not to exceed the size allowed by the database for the respective column.
    Validation: This covers any other business validation that should be applied to a specific field, or to a field that depends on another field (e.g., range validation: body temperature should not exceed 106 degrees Fahrenheit), duplicate checks, etc.
    GUI based: In case the unit is UI based, GUI related consistency check like font sizes, background color, window sizes, message & error boxes will be checked.
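The type, presence, size, and range checks listed above might be sketched as a single field validator; the rules and parameter names are hypothetical:

```python
def validate_field(value, required=True, alnum=False, max_len=None,
                   min_val=None, max_val=None):
    """Return the list of failed checks for one field (empty list = valid)."""
    errors = []
    if value is None or value == "":
        if required:
            errors.append("presence")   # mandatory field missing
        return errors
    if alnum and not str(value).isalnum():
        errors.append("type")           # expected alphanumeric input
    if max_len is not None and len(str(value)) > max_len:
        errors.append("size")           # exceeds the column's size limit
    if min_val is not None and float(value) < min_val:
        errors.append("range")
    elif max_val is not None and float(value) > max_val:
        errors.append("range")          # e.g. temperature above the limit
    return errors
```

For example, validate_field("", required=True) reports a presence failure, while validate_field("107", max_val=106) reports a range failure.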
    2. Integration Testing
    After unit testing, modules shall be assembled or integrated to form the complete software package as indicated by the high level design. Integration testing is a systematic technique for verifying the software structure and sequence of execution while conducting tests to uncover errors associated with interfacing.
    Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. Integration testing is sub-divided as follows:
    i) Top-Down Integration Testing: Top-Down integration is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
    ii) Bottom-Up Integration Testing: Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Since modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated.
    iii)Integration Testing for OO projects:
    Thread-Based Testing: Thread-based testing follows an execution thread through objects to ensure that classes collaborate correctly.
    In thread-based testing:
    • the set of classes required to respond to one input or event for the system is identified;
    • each thread is integrated and tested individually;
    • regression tests are applied to ensure that no side effects occur.
    Use-Based Testing
    Use-based testing evaluates the system in layers. The common practice is to employ the use cases to drive the validation process.
    In use-based testing:
    • initially, independent classes (i.e., classes that use very few other classes) are integrated and tested;
    • next come the dependent classes that use the independent classes, taken in a layered approach;
    • then the next layer of (dependent) classes that use those classes is tested, and so on.
    This sequence is repeated, adding and testing the next layer of dependent classes, until the entire system is tested.
    Integration Testing for Web applications:
    Collaboration diagrams, screens and report layouts are matched to OOAD and associated class integration test case report is generated.
    3. Regression Testing
    Each time a new module is added as part of integration testing, new data flow paths may be established, new I/O may occur, and new control logic may be invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
    Regression testing may be conducted manually, by re-executing a subset of all test cases. The regression test suite (the subset of tests to be executed) contains three different classes of test cases: 
    • A representative sample of tests that will exercise all software functions.
    • Additional tests that focus on software functions and are likely to be affected by the change.
    • Tests that focus on the software components that have been changed. 
    As integration testing proceeds the number of regression tests can grow quite large. Therefore, the regression test suite shall be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
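The three classes of regression tests can be modelled as tags on a test inventory; selecting the subset to re-execute is then a filter. The test names and tags below are illustrative:

```python
# A tagged inventory of existing tests (names and tags are made up).
TESTS = [
    {"name": "test_login_smoke",   "tags": {"representative"}},
    {"name": "test_checkout_flow", "tags": {"affected"}},
    {"name": "test_report_layout", "tags": {"changed"}},
    {"name": "test_legacy_export", "tags": set()},
]

def regression_suite(tests):
    """Select tests belonging to any of the three regression classes."""
    wanted = {"representative", "affected", "changed"}
    return [t["name"] for t in tests if t["tags"] & wanted]

suite = regression_suite(TESTS)
# suite -> ['test_login_smoke', 'test_checkout_flow', 'test_report_layout']
```

Keeping the suite to tagged tests only, rather than every test ever written, is what keeps regression runs practical as the system grows.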
    4. System Testing
    After the software has been integrated (constructed), sets of high order tests shall be conducted. System testing verifies that all elements mesh properly and the overall system function/performance is achieved.
    The purpose of system testing is to fully exercise the computer-based system. The aim is to exercise all system elements and validate conformance against the SRS. System testing is categorized into the following 20 types; the type(s) of testing shall be chosen depending on the customer / system requirements.
    Different types of Tests that comes under System Testing are listed below:
    • Compatibility / Conversion Testing: In cases where the software developed is a plug-in into an existing system, the compatibility of the developed software with the existing system has to be tested. Likewise, the conversion procedures from the existing system to the new software are to be tested.

    • Configuration Testing: Configuration testing includes either or both of the following:
      • testing the software with the different possible hardware configurations
      • testing each possible configuration of the software

      If the software itself can be configured (e.g., components of the program can be omitted or placed in separate processors), each possible configuration of the software should be tested.
      If the software supports a variety of hardware configurations (e.g., different types of I/O devices, communication lines, memory sizes), then the software should be tested with each type of hardware device and with the minimum and maximum configuration.
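The "each type of device, minimum and maximum configuration" advice is essentially a matrix to enumerate. A minimal sketch (the axes and values below are made-up examples, and `run_suite_under` is a hypothetical hook for the real deployment step):

```python
# Hypothetical sketch: enumerating a configuration matrix so every
# combination can be exercised by a configuration test.
from itertools import product

os_platforms = ["Windows", "Linux"]      # example hardware/OS axis
memory_sizes = ["minimum", "maximum"]    # test at both extremes
io_devices   = ["printer", "scanner"]    # example I/O device axis

configurations = list(product(os_platforms, memory_sizes, io_devices))
for cfg in configurations:
    # run_suite_under(cfg) would deploy the software and run the tests here
    print(cfg)

print(len(configurations))  # 2 * 2 * 2 = 8 combinations to cover
```

Even this tiny matrix yields eight runs, which is why configuration testing is often combined with pairwise-reduction techniques on larger products.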

    • Documentation Testing: Documentation testing is concerned with the accuracy of the user documentation. This involves
      i) Review of the user documentation for accuracy and clarity
      ii) Testing the examples illustrated in the user documentation by preparing test cases on the basis of these examples and testing the system

    • Facility Testing: Facility Testing is the determination of whether each facility (or functionality) mentioned in the SRS is actually implemented. The objective is to ensure that all the functional requirements as documented in the SRS are accomplished.

    • Installability Testing: Certain software systems have complicated procedures for installing the system, for instance, the system generation (sysgen) process on IBM mainframes. The testing of these installation procedures is part of System Testing.
      Proper packaging of the application, configuration of various third-party software, and database parameter settings are some issues important for easy installation.

    • Performance Testing: Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all phases of testing. Even at the unit level, the performance of an individual module is assessed as white-box tests are conducted. However, performance testing is complete only when all system elements are fully integrated and the true performance of the system is ascertained as per the customer requirements.
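At its simplest, a unit-level performance check times an operation and compares it against a budget. A minimal sketch, where the workload function and the 0.5-second budget are invented for illustration:

```python
# Hypothetical sketch: a minimal run-time performance check.
import time

def operation_under_test():
    return sum(range(100_000))  # stand-in for the real workload

start = time.perf_counter()
result = operation_under_test()
elapsed = time.perf_counter() - start

# The budget would come from the customer's performance requirements.
assert elapsed < 0.5, f"performance budget exceeded: {elapsed:.3f}s"
print(f"completed in {elapsed:.4f}s")
```

Real performance testing would repeat the measurement many times under realistic load and report percentiles, not a single run.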

    • Performance Testing for Web Applications: The most realistic strategy for rolling out a Web application is to do so in phases. Performance testing must be an integral part of designing, building, and maintaining Web applications. Automated testing tools play a critical role in measuring, predicting, and controlling application performance. There is a paragraph on automated tools available for testing Web Applications at the end of this document.
      In the most basic terms, the final goal for any Web application set for high-volume use is for users to consistently have
      i) continuous availability
      ii) consistent response times, even during peak usage times.
      Performance testing has five manageable phases:
      i) architecture validation
      ii) performance benchmarking
      iii) performance regression
      iv) performance tuning and acceptance
      v) the continuous performance monitoring necessary to control performance and manage growth.

    • Procedure Testing: If the software forms part of a larger, not completely automated system, the interfaces of the developed software with the other components of the larger system shall be tested. These may include procedures to be followed by
      i) The human operator
      ii) Database administrator
      iii) Terminal user
      These procedures are to be tested as part of System testing.

    • Recovery Testing: Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialisation, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the time required to repair is evaluated to determine whether it is within acceptable limits.
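The automatic-recovery case can be sketched in miniature: checkpoint state as work proceeds, simulate a crash, and verify that restart resumes from the last checkpoint. Everything here (the state shape, file location, and simulated failure) is a made-up illustration; real recovery testing would kill processes, pull power, or drop network connections.

```python
# Hypothetical sketch: verifying a checkpoint/restart mechanism.
import json
import os
import tempfile

checkpoint_path = os.path.join(tempfile.gettempdir(), "demo_checkpoint.json")

def save_checkpoint(state):
    with open(checkpoint_path, "w") as f:
        json.dump(state, f)

def restore_checkpoint():
    with open(checkpoint_path) as f:
        return json.load(f)

state = {"records_processed": 0}
try:
    for i in range(10):
        state["records_processed"] = i + 1
        save_checkpoint(state)          # checkpoint after each unit of work
        if i == 4:
            raise RuntimeError("simulated crash")
except RuntimeError:
    recovered = restore_checkpoint()    # restart path: re-initialise from checkpoint

print(recovered)  # {'records_processed': 5}
```

The test passes if the recovered state matches the last checkpoint written before the simulated failure, i.e. no completed work is lost.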

    • Reliability Testing: The various software-testing processes all have the goal of testing software reliability. Reliability Testing, which is a part of System Testing, encompasses the testing of any specific reliability factors that are stated explicitly in the SRS.
      It may not be practical to devise test cases for certain reliability factors. For example, if a system has a downtime objective of two hours or less per forty years of operation, there is no known way of testing this reliability factor. However, if a reliability factor is stated as, say, a mean-time-to-failure (MTTF) of 20 hours, it is possible to devise test cases using mathematical models.
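One such mathematical model, under the common assumption of exponentially distributed failures (an assumption of this sketch, not something the SRS is said to state), turns an MTTF of 20 hours into concrete, testable survival probabilities:

```python
# Sketch assuming an exponential failure model, R(t) = exp(-t / MTTF).
# The 20-hour MTTF comes from the example above; the model choice is ours.
import math

MTTF = 20.0  # hours

def reliability(t):
    """Probability the system survives t hours without failure."""
    return math.exp(-t / MTTF)

print(f"P(no failure in  8h) = {reliability(8):.3f}")   # ~0.670
print(f"P(no failure in 20h) = {reliability(20):.3f}")  # ~0.368 (= 1/e)
```

Predictions like these can then be checked against observed failure counts over long test runs.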

HOW AND WHEN TESTING STARTS

How and When Testing Starts:

Once the Development Team-lead analyzes the requirements, he will prepare the System Requirement Specification and the Requirement Traceability Matrix. After that he will schedule a meeting with the Testing Team (the Test Lead and the testers chosen for that project). The Development Team-lead will explain the project, the overall schedule of modules, deliverables and versions.

The involvement of the Testing team starts from here. The Test Lead will prepare the Test Strategy and Test Plan, which is the schedule for the entire testing process. Here he will plan when each phase of testing, such as Unit Testing, Integration Testing, System Testing and User Acceptance Testing, will take place. Generally, organizations follow the V-Model for their development and testing.
After analyzing the requirements, the Development Team prepares the System Requirement Specification, Requirement Traceability Matrix, Software Project Plan, Software Configuration Management Plan, Software Measurements/Metrics Plan and Software Quality Assurance Plan, and moves to the next phase of the Software Life Cycle, i.e., Design. Here they will prepare some important documents such as the Detailed Design Document, updated Requirement Traceability Matrix, Unit Test Cases Document (which is prepared by the developers if there are no separate white-box testers), Integration Test Cases Document, System Test Plan Document, and Review and SQA Audit Reports for all Test Cases.

After preparation of the Test Plan, the Test Lead distributes the work to the individual testers (white-box testers and black-box testers). The testers' work starts from this stage: based on the Software Requirement Specification/Functional Requirement Document, they prepare Test Cases using a standard template or an automation tool. After that they send them for review to the Test Lead. Once the Test Lead approves them, they prepare the Test Environment/test bed, which is used specifically for testing. Typically the Test Environment replicates the client-side system setup. Now we are ready for testing. While the testing team works on the Test Strategy, Test Plan and Test Cases, the Development team works in parallel on their individual modules. Three or four days before the first release, they give an interim release to the Testing Team, who deploy that software on the test machine, and the actual testing starts. The Testing Team handles configuration management of builds.

After that, the Testing team tests against the Test Cases that were already prepared and reports bugs in a Bug Report template or an automation tool (depending on the organization). They track the bugs by changing the status of each bug at every stage. Once Cycle #1 testing is done, they submit the Bug Report to the Test Lead, who discusses these issues with the Development Team-lead, after which the developers work on and fix those bugs. After all the bugs are fixed, they release the next build. Cycle #2 testing starts at this stage: now we have to run all the Test Cases again and check whether all the bugs reported in Cycle #1 are fixed.

And here we also do regression testing, which means checking whether the changes in the code have any side effects on the already tested code. We repeat the same process till the delivery date. Generally we document four cycles' information in the Test Case Document. At the time of release there should not be any high-severity, high-priority bugs. It may, of course, still have some minor bugs, which are going to be fixed in the next iteration or release (generally called deferred bugs). At the end of delivery, the Test Lead and individual testers prepare some reports. Sometimes the testers also participate in code reviews, which is static testing: they check the code against a checklist of historical logical errors, indentation and proper commenting. The testing team is also responsible for keeping track of change management to deliver a quality, bug-free product.