Thursday 29 October 2015

Incident in software testing


§  While executing a test, you might observe that the actual results vary from the expected results. When the actual result differs from the expected result, it is called an incident; incidents are also referred to as bugs, defects, problems or issues.
§  To be specific, we sometimes distinguish between incidents and defects or bugs. An incident is any situation where the system exhibits questionable behavior, but we usually call an incident a defect only when the root cause is a problem in the item we are testing.
§  Other causes of incidents include misconfiguration or failure of the test environment, corrupted test data, bad tests, invalid expected results and tester mistakes.

 Incident reports in software testing


After logging incidents, whether they are found during testing or in the field after deployment of the system, we also need some way of reporting, tracking and managing them. Defects are most commonly reported against the code or the system itself. However, defects may also be reported against requirements and design specifications, user and operator guides, and even the tests themselves.
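To make this concrete, an incident report can be modeled as a simple record. The following Python sketch is illustrative only: the field names are assumptions that mirror the details most incident-tracking tools capture, not a mandated format.

from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3


@dataclass
class IncidentReport:
    # A minimal incident report record; all fields are illustrative.
    summary: str                  # one-line description of the problem
    steps_to_reproduce: list      # ordered steps that trigger the behavior
    expected_result: str          # what the test basis says should happen
    actual_result: str            # what the system actually did
    severity: Severity            # impact of the failure
    reported_against: str         # code, requirement, design spec, guide or test
    status: str = "open"          # lifecycle state: open, confirmed, fixed, closed


# Example: an incident whose root cause turns out to be in the code itself,
# which makes it a defect rather than, say, a test environment problem.
report = IncidentReport(
    summary="Login screen accepts an empty password",
    steps_to_reproduce=["Open the login page", "Submit the form with an empty password"],
    expected_result="A validation error is shown",
    actual_result="The user is logged in",
    severity=Severity.CRITICAL,
    reported_against="code",
)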

Risk analysis


There are many techniques for analyzing risks. They include:
§  One technique for risk analysis is a close reading of the requirements specification, design specifications, user documentation and other items.
§  Another technique is brainstorming with many of the project stakeholders.
§  Another is a sequence of one-on-one or small-group sessions with the business and technology experts in the company.
§  Some people use all these techniques when they can. To us, a team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge, wisdom and insight of the entire team to determine what to test and how much.
The scales used to rate likelihood and impact vary. Some people rate them high, medium and low. Some use a 1-10 scale. The problem with a 1-10 scale is that it’s often difficult to tell a 2 from a 3 or a 7 from an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high, medium, low and very low) tends to work well.
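As a sketch of how such a scale can yield a single rating, assume each point on a five-point scale maps to a number from 1 to 5 and that likelihood and impact are multiplied; the mapping and the example values below are assumptions for illustration.

# Five-point scales for likelihood and impact, mapped to the numbers 1-5.
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}


def risk_score(likelihood: str, impact: str) -> int:
    # Combine likelihood and impact into a single score between 1 and 25.
    return SCALE[likelihood] * SCALE[impact]


# A function that rarely fails but is business-critical still scores high:
print(risk_score("low", "very high"))   # 2 * 5 = 10
print(risk_score("medium", "medium"))   # 3 * 3 = 9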
Let us also discuss some risks that commonly occur, along with some options for managing them:
§  Logistics or product quality problems that block tests: These can be mitigated by careful planning, good defect triage and management, and robust test design.
§  Test items that won’t install in the test environment: These can be mitigated through smoke (or acceptance) testing prior to starting test phases, or as part of a nightly build or continuous integration (a minimal smoke-test sketch follows this list). Having a defined uninstall process is a good contingency plan.
§  Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments: These can be mitigated through good change-control processes, robust test design and lightweight test documentation. When severe incidents occur, transferring the risk by escalating to management is often in order.
§  Insufficient or unrealistic test environments that yield misleading results: One option is to transfer the risk to management by explaining the limits on test results obtained in limited environments. Mitigation – sometimes complete alleviation – can be achieved by outsourcing tests, such as performance tests, that are particularly sensitive to the test environment.
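As an illustration of the smoke-testing mitigation mentioned in the list above, a small automated check can gate entry into a test phase. This is a minimal sketch: the URL is a hypothetical test environment, and a real smoke suite would also cover installation and a handful of key functions.

import unittest
import urllib.request


class SmokeTest(unittest.TestCase):
    # Minimal check that the build installed and is worth testing further.

    BASE_URL = "http://testenv.example.com"  # hypothetical test environment

    def test_application_responds(self):
        # If even the home page does not respond, block entry to the test phase.
        with urllib.request.urlopen(self.BASE_URL, timeout=10) as response:
            self.assertEqual(response.status, 200)


if __name__ == "__main__":
    unittest.main()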
Let us also go through some additional risks and perhaps ways to manage them:
§  Organizational issues such as shortages of people, skills or training; problems with communicating and responding to test results; unrealistic expectations of what testing can achieve; and complexity of the project team or organization.
§  Supplier issues such as problems with underlying platforms or hardware, failure to consider testing issues in the contract or failure to properly respond to the issues when they arise.
§  Technical issues related to ambiguous, conflicting or unprioritized requirements, an excessively large number of requirements given other project constraints, high system complexity and quality problems with the design, the code or the tests.
It is important to keep in mind that not all projects are subject to the same risks.
Finally, we should not forget that even test items can also have risks associated with them.
For example, there is a risk that the test plan will omit tests for a functional area or that the test cases do not exercise the critical areas of the system.
By using a test plan template like the IEEE 829 template shown below, you can remind yourself to consider and manage risks during the planning phase. It is worth revisiting the risk analysis as the project progresses, because risks identified early are educated guesses, and some of those guesses will be wrong. Make sure that you plan to re-assess and adjust your risks at regular intervals in the project and make appropriate course corrections to the testing or the project itself.

Risk-based testing


Risk-based testing is testing organized around risk: it uses risk to prioritize and emphasize the appropriate tests during test execution. The underlying idea is that we can organize our testing efforts in a way that reduces the residual level of product risk when the system is deployed (a prioritization sketch follows the list below).
§  Risk-based testing starts early in the project, identifying risks to system quality and using that knowledge of risk to guide testing planning, specification, preparation and execution.
§  Risk-based testing involves both mitigation – testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects – and contingency – testing to identify work-arounds to make the defects that do get past us less painful.
§  Risk-based testing also involves measuring how well we are doing at finding and removing defects in critical areas.
§  Risk-based testing can also involve using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help us select which test activities to perform.
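As a sketch of how a risk analysis might drive prioritization, the suite below is ordered by the same likelihood-times-impact score discussed earlier; the test names and ratings are hypothetical.

# Hypothetical test cases tagged with likelihood and impact ratings (1-5).
test_cases = [
    {"name": "checkout_payment", "likelihood": 3, "impact": 5},
    {"name": "profile_avatar_upload", "likelihood": 4, "impact": 1},
    {"name": "login", "likelihood": 2, "impact": 5},
]

# Run the riskiest tests first so that, if time runs out, the residual
# product risk left untested is as low as possible.
by_risk = sorted(test_cases, key=lambda tc: tc["likelihood"] * tc["impact"], reverse=True)

for tc in by_risk:
    print(tc["name"], tc["likelihood"] * tc["impact"])
# checkout_payment 15, then login 10, then profile_avatar_upload 4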

The goal of risk-based testing cannot, practically speaking, be a risk-free project. What risk-based testing can do is carry out the testing with best practices in risk management, so as to achieve a project outcome that balances risks with quality, features, budget and schedule.

Risk in software testing


In software testing, risks are the possible problems that might endanger the objectives of the project stakeholders. A risk is the possibility of a negative or undesirable outcome: something that has not happened yet and may never happen; it is a potential problem.
A risk has some probability between 0% and 100% of occurring in the future; it is a possibility, not a certainty.
The level of risk depends on the likelihood of the problem occurring and the severity of its possible negative consequences.
For example, most people catch a cold at some point in their lives, usually more than once, but a healthy individual suffers no serious consequences, so the overall level of risk associated with colds is low for that person. On the other hand, the risk of a cold for an elderly person with breathing difficulties is high, so for that person the overall level of risk associated with colds is high.
We can classify risks into the following categories:
1.     Product risk (factors relating to what is produced by the work, i.e. the thing we are testing).
2.     Project risk (factors relating to the way the work is carried out, i.e. the test project).

Product risk in software testing

Product risk is the possibility that the system or software might fail to satisfy or fulfill some reasonable expectation of the customer, user or stakeholder. (Some authors also call product risks ‘quality risks’, as they are risks to the quality of the product.)
Product risks that can endanger the product or software include:
§  The software omits some key function that the customers specified, the users required or the stakeholders were promised.
§  The software is unreliable and frequently fails to work.
§  The software fails in ways that cause financial or other damage to a user or the company that the user works for.
§  The software has problems related to a particular quality characteristic, which might not be functionality but rather security, reliability, usability, maintainability or performance.
Two quick tips about product risk analysis:
First, remember to consider both the likelihood of the risk occurring and the impact of the risk. While you may feel proud of finding lots of defects, testing is also about building confidence in key functions. We need to test the things that probably won’t break but would be very bad if they did.
Second, early risk analyses are often educated guesses. At key project milestones, it’s important to revisit and follow up on the risk analysis.

 Project risk in software testing

Testing is an activity like the rest of the project, and thus it is subject to risks that endanger the project.
Project risks that can endanger the project include:
§  Direct risks such as the late delivery of the test items to the test team or availability issues with the test environment.
§  There are also indirect risks such as excessive delays in repairing defects found in testing or problems with getting professional system administration support for the test environment.
For any risk, whether a project risk or a product risk, there are four typical actions that we can take:
§  Mitigate: Take steps in advance to reduce the possibility and impact of the risk.
§  Contingency: Have a plan in place to reduce the impact of the risk should it become an outcome.
§  Transfer: Convince some other member of the team or project stakeholder to reduce the probability or accept the impact of the risk.
§  Ignore: Disregard the risk, which is usually a good option only when there is little that can be done or when the possibility and impact of the risk are low for the project.


Purpose and importance of test plans in software testing


A test plan is the project plan for the testing work to be done. It is not a test design specification, a collection of test cases or a set of test procedures; in fact, most test plans do not address that level of detail. Many people have different definitions for test plans.
Why is it required to write test plans? There are three main reasons:
First, writing a test plan guides our thinking. It forces us to confront the challenges that await us and to focus our thinking on important topics. Using a template for writing test plans helps us remember the important challenges. You can use the IEEE 829 test plan template shown below, use someone else’s template, or create your own template over time.
Second, the test planning process and the plan itself serve as a means of communication with other members of the project team: testers, peers, managers and other stakeholders. This communication allows the test plan to influence the project team and the project team to influence the test plan, especially in the areas of organization-wide testing policies and motivations; test scope, objectives and critical areas to test; project and product risks, resource considerations and constraints; and the testability of the item under test. We can complete this communication by circulating one or two test plan drafts and through review meetings.
Third, the test plan helps us manage change. During the early phases of the project, as we gather more information, we revise our plans. In some situations it is better to write multiple test plans. For example, when we manage both the integration and system test levels, those two test execution periods occur at different points in time and have different objectives. For some systems projects, a hardware test plan and a software test plan will address different techniques and tools as well as different audiences. However, these test plans can overlap, so a master test plan that addresses the common elements of both can reduce the amount of redundant documentation.

IEEE 829 STANDARD TEST PLAN TEMPLATE
Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass/fail criteria
Suspension and resumption criteria
Test deliverables
Test tasks
Environmental needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals

Things to keep in mind while planning tests

A good test plan is kept short and focused. At a high level, you need to consider the purpose served by the testing work. Hence, it is important to keep the following things in mind while planning tests:
§  What is in scope and what is out of scope for this testing effort?
§  What are the test objectives?
§  What are the important project and product risks? (Risks are discussed in detail in the sections above.)
§  What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
§  What is most critical for this product and project?
§  Which aspects of the product are more (or less) testable?
§  What should be the overall test execution schedule and how should we decide the order in which to run specific tests? (Product and planning risks, discussed above, will influence the answers to these questions.)
§  How should the testing work be split into various levels (e.g., component, integration, system and acceptance)?
§  If that decision has already been made, you need to decide how best to fit the testing work in the level you are responsible for with the testing work done in the other test levels.
§  During the analysis and design of tests, you’ll want to reduce gaps and overlap between levels and, during test execution, you’ll want to coordinate between the levels. Such details dealing with inter-level coordination are often addressed in the master test plan.
§  In addition to integrating and coordinating between test levels, you should also plan to integrate and coordinate all the testing work to be done with the rest of the project. For example, what items must be acquired for the testing?
§  When will the programmers complete work on the system under test?
§  What operations support is required for the test environment?
§  What kind of information must be delivered to the maintenance team at the end of testing?
§  What resources are required to carry out the work?
Now, think about what would need to be true about the project for it to be ready to start executing tests, and what would need to be true for it to declare test execution done. At what point can you safely start a particular test level or phase, test suite or test target? When can you finish it? The factors to consider in such decisions are often called ‘entry criteria’ and ‘exit criteria.’ Typical factors include the following (a small sketch of checking such criteria follows the list):

§  Acquisition and supply: the availability of staff, tools, systems and other materials required.
§  Test items: the state that the items to be tested must be in to start and to finish testing.
§  Defects: the number known to be present, the arrival rate, the number predicted to remain, and the number resolved.
§  Tests: the number run, passed, failed, blocked, skipped, and so forth.
§  Coverage: the portions of the test basis, the software code or both that have been tested and those that have not.
§  Quality: the status of the important quality characteristics for the system.
§  Money: the cost of finding the next defect in the current level of testing compared to the cost of finding it in the next level of testing (or in production).
§  Risk: the undesirable outcomes that could result from shipping too early (such as latent defects or untested areas) – or too late (such as loss of market share).
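As an illustration, exit criteria built from factors like these can be checked mechanically at the end of a test phase. The metric names and thresholds below are assumptions for the sketch; a real test plan defines its own.

def exit_criteria_met(metrics: dict) -> bool:
    # Illustrative exit criteria: pass rate, open critical defects, coverage.
    return (
        metrics["tests_passed"] / metrics["tests_run"] >= 0.95   # pass rate
        and metrics["open_critical_defects"] == 0                # quality gate
        and metrics["requirements_covered"] >= 0.90              # coverage
    )


status = {
    "tests_run": 400,
    "tests_passed": 390,              # 97.5% pass rate: criterion met
    "open_critical_defects": 1,       # one critical defect still open
    "requirements_covered": 0.92,
}
print(exit_criteria_met(status))      # False: the open critical defect blocks exit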

When writing exit criteria, we try to remember that a successful project is a balance of quality, budget, schedule and feature considerations. This is even more important when applying exit criteria at the end of the project.

Roles and responsibilities of a tester


§  In the planning and preparation phases of testing, testers should review and contribute to test plans, as well as analyze, review and assess requirements and design specifications. They may be involved in or even be the primary people identifying test conditions and creating test designs, test cases, test procedure specifications and test data, and they may automate or help to automate the tests.
§  They often set up the test environments or assist system administration and network management staff in doing so.
§  As test execution begins, the number of testers often increases, starting with the work required to implement tests in the test environment.
§  Testers execute and log the tests, evaluate the results and document problems found.
§  They monitor the testing and the test environment, often using tools for this task, and often gather performance metrics.

§  Throughout the testing life cycle, they review each other’s work, including test specifications, defect reports and test results.