Thursday 29 October 2015

Risk analysis

There are many techniques for analyzing risk in testing. They include:
§  One technique for risk analysis is a close reading of the requirements specification, design specifications, user documentation and other items.
§  Another technique is brainstorming with many of the project stakeholders.
§  Another is a sequence of one-on-one or small-group sessions with the business and technology experts in the company.
§  Some people use all these techniques when they can. To us, a team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge, wisdom and insight of the entire team to determine what to test and how much.
The scales used to rate likelihood and impact vary. Some people rate them high, medium and low. Some use a 1-10 scale. The problem with a 1-10 scale is that it’s often difficult to tell a 2 from a 3 or a 7 from an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high, medium, low and very low) tends to work well.
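To make the five-point scale concrete, here is a minimal sketch of how likelihood and impact ratings could be combined into a single priority score for ranking risks. The example risks, scale values and function names are illustrative assumptions, not part of any standard.

```python
# Minimal sketch: combining five-point likelihood and impact ratings
# into a single risk priority score. The scale values and example
# risks below are illustrative assumptions.

LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
IMPACT     = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_priority(likelihood: str, impact: str) -> int:
    """Return a priority score (1-25); higher means test earlier and deeper."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Hypothetical risks identified in a brainstorming session.
risks = [
    ("Payment calculation errors", "medium", "very high"),
    ("Slow report generation",     "high",   "medium"),
    ("Cosmetic layout glitches",   "high",   "very low"),
]

# Sort so the highest-priority risks drive the test effort first.
for name, lik, imp in sorted(risks, key=lambda r: -risk_priority(r[1], r[2])):
    print(f"{risk_priority(lik, imp):>2}  {name}  (likelihood={lik}, impact={imp})")
```

Multiplying the two ratings is only one conventional way to combine them; some teams prefer a lookup matrix agreed with the stakeholders, which the sketch could easily be adapted to use.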
Let us also discuss some risks that commonly occur, along with options for managing them:
§  Logistics or product quality problems that block tests: These can be mitigated through careful planning, good defect triage and management, and robust test design.
§  Test items that won’t install in the test environment: These can be mitigated through smoke (or acceptance) testing prior to starting test phases, or as part of a nightly build or continuous integration (a minimal smoke-test sketch follows this list). Having a defined uninstall process is a good contingency plan.
§  Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments: These can be mitigated through good change-control processes, robust test design and lightweight test documentation. When severe incidents occur, transferring the risk by escalating to management is often in order.
§  Insufficient or unrealistic test environments that yield misleading results: One option is to transfer the risk to management by explaining the limits on test results obtained in limited environments. Mitigation (sometimes complete alleviation) can be achieved by outsourcing tests, such as performance tests, that are particularly sensitive to proper test environments.
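As an illustration of the smoke-testing option mentioned in the second bullet, the sketch below shows the kind of check that could run in a nightly build or continuous integration pipeline before the main test phases start. The base URL and the /health and /login paths are hypothetical placeholders for whatever the real system under test exposes.

```python
# Minimal smoke-test sketch for a nightly build or CI pipeline.
# The base URL and the paths below are hypothetical placeholders;
# substitute the real entry points of the system under test.
import sys
import urllib.request

BASE_URL = "http://test-env.example.local:8080"

CHECKS = [
    ("/health", 200),   # service is up after installation
    ("/login",  200),   # key entry page renders
]

def smoke_test() -> bool:
    for path, expected_status in CHECKS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                if resp.status != expected_status:
                    print(f"FAIL {path}: got {resp.status}, expected {expected_status}")
                    return False
        except Exception as exc:
            print(f"FAIL {path}: {exc}")
            return False
        print(f"PASS {path}")
    return True

if __name__ == "__main__":
    # A non-zero exit code lets the build server block further test phases.
    sys.exit(0 if smoke_test() else 1)
```

Failing fast here keeps the later, more expensive test phases from running against a build that never installed or started correctly.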
Let us also go through some additional risks and possible ways to manage them:
§  Organizational issues such as shortages of people, skills or training; problems with communicating and responding to test results; unrealistic expectations of what testing can achieve; and complexity of the project team or organization.
§  Supplier issues such as problems with underlying platforms or hardware, failure to consider testing issues in the contract or failure to properly respond to the issues when they arise.
§  Technical issues related to ambiguous, conflicting or unprioritized requirements, an excessively large number of requirements given other project constraints, high system complexity and quality problems with the design, the code or the tests.
It is important to keep in mind that not all projects are subject to the same risks.
Finally, we should not forget that the testing work products themselves can have risks associated with them.
For example, there is a risk that the test plan will omit tests for a functional area or that the test cases do not exercise the critical areas of the system.
By using a test plan template like the IEEE 829 template shown earlier, you can remind yourself to consider and manage risks during the planning phase. It is worth revisiting the analysis as the project progresses, because risks identified early are educated guesses, and some of those guesses might be wrong. Make sure that you plan to re-assess and adjust the risks at regular intervals in the project and make appropriate course corrections to the testing or the project itself.
