Wednesday, June 16, 2010

What is Retesting and Regression Testing?



Regression testing

Throughout all testing cycles, regression test cases are run. Regression testing is
selective retesting of a system or component to verify that modifications have not caused
unintended effects and that the system or component still complies with its specified
requirements.
Regression tests are a subset of the original set of test cases. These test cases are re-run often, after any significant change (bug fix or enhancement) is made to the code. The purpose of running regression test cases is to perform a “spot check”: to examine whether the new code works properly and has not damaged any previously working functionality by propagating unintended side effects.
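
As a minimal sketch of the idea (the apply_discount function and its rules are invented for illustration; they do not come from this post), a small regression suite in Python might look like this and be re-run after every change:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class RegressionSuite(unittest.TestCase):
    """Re-run after every bug fix or enhancement as a 'spot check' that
    previously working behaviour is still intact."""

    def test_basic_discount_still_works(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_still_works(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_still_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

The suite is kept deliberately small and fast, so that re-running it after every change costs little.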

Retesting


Retesting is the testing of a system or component to verify that, after modification, it still complies with its specified requirements.

Re-testing is performed after a modification to verify that the new code works correctly. In re-testing, the tester re-executes the test cases on the same application build, often with different inputs or test data.
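
Continuing the hypothetical example above (the function and the defect are again invented for illustration), re-testing might mean re-executing the very test case that exposed a defect, on the new build, together with variations of the same scenario using different test data:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function whose rounding defect was fixed in this build."""
    return round(price * (100 - percent) / 100, 2)

class RetestFixedDefect(unittest.TestCase):
    """Re-execute the test case that originally failed, on the new build,
    with different inputs covering the same scenario."""

    def test_discount_rounding_fixed(self):
        cases = [
            (19.99, 15, 16.99),  # the input that originally failed
            (9.99, 33, 6.69),    # same scenario, different test data
            (5.00, 10, 4.50),
        ]
        for price, percent, expected in cases:
            with self.subTest(price=price, percent=percent):
                self.assertAlmostEqual(apply_discount(price, percent), expected)

if __name__ == "__main__":
    unittest.main()
```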

What are Exit Criteria?



The testing strategy or high-level test plan document is quite often used as the exit criteria. The exit criteria will also include details such as how thoroughly the tests need to have been performed. For example, they may state that everything must have been tested and no serious errors found, although minor errors are acceptable.

Testing is an iterative process and, since it is impossible to test everything, we could keep going round the loop indefinitely.

If the exit criteria have been stated in advance, then testing can stop once they have been met. In other words, the exit criteria are the criteria that must be met in order to stop testing.

The exit criteria will therefore depend on the following:

· The risk the project poses to the business process.

· The time constraints within the project.

· The resource constraints within the project.

· The budget of the project.
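
As an illustrative sketch (the metric names and thresholds below are invented for this example; each project would set its own based on risk, time, resources and budget), exit criteria can be expressed as a mechanical check against the current test-run metrics:

```python
def exit_criteria_met(metrics):
    """Hypothetical exit-criteria check: stop testing only when every
    agreed condition holds. Thresholds are examples, not a standard."""
    return (
        metrics["planned_tests_run_pct"] >= 100    # everything executed
        and metrics["pass_rate_pct"] >= 95         # minor failures tolerated
        and metrics["open_critical_defects"] == 0  # no serious errors left
    )

current_run = {
    "planned_tests_run_pct": 100,
    "pass_rate_pct": 97,
    "open_critical_defects": 0,
}
print("Stop testing?", exit_criteria_met(current_run))  # True
```

Once every condition in the agreed list holds, testing can stop; until then, the loop continues.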

What is root cause analysis?



Root cause analysis (RCA) is a class of problem solving methods aimed at identifying the root causes of problems or events. The practice of RCA is predicated on the belief that problems are best solved by attempting to correct or eliminate root causes, as opposed to merely addressing the immediately obvious symptoms. By directing corrective measures at root causes, it is hoped that the likelihood of problem recurrence will be minimized. However, it is recognized that complete prevention of recurrence by a single intervention is not always possible. Thus, RCA is often considered to be an iterative process, and is frequently viewed as a tool of continuous improvement.
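
One simple, widely used RCA technique is the “5 Whys”. The toy walk-through below (the defect and the cause chain are entirely invented) shows the idea of following a symptom back to a root cause:

```python
# Toy "5 Whys" walk-through for a hypothetical defect. RCA techniques vary;
# this just illustrates tracing a symptom back to a root cause.
five_whys = [
    ("Why did the release fail in production?",
     "A null value crashed the payment module."),
    ("Why was there a null value?",
     "The upstream service omitted an optional field."),
    ("Why wasn't that handled?",
     "The contract test didn't cover optional fields."),
    ("Why didn't the contract test cover it?",
     "The API spec was updated without notifying QA."),
    ("Why wasn't QA notified?",
     "There is no change-notification step in the process."),
]

for question, answer in five_whys:
    print(f"{question}\n  -> {answer}")

# Corrective action targets the last answer (the root cause),
# not the first symptom.
print("\nRoot cause:", five_whys[-1][1])
```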

Tuesday, June 15, 2010

V-Model

The V-model was developed to address some of the problems experienced with the traditional waterfall approach. Defects were being found too late in the life cycle, as testing was not involved until the end of the project. Testing also added lead time due to its late involvement. The V-model provides guidance that testing needs to begin as early as possible in the life cycle, which is one of the fundamental principles of structured testing.

It also shows that testing is not only an execution-based activity. There are a variety of activities that need to be performed before the end of the coding phase. These activities should be carried out in parallel with development activities, and testers need to work with developers and business analysts so they can perform these activities and tasks and produce a set of test deliverables.

The work products produced by the developers and business analysts during development are the basis of testing at one or more levels. By starting test design early, defects are often found in the test basis documents. A good practice is to have testers involved even earlier, during the review of the (draft) test basis documents.

The V-model is a model that illustrates how testing activities (verification and validation) can be integrated into each phase of the life cycle. Within the V-model, validation testing takes place especially during the early stages, e.g. reviewing the user requirements, and late in the life cycle, e.g. during user acceptance testing. Although variants of the V-model exist, a common type of V-model uses four test levels.

The four test levels used, each with their own objectives, are:
• Component testing: searches for defects in and verifies the functioning of software components (e.g. modules, programs, objects, classes etc.) that are separately testable;
• Integration testing: tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems;
• System testing: concerned with the behavior of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements;
• Acceptance testing: validation testing with respect to user needs, requirements, and business processes conducted to determine whether or not to accept the system.
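
As a small illustration of the two lowest levels (all function and test names here are hypothetical), a component test exercises one unit in isolation, while an integration test exercises the interface between two units:

```python
import unittest

def parse_amount(text):
    """Component A (hypothetical): parse '12.50' into whole cents."""
    return int(round(float(text) * 100))

def format_amount(cents):
    """Component B (hypothetical): format cents back into text."""
    return f"{cents / 100:.2f}"

class ComponentTest(unittest.TestCase):
    """Component (unit) level: each part is separately testable."""
    def test_parse(self):
        self.assertEqual(parse_amount("12.50"), 1250)

class IntegrationTest(unittest.TestCase):
    """Integration level: the interface between the two parts."""
    def test_round_trip(self):
        self.assertEqual(format_amount(parse_amount("12.50")), "12.50")

if __name__ == "__main__":
    unittest.main()
```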

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing. Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For the integration of a commercial off-the-shelf (COTS) software product into a system, a purchaser may perform only integration testing at the system level (e.g. integration with the infrastructure and other systems) and, at a later stage, acceptance testing.



Note that the types of work products shown on the left side of the V-model are just an illustration; in practice they come under many different names. References for generic work products include the Capability Maturity Model Integration (CMMi) and the 'Software life cycle processes' from ISO/IEC 12207. The CMMi is a framework for process improvement for both systems engineering and software engineering. It provides guidance on where to focus, and how, in order to increase the level of process maturity.