Monday, August 30, 2010

SCENARIO TESTING

Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program and easy to evaluate for the tester. They provide meaningful combinations of functions and variables rather than the more artificial combinations you get with domain testing or combinatorial test design.

VOLUME TESTING

Volume testing evaluates the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limits of the system.

Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction-processing systems capturing real-time sales, or systems performing database updates and/or data retrieval.

Volume testing will seek to verify the physical and logical limits to a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization’s business processing.
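The idea can be sketched with a small script: push a large batch of records through a database and check both capacity and timing. The table, row count and time budget below are illustrative assumptions, not taken from any real system.

```python
import sqlite3
import time

# Hypothetical volume test: process a large batch of records and verify
# the system keeps all of them within an assumed time budget.
ROWS = 100_000
TIME_BUDGET_SECONDS = 30  # assumed acceptable limit

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")

start = time.monotonic()
conn.executemany(
    "INSERT INTO sales (amount) VALUES (?)",
    ((float(i % 500),) for i in range(ROWS)),
)
conn.commit()
elapsed = time.monotonic() - start

# Verify the physical/logical limits: no records dropped, time acceptable.
count = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
assert count == ROWS, "system lost records under volume"
assert elapsed < TIME_BUDGET_SECONDS, "volume run exceeded the time budget"
print(f"processed {count} rows in {elapsed:.2f}s")
```

In a real volume test the data would go through the application's own interfaces rather than straight into a database, but the shape of the check – volume in, capacity and time verified out – is the same.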

RECOVERY TESTING

Recovery testing checks how quickly and how well an application can recover from crashes, hardware failures, and other catastrophic problems. The type and extent of recovery are specified in the requirement specifications.

EXPLORATORY TESTING

This testing is similar to ad-hoc testing and is done in order to learn/explore the application.

Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. At least unconsciously, testers perform exploratory testing at one time or another. Yet it doesn't get much respect in our field. It can be considered as "scientific thinking" in real time.

Friday, August 27, 2010

Bug Reporting

Introduction:

As testers, we all agree that the basic aim of the tester is to decipher bugs. Whenever a build appears for testing, the primary objective is to find as many bugs as possible from every corner of the application. To accomplish this task to perfection, we perform testing from various perspectives. We strain the application before us through various kinds of strainers like boundary value analysis, validation checks, verification checks, GUI, interoperability, integration tests, functional – business concepts checking, backend testing (like using SQL commands on the DB, or injections), security tests, and many more. This makes us drill deep into the application as well as the business.

We would agree that bug awareness is of no use until it is well documented. Here comes the role of BUG REPORTS. The bug reports are our primary work product. This is what people outside the testing group notice. These reports play an important role in the Software Development Life Cycle – in various phases they are referenced by testers, developers, managers, top shots and, not to forget, the clients who these days demand the test reports. So, the bug reports are remembered the most.

Once the bugs are reported by the testers and submitted to the developers to work upon, we often see some kinds of confrontations – there are humiliations which testers face sometimes, there are cold wars – nonetheless the discussions take the shape of mini quarrels. Yet at times testers and developers are saying the same thing, or both are correct, but the depictions of their understanding are different, and that makes all the difference. In such a situation, we come to a stand-apart: the best tester is not the one who finds most of the bugs or the one who embarrasses most programmers, but the one who gets most of the bugs fixed.

Bug Reporting – An Art:

The first aim of the Bug Report is to let the programmer see the failure. The Bug Report gives a detailed description so that the programmers can reproduce the failure for themselves. If the Bug Report does not accomplish this mission, there can be pushback from the development team saying – not a bug, cannot reproduce, and many other reasons.

Hence it is important that the BUG REPORT be prepared by the testers with utmost proficiency and specificity. It should basically describe the famous three What's, well described as:

What we did:

* Module, Page/Window – names that we navigate to
* Test data entered and selected
* Buttons and the order of clicking

What we saw:

* GUI Flaws
* Missing or No Validations
* Error messages
* Incorrect Navigations

What we expected to see:

* GUI Flaw: give screenshots with highlight
* Incorrect message – give correct language, message
* Validations – give correct validations
* Error messages – justify with screenshots
* Navigations – mention the actual pages

Pointers to effective reporting can be well derived from above three What's. These are:

1. BUG DESCRIPTION should be clearly identifiable – a bug description is a short statement that briefly describes what exactly the problem is. A problem might require 5-6 steps to reproduce, but this statement should still clearly identify what exactly the problem is. The problem might be a server error, but the description should be clear, saying: Server Error occurs while saving a new record in the Add Contact window.

2. Bug should be reported after building a proper context – PRE-CONDITIONS for reproducing the bug should be defined so as to reach the exact point where bug can be reproduced. For example: If a server error appears while editing a record in the contacts list, then it should be well defined as a pre-condition to create a new contact and save successfully. Double click this created contact from the contacts list to open the contact details – make changes and hit save button.

3. STEPS should be clear, with short and meaningful sentences – nobody wishes to study an entire paragraph of long complex words and sentences. Make your report step-wise by numbering 1, 2, 3… Make each sentence small and clear. Only write those findings or observations that are necessary for the respective bug. Writing facts that are already known, or something that does not help in reproducing the bug, makes the report unnecessarily complex and lengthy.

4. Cite examples wherever necessary – combinations of values, test data: most of the time the bug can be reproduced only with a specific set of data or values. Hence, instead of writing an ambiguous statement like "enter an invalid phone number and hit save", one should mention the data/value entered, like "enter the phone number as 012aaa@$%.- and save".

5. Give references to specifications – if any bug arises that contradicts the SRS or any functional document of the project, it is always good practice to mention the section and page number for reference. For example: Refer to page 14 of SRS, section 2-14.

6. Report without passing any kind of judgment in the bug description – the bug report should not be judgmental in any case, as this leads to controversy and gives an impression of bossiness. Remember, a tester should always be polite so as to keep his bug up and meaningful. Being judgmental makes developers think as though testers know more than them and, as a result, gives birth to a psychological adversity. To avoid this, we can use the word "suggestion", and discuss it with the developers or the team lead. We can also refer to some other application, or some module or page in the same application, to strengthen our point.

7. Assign severity and priority – SEVERITY is the state or quality of being severe. Severity tells us HOW BAD the BUG is. It defines the importance of BUG from FUNCTIONALITY point of view and implies adherence to rigorous standards or high principles. Severity levels can be defined as follows:

Urgent/Show-stopper: Like a system crash or an error message forcing the window to close; the system stops working totally or partially. A major area of the user's system is affected by the incident and it is significant to business processes.

Medium/Workaround: When something required by the specs has a problem, but the tester can go on with testing. It affects a more isolated piece of functionality. It occurs only at one or two customers, or is intermittent.

Low: Failures that are unlikely to occur in normal use. Problems that do not impact use of the product in any substantive way, and have no or very low impact on business processes. In all cases, state the exact error messages.

PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer's point of view, voices the precedence established by urgency, and is associated with scheduling the bug. Priority levels can be defined as follows:

High: This has a major impact on the customer. This must be fixed immediately.

Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development or a patch must be issued if possible.

Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.

Wednesday, August 25, 2010

TESTING GLOSSARY

Adhoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

Adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability.

Agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development.

Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system.
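A minimal sketch of a concurrency test, using a lock-protected counter as a stand-in for the component under test: two interleaved writers must not lose updates.

```python
import threading

# Stand-in for the component under test: a counter that must handle
# simultaneous activity without losing updates.
class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:  # serialize the read-modify-write
            self.value += 1

counter = Counter()
workers = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(2)
]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Two activities in the same interval of time: every update must survive.
assert counter.value == 20_000, "lost updates under concurrent execution"
print("concurrency check passed:", counter.value)
```

Removing the lock in the sketch is the classic way such a test exposes a defect: interleaved read-modify-write sequences silently drop increments.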

Data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.
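The technique can be sketched as follows. The phone-number validator and the rows in the table are assumptions for illustration; in practice the table would be loaded from a spreadsheet or CSV file.

```python
import re

# Stand-in function under test: a simple 10-digit phone-number validator.
def is_valid_phone(number):
    return re.fullmatch(r"\d{10}", number) is not None

# Each row: (test input, expected result). In real data-driven testing
# this table would live in a spreadsheet read by the control script.
test_table = [
    ("0123456789", True),
    ("012aaa@$%.-", False),
    ("", False),
    ("12345", False),
]

# A single control script executes every row of the table.
for value, expected in test_table:
    actual = is_valid_phone(value)
    assert actual == expected, f"{value!r}: expected {expected}, got {actual}"
print(f"all {len(test_table)} data-driven cases passed")
```

Adding a new test case is then a one-line change to the data, not a change to the script – which is the point of the technique.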

Random testing: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
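A sketch of the idea: draw test inputs pseudo-randomly, weighted by an assumed operational profile. The operations and weights below are illustrative, not from any real system.

```python
import random

random.seed(42)  # fixed seed so the random run is reproducible

# Assumed operational profile: how often each operation occurs in the field.
operations = ["search", "add_contact", "delete_contact"]
profile = [0.7, 0.2, 0.1]

# Pseudo-randomly select 1000 test steps matching the profile.
selected = random.choices(operations, weights=profile, k=1000)
counts = {op: selected.count(op) for op in operations}

# The generated suite should mirror the profile: common operations dominate.
assert counts["search"] > counts["add_contact"] > counts["delete_contact"]
print("generated test mix:", counts)
```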

Wednesday, August 11, 2010

ECP & BVA

Equivalence Testing

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.


Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
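Guideline 1 can be sketched for an assumed age field accepting 18 to 60: the range yields one valid and two invalid equivalence classes, and one representative test case is drawn from each instead of testing every possible value.

```python
# Assumed spec: an age field accepting 18..60 inclusive.
LOW, HIGH = 18, 60

def classify(age):
    """Place an input into one of the three equivalence classes."""
    if age < LOW:
        return "invalid-below"
    if age > HIGH:
        return "invalid-above"
    return "valid"

# One representative value per class (guideline 1: one valid, two invalid).
representatives = {"invalid-below": 5, "valid": 35, "invalid-above": 99}

for expected_class, value in representatives.items():
    assert classify(value) == expected_class
print("one test case per equivalence class:", representatives)
```

Any other member of a class is assumed to behave the same way, which is how partitioning reduces the number of test cases needed.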


• Test case Design for Equivalence partitioning

1. Good test case reduces by more than one the number of other test cases which must be developed
2. Good test case covers a large set of other possible cases
3. Classes of valid inputs
4. Classes of invalid inputs


Boundary testing

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also.

BVA guidelines include:

For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
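The first guideline can be sketched as a small generator of boundary inputs, assuming integer-valued inputs:

```python
# For a range bounded by a and b, BVA selects a, b, and the values
# just below and just above each boundary (integer inputs assumed).
def boundary_values(a, b):
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# Example: a field accepting 1..100 yields six boundary test inputs.
values = boundary_values(1, 100)
assert values == [0, 1, 2, 99, 100, 101]
print("boundary test inputs:", values)
```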

Wednesday, August 4, 2010

Test Director FAQ's

Q: What is TestDirector?
TestDirector is a test management tool produced by Mercury Interactive. Its four modules - Requirements, Test Plan, Test Lab and Defects Manager - are integrated to enable information to flow smoothly between different stages of the testing process. Completely Web-enabled, TestDirector supports communication and collaboration among distributed testing teams.
TestDirector has been classified in the following categories:
Defect Tracking
Testing and Analysis
Debugging
Automated Software Quality (ASQ)

Q: What is the use of Test Director software?
TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests and build test cycles.
TestDirector is not used for executing any actual test activity but it is a test management tool used for Requirements Management, Test Plan, Test Lab, and Defects Management. Even if the individual test cases are not automated, TestDirector can make life much easier during the test cycles.

Q: How you integrated your automated scripts with TestDirector?
When you work with WinRunner, you can choose to save your tests directly to your TestDirector database; alternatively, while creating a test case in TestDirector, we can specify whether the script is automated or manual.

Q: Is there any possibility to restrict duplication of defects being created in TD?
There is no way. The only thing we can do is find the similar defects and delete or close them.

Q: What is Quality Center( Test Director)?
You can use Quality Center to create a project (central repository) of manual and automated tests and components, build test cycles, run tests and components, and report and track defects. You can also create reports and graphs to help you review the progress of test planning, runs, and defect tracking before a software release. When you work in QuickTest, you can create and save tests and components directly to your Quality Center project. You can run QuickTest tests or components from Quality Center and then use Quality Center to review and manage the results. You can also use Quality Center with Business Process Testing support to create business process tests, comprised of the components you create in either QuickTest or Quality Center with Business Process Testing support.

Q: After creating the test cases in excel and exported to TD. How does test director know the headings?
To export test cases from a spreadsheet to TD there are 8 steps. In the 6th step we need to map the TD fields to the corresponding spreadsheet columns. Since you do the mapping yourself, you can map according to your specifications.

Q: How to use TestDirect like a Dashboard?
The new version of TD (TestDirector for Quality Center) should provide you with that. If you do not want to upgrade, you have to design your own "start page", include the apps and bits you want to display, and use code to extract data from TD.
Q: Can you retrieve a test case once you have deleted them in Test Director ?
In Quality Center, if you delete an entire folder with tests, the tests get stored in the Unattached folder. But if you delete 'individual' tests, they are GONE and can't be retrieved. Not sure if TestDirector has the same behaviour/functionality.
There are 2 options in QC: one is remove and another is delete. The difference is that once a test is removed, it is removed from the test set but is still available in the QC directory. If it is deleted, you can't retrieve it; delete removes it from the QC directory as well.

Q: How do we import testcases written in Excel to Test Director
Use the Mercury Interactive Microsoft Excel Add-in for importing test cases written in an Excel sheet.
It is available on the Add-ins page.
Select the rows in Excel which you want to upload to TD.
Then select the Export to TD option under the Tools menu.

Q: Is it necessary to learn Test Director for beginners
TestDirector is a test management tool; it is used across all major organizations and is generally used for the management of all test activities in an organization.
It is important to learn this tool, but for beginners it is enough to understand how to log defects into it and how to run tests using it.
Q: Can you please explain the procedure of connecting TestDirector in QTP?
To connect to TD from QTP follow the steps...
Open QTP ==> Tools ==> Select TestDirector Connection ==> In the Server Connection box enter the TD address (URL of TD) ==> Click Connect ==> In the Project Connection box enter the details: Domain, Project, User name and Password ==> Click Connect
If you want to reconnect on startup check the reconnect on startup and save password for reconnection on startup.
Then close.

Q: What are the various types of reports in TestDirector?
For each and every phase we can get reports: for requirements, test cases, and test runs. Some types of reports are also available, like the report summary, progress report and requirements coverage report.
Every TestDirector client tool has an Analysis menu in the menu bar. Using this menu you can create reports in table format and generate graphs; various graph options are supported, and you can create various types of charts too.

Q: TD (Quality Center 9.0) how can you run automated test cases?
While designing your test steps in QC for automation tests in the Test Plan module, a Test Script tab is available. You can generate the script here or copy it from your automation tool. While running your tests, it will ask on which host you want to run; you need to select a system in your network, then run it. Before running your script on a system, the automation tool, like WinRunner, must be installed on that system; otherwise you will get an error.
Q: Can we add user defined fields to Test Director?
Yes. We can add user-defined fields using TD 8.0, but you need admin privileges to do this.

Q: How do we attach Excel sheet with test director?
This function is for getting a datatable (Excel sheet) into TestDirector.
Save it as a .vbs file and call this function to get your datatable.

GetAttachment(FileName, OutPath)
FileName – the name of the attachment that needs to be copied
OutPath – the folder location where the file needs to be stored
Return value – the full path where the file has been copied on the local file system

Example:
FilePath = GetAttachment("test.pdf", "C:\")
MsgBox "Your file is here:" & FilePath

The GetAttachmentFromTest finds the attachment associated to the given test name and stores it in a local folder.

GetAttachmentFromTest(TestName, FileName, OutPath)

TestName – the name of the test where the attachment is located
FileName – the name of the attachment that needs to be copied
OutPath – the folder location where the file needs to be stored
Return value – the full path where the file has been copied on the local file system

Example:
FilePath = GetAttachmentFromTest("Attachment", "hello.vbs", "C:\aa")
MsgBox "Your file is here:" & FilePath

Q: What is the use of Test Lab in Test director?
Test Lab can be used to create a test set. You can add one or many test cases into a test set. Then run all test cases in a test set together and change the status as pass/fail.
Q: Can we map the Defects directly to the requirements (not thru the test cases) in the Test Director?
Yes.
Create your requirements structure.
Create the test case structure and the test cases.
Map the test cases to the appropriate requirements.
Run and report bugs from your test cases in the Test Lab module.

The database structure in TD maps a test case to defects only if you have created the bug from the appropriate test case. Maybe you can update the mapping by using some code in the bug script module (from the Customize Project function); as far as I know, it's not possible to map defects directly to a requirement.

Q: How do I run reports from Test Director?
This is how you do it:
1. Open the test director project.
2. Display the requirements module.
3. Choose the report:
Analysis > Reports > Standard Requirements Report.

Q: Can we export the files from Test director to Excel Sheet? If yes then how?
Design tab – right-click -> Save As -> select Excel and save it.
Requirements tab – right-click on the main requirement, click Export, save as Word, Excel or another template. This will save all the child requirements.
Test Plan tab – only individual tests can be exported; no parent-child export is possible. Select a test script, click on the Design Steps tab, right-click anywhere on the open window, click Export and Save As.
Test Lab tab – select a child group. Click on the Execution Grid if it is not selected. Right-click anywhere; the default save option is Excel, but it can also be saved in doc and other formats. Select the 'all' or 'selected' option.
Defects tab – right-click anywhere on the window, export all or 'selected' defects, and save as an Excel sheet or document.
Q: Can we upload test cases from an excel sheet into Test Director?
Yes, you can do that. Go to the Add-ins menu in TestDirector, find the Excel add-in, and install it on your machine. Now open Excel; you will find the new menu option Export to TestDirector. The rest of the procedure is self-explanatory.

Q: How can we map a single defect to two test scripts? Is there a way in test director so that we can state that defect defect X is same for test script A and test script B?
No way. When you run a script, you find and generate a defect report; in other words, every defect report is unique to a single test script.

Q: How can we create our own Defect Template from Test Director? Is it possible in Test Director? If possible how we can Create our Own Template?
You cannot create your own template for defect reporting in TestDirector, but you can customize the existing template in TestDirector.

Q: How can we export multiple test cases from TD in a single go?
Open any test and click on the Design Steps tab. Once it opens, you can right-click on the first cell and export into any format.
Q: How to customize the reports generated?
This depends a lot on what you are interested in reporting on. You have to combine SQL and VB script to extract data from TD to Excel.
It is also possible to customize the standard reports available from the Analysis tab; these are written in XML, if you are familiar with that language.
If you log in to Mercury support you will be able to find a lot of code examples.

Q: How many tabs in TestDirector and explain them?
There are 4 tabs available in TestDirector:
1. Requirements -> to track the customer requirements
2. Test Plan -> to design the test cases & to store the test scripts
3. Test Lab -> to execute the test sets & track the results
4. Defects -> to log a defect & to track the logged defects

Q: How to map requirements with testcases in TestDirector?
1. In requirements TAB select coverage view.
2. Select requirement by clicking on Parent/Child or grand Child.
3. On the right hand side (in the Coverage View window) another window will appear. It has two tabs: (a) Tests Coverage (b) Details. The Tests Coverage tab will be selected by default, or you can click it.
4. Click on the Select Tests button. A new window will appear on the right hand side and you will see a list of all tests. You can select any test case you want to map to your requirement.
Q: How to use TestDirector in real time projects?
Once the preparation of the test cases is complete:
1. Export the test cases into TestDirector (this takes a total of 8 steps).
2. The test cases will be loaded in the Test Plan module.
3. Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module.
4. In the Test Lab, we execute the test cases and mark them as pass, fail or incomplete. We generate the graphs in the Test Lab for the daily report and send it onsite (wherever you want to deliver it).
5. If you find any defects, raise the defect in the Defects module. When raising the defect, attach a screenshot of the defect.

Q: How can we add requirements to test cases in Test Director?
You can add requirements to test cases in two ways: either from the Requirements tab or the Test Plan tab.
Navigate to the appropriate requirement and right-click; you will find the menu to map the test case. The reverse is available in the Test Plan tab.

Q: What does Test Grid contains ?
The Test Grid displays all the tests in a TestDirector project.
The Test Grid contains the following key elements:
Test Grid toolbar, with buttons for commands commonly used when creating and modifying the Test Grid.
Grid filter, displaying the filter that is currently applied to a column.
Description tab, displaying a description of the selected test in the Test Grid.
History tab, displaying the changes made to a test. For each change, the grid displays the field name, date of the change, name of the person who made the change, and the new value.
Q: How to generate the graphs in Test Director ?
An1:
Open TestDirector and then click Analysis; you will find three types of graphs:
Planning Progress Graphs
Planning Summary Graphs
Defect Age Graph
Click any one of them and you can generate the graphs.


An2:
To generate graphs in the Test Lab module of TestDirector:
1. Analysis
2. Graph
3. Graph Wizard
4. Select the graph type as Summary and click the Next button.
5. Select Show Current Tests and click the Next button.
6. Select Define a New Filter and click the Filter button.
7. Select the test set and click the OK button.
8. Select Plan: Subject and click the OK button.
9. Select Plan: Status.
10. Select the test set as the X-axis.
11. Click the Finish button.

Q: What is the difference between Master test plan and test plan??
The master test plan is the document in which each and every functional point is validated.
The test case document contains test cases; a test case is written from the perspective that makes the probability of finding a defect higher.

Q: What is the main purpose of storing requirements in Test Director?
In TestDirector (Requirements tab) we store our project requirement documents according to the modules or functionality of the application. This helps us make sure that all requirements are covered when we trace developed test cases/test scripts to the requirements.
This helps the QA Manager review to what extent the requirements are covered.
Q: What are the 3 views and what is the purpose of each view?
The 3 views of requirements are:
1) Document View – a tabulated view.
2) Coverage View – establishes a relationship between a requirement and the tests associated with it, along with their execution status. Mostly the requirements are written in this view.
3) Coverage Analysis View – shows a chart with the requirements associated with the tests, and the execution status of the tests.

Q: How many types of reports can be generated using TestDirector?
Reports on TestDirector display information about test requirements, the test plan, test runs, and defect tracking. Reports can be generated from each TestDirector module using the default settings, or you can customize them. When customizing a report, you can apply filters and sort conditions, and determine the layout of the fields in the report. You can further customize the report by adding sub-reports. You can save the settings of your reports as favorite views and reload them as needed.

Q: How will you generate the defect ID in test director? Is it generated automatically or not?
The Defect ID will be generated automatically after the submission of the defect..
Q: How do you ensure that there are no duplication of bugs in Test Director?
In the defect tracking window, at the top we can see the Find Similar Defect icon. If we click it after writing our defect, it will tell us whether any tester has already added a similar defect; otherwise we can add ours.

Q: Difference between WinRunner and Test Director?
WinRunner: It is an automation testing tool, used to automate manually written test cases into test scripts, and also for regression testing. TestDirector: It is a test management tool, used for creating the test plan, preparing test cases, executing test cases and generating defect reports; it is also used for maintaining test scripts.

Q: How to add Test ID to TestPlan?
Create an object with type = Number. Name it something like "Test_ID" in the Customize Entities area. Then go into the Workflow Script Editor to "TestPlan module script/TestPlan_Test_MoveTo" and insert the following:

if Test_Fields.Field("Your Object Name").Value <> Test_Fields.Field("TS_TEST_ID").Value then
    Test_Fields.Field("Your Object Name").Value = Test_Fields.Field("TS_TEST_ID").Value
end if

This will put an object on each test that displays the Test ID number.

More QTP Interview Questions

1. Full form of QTP ?

Quick Test Professional

2. What's the QTP ?

QTP is Mercury Interactive's functional testing tool.

3. Which scripting language used by QTP ?

QTP uses VBScript.

4. What's the basic concept of QTP ?

QTP is based on two concepts-
* Recording
* Playback

5. How many types of recording facility are available in QTP ?

QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

6. How many types of Parameters are available in QTP ?

QTP provides three types of Parameter-
* Method Argument
* Data Driven
* Dynamic

7. What's the QTP testing process ?

The QTP testing process consists of seven steps-
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

8. What's the Active Screen ?

It provides snapshots of your application as it appeared when you performed certain steps during the recording session.

9. What's the Test Pane ?

Test Pane contains Tree View and Expert View tabs.

10. What's Data Table ?

It assists you in parameterizing the test.

11. What's the Test Tree ?

It provides a graphical representation of the operations you have performed on your application.

12. Which all environment QTP supports ?

ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL

13. How can you view the Test Tree ?

The Test Tree is displayed through Tree View tab.

14. What's the Expert View ?

Expert View displays the test script.

15. Which shortcut key is used for Normal Recording ?

F3

16. Which shortcut key is used to run the test script ?

F5

17. Which shortcut key is used to stop the recording ?

F4

18. Which shortcut key is used for Analog Recording ?

Ctrl+Shift+F4

19. Which shortcut key is used for Low Level Recording ?

Ctrl+Shift+F3

20. Which shortcut key is used to switch between Tree View and Expert View ?

Ctrl+Tab

21. What's the Transaction ?

You can measure how long it takes to run a section of your test by defining transactions.

22. Where you can view the results of the checkpoint ?

You can view the results of the checkpoints in the Test Result Window.

23. What's the Standard Checkpoint ?

Standard Checkpoints check the property values of an object in your application or web page.

24. Which environments are supported by the Standard Checkpoint?

Standard Checkpoints are supported in all add-in environments.

25. What's the Image Checkpoint?

An Image Checkpoint checks the value of an image in your application or web page.

26. Which environments are supported by the Image Checkpoint?

Image Checkpoints are supported only in the Web environment.

27. What's the Bitmap Checkpoint?

A Bitmap Checkpoint checks bitmap images in your web page or application.

28. Which environments are supported by Bitmap Checkpoints?

Bitmap Checkpoints are supported in all add-in environments.

29. What's the Table Checkpoint?

A Table Checkpoint checks the information within a table.

30. Which environments are supported by the Table Checkpoint?

Table Checkpoints are supported only in the ActiveX environment.

31. What's the Text Checkpoint?

A Text Checkpoint checks that a text string is displayed in the appropriate place in your application or web page.

32. Which environments are supported by the Text Checkpoint?

Text Checkpoints are supported in all add-in environments.

Monday, August 2, 2010

Severity levels

Severity level:

The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level requiring immediate attention. Severity 5 usually represents a documentation defect of minimal impact.



Severity levels:

* High: A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot continue.
* Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and testing can continue.
* Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.


Severity and Priority:

Priority is Relative: the priority might change over time. Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the schedule draws closer to the release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the current schedule. It’s relative. It shifts over time. And it’s a business decision.

Severity is an absolute: it’s an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high severity issue when I entered it, it’s still a high severity issue when it’s deferred to the next release. The severity hasn’t changed just because we’ve run out of time. The priority changed.

Severity Levels can be defined as follow:

S1 - Urgent/Showstopper. For example, a system crash or an error message that forces the window to close.
The tester's ability to operate the system is totally (System Down), or almost totally, affected. A major area of the user's system is affected by the incident, and it is significant to business processes.

S2 - Medium/Workaround. A problem exists that deviates from the specs, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround that negates the impact on the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs at only one or two customers, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way; they are cosmetic in nature and have no or very low impact on business processes.

What is stub?

A stub is a dummy program or component that stands in for code that is not yet ready for testing. For example, if a project has four modules and the fourth is incomplete with no time left to finish it, a dummy program is used in place of that fourth module so that all four modules can still be run together. This dummy program is known as a stub.
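
As a sketch in VBScript (the module and function names are hypothetical), a stub for the unfinished fourth module might simply return a canned result so the other three modules can still be exercised:

```vbscript
' Stub for the unfinished payment module: it performs no real
' processing and just returns a fixed result, so the modules
' that call it can be run and tested end to end.
Function ProcessPayment(amount)
    ProcessPayment = "APPROVED"
End Function
```

Once the real module is ready, the stub is replaced and the same callers are retested against the actual implementation.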