Monday, October 25, 2010

What is the difference between client-server and Web testing?

In a client-server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine. You test broadly in categories such as GUI on both sides, functionality, load, client-server interaction, and the backend. This environment is mostly used on intranet networks, so you know the number of clients and servers and their locations in the test scenario.

A Web application is a bit different and more complex to test, as the tester has far less control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers. Web applications are expected to work across browsers and OS platforms, so a Web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing and load testing.

TestDirector FAQs

Q: What is TestDirector?

TestDirector is a test management tool produced by Mercury Interactive. Its four modules - Requirements, Test Plan, Test Lab and Defects Manager - are integrated to enable information to flow smoothly between different stages of the testing process. Completely Web-enabled, TestDirector supports communication and collaboration among distributed testing teams.
TestDirector has been classified in the following categories:
Defect Tracking
Testing and Analysis
Debugging
Automated Software Quality (ASQ)

Q: What is the use of Test Director software?


TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects.
TestDirector is not used for executing any actual test activity; it is a test management tool used for Requirements Management, Test Plan, Test Lab, and Defects Management. Even if the individual test cases are not automated, TestDirector can make life much easier during the test cycles.

Q: How you integrated your automated scripts with TestDirector?

When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector you can specify whether the script is automated or manual. If it is an automated script, TestDirector can launch the associated tool (for example, WinRunner) to run it and store the results back in the project.

Q: Is there any possibility to restrict duplication of defects being created in TD?

There is no built-in way to prevent it. The only thing we can do is find the similar defects and delete or close them.

Q: What is Quality Center (TestDirector)?


You can use Quality Center to create a project (central repository) of manual and automated tests and components, build test cycles, run tests and components, and report and track defects. You can also create reports and graphs to help you review the progress of test planning, runs, and defect tracking before a software release. When you work in QuickTest, you can create and save tests and components directly to your Quality Center project. You can run QuickTest tests or components from Quality Center and then use Quality Center to review and manage the results. You can also use Quality Center with Business Process Testing support to create business process tests, comprised of the components you create in either QuickTest or Quality Center with Business Process Testing support.

Q: After creating test cases in Excel and exporting them to TD, how does TestDirector know the headings?

Exporting test cases from a spreadsheet to TD takes eight steps. In the sixth step we map the TD fields to the corresponding spreadsheet columns. Since you define the mapping yourself, you can map the headings according to your own specifications.

Q: How to use TestDirector like a dashboard?

The new version of TD (TestDirector for Quality Center) should provide you with that. If you do not want to upgrade, you have to design your own "start page", include the apps and bits you want to display, and use code to extract the data from TD.

Q: Can you retrieve a test case once you have deleted it in Test Director?


In Quality Center, if you delete an entire folder with tests, the tests are stored in the Unattached folder. But if you delete individual tests, they are gone and cannot be retrieved. It is not certain whether TestDirector has the same behaviour.
There are two options in QC: Remove and Delete. The difference is that once a test is removed, it disappears from the test set but is still available in the QC directory. If it is deleted, it is removed from the QC directory as well and cannot be retrieved.

Q: How do we import test cases written in Excel to Test Director?

Use the Mercury Interactive Microsoft Excel Add-in for importing test cases written in an Excel sheet. It is available on the Add-ins page. Select the rows in Excel which you want to upload to TD, then select the Export to TD option under the Tools menu.

Q: Is it necessary for beginners to learn Test Director?

TestDirector is a test management tool; it is used across all major organizations and is generally used for managing all test activities in an organization.
It is important to learn this tool, but for beginners it is enough to understand how to log defects into it and how to run tests using it.

Q: Can you please explain the procedure for connecting to TestDirector from QTP?


To connect to TD from QTP, follow these steps:
Open QTP ==> Tools ==> Select TestDirector Connection ==> In the Server Connection box enter the TD address (URL of TD) ==> Click Connect ==> In the Project Connection box enter the Domain, Project, User name and Password ==> Click Connect.
If you want to reconnect on startup, check Reconnect on startup and Save password for reconnection on startup.
Then close the dialog.
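The same connection can also be scripted through the QTP automation object model rather than the Tools menu. The following VBScript is only a sketch: the server URL, domain, project and credentials are placeholder values, and the exact argument list of TDConnection.Connect should be checked against your QTP version's Automation Object Model reference.

' Sketch: connect QTP to a TestDirector server via the automation object model.
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch
qtApp.Visible = True
' Placeholder server/domain/project/credentials - replace with your own values.
qtApp.TDConnection.Connect "http://tdserver/tdbin", "DEFAULT", "MyProject", "tduser", "tdpassword", False
If qtApp.TDConnection.IsConnected Then
    MsgBox "Connected to TestDirector"
End If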

Q: What are the various types of reports in TestDirector?

For each and every phase we can get reports: for requirements, test cases, and test runs. Several report types are also available, such as the summary report, the progress report and the requirements coverage report.
Every TestDirector client tool has an Analysis menu in its menu bar. Using this menu you can create reports in table format and generate graphs; the common graph and chart types are all supported.

Q: In TD (Quality Center 9.0), how can you run automated test cases?


While designing your test steps in QC for automated tests in the Test Plan module, a Test Script tab is available. You can generate the script there or copy it from your automation tool. When you run your tests, QC asks on which host you want to run them; select a system in your network and run. Before running your script on a system, the automation tool, such as WinRunner, must be installed on that system; otherwise you will get an error.
Q: Can we add user defined fields to Test Director?

Yes. We can add user defined fields using TD 8.0, but you need admin privileges to do this.

Q: How do we attach Excel sheet with test director?

This function is for getting a data table (Excel sheet) attachment out of TestDirector. Save it in a .vbs file and call it to get your data table.
GetAttachment(FileName, OutPath)
FileName - the name of the attachment that needs to be copied
OutPath - the folder location where the file needs to be stored
Return value - the full path where the file has been copied on the local file system

Example:
FilePath = GetAttachment("test.pdf", "C:\")
MsgBox "Your file is here:" & FilePath

The GetAttachmentFromTest function finds the attachment associated with the given test name and stores it in a local folder.

GetAttachmentFromTest(TestName, FileName, OutPath)

TestName - the name of the test where the attachment is located
FileName - the name of the attachment that needs to be copied
OutPath - the folder location where the file needs to be stored
Return value - the full path where the file has been copied on the local file system

Example:
FilePath = GetAttachmentFromTest("Attachment", "hello.vbs", "C:\aa")
MsgBox "Your file is here:" & FilePath
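Once the attachment has been copied locally, a QTP test can load it straight into the run-time data table. A minimal sketch, assuming the helper functions above are in scope and the attachment is an Excel workbook; the test name, file name and folder are hypothetical, while DataTable.Import is the standard QTP call for loading an Excel file into the run-time data table.

' Fetch the Excel attachment from a test and load it into the run-time data table.
Dim FilePath
FilePath = GetAttachmentFromTest("LoginTest", "testdata.xls", "C:\temp")
DataTable.Import FilePath   ' replaces the current run-time data table contents
MsgBox "Imported data table from: " & FilePath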

Q: What is the use of the Test Lab in Test Director?

Test Lab can be used to create test sets. You can add one or many test cases into a test set, then run all the test cases in the set together and set the status of each as pass/fail.

Q: Can we map the defects directly to the requirements (not through the test cases) in Test Director?

Not directly, as far as I know; you link them through test cases:
Create your requirements structure.
Create the test case structure and the test cases.
Map the test cases to the appropriate requirements.
Run and report bugs from your test cases in the Test Lab module.

The TD database maps a test case to a defect only if you created the bug from the corresponding test case. You may be able to adjust the mapping with some code in the bug script module (from the Customize Project function), but it is not possible to map defects directly to a requirement.

Q: How do I run reports from Test Director?

This is how you do it:
1. Open the TestDirector project.
2. Display the Requirements module.
3. Choose the report:
Analysis > Reports > Standard Requirements Report.

Q: Can we export the files from Test director to Excel Sheet? If yes then how?

Design tab -- right click -> Save As -> select Excel and save.
Requirements tab -- right click on the main requirement, click Export, and save as Word, Excel or another template. This saves all the child requirements as well.
Test Plan tab -- only individual tests can be exported; no parent-child export is possible. Select a test script, click the Design Steps tab, right click anywhere on the open window, then click Export and save.
Test Lab tab -- select a child group and click the Execution Grid if it is not selected. Right click anywhere; the default save option is Excel, but it can be saved in doc and other formats. Choose the 'all' or 'selected' option.
Defects tab -- right click anywhere on the window, export 'all' or 'selected' defects, and save as an Excel sheet or document.

Q: Can we upload test cases from an excel sheet into Test Director?

Yes, you can do that. Go to the Add-ins menu in TestDirector, find the Excel add-in, and install it on your machine. When you open Excel you will find the new menu option Export to TestDirector. The rest of the procedure is self-explanatory.

Q: How can we map a single defect to two test scripts? Is there a way in Test Director to state that defect X is the same for test script A and test script B?

There is no way. When you run a script and find a defect, you generate a defect report from that run; every defect report is therefore tied to a single test script.

Q: How can we create our own defect template in Test Director? Is it possible, and if so, how?

You cannot create your own template for defect reporting in Test Director, but you can customize the existing template.

Q: How can we export multiple test cases from TD in a single go?

Open any test and click the Design Steps tab. Once it opens, you can right click on the first cell and export to any format.

Q: How to customize the reports generated?


This depends a lot on what you are interested in reporting on. You can combine SQL and VB script to extract data from TD to Excel.
It is also possible to customize the standard reports available from the Analysis tab; these are defined in XML, if you are familiar with that language.
If you log in to Mercury support you will be able to find a lot of code examples.

Q: How many tabs are there in TestDirector? Explain them.


There are four tabs available in TestDirector:
1. Requirements -> to track the customer requirements
2. Test Plan -> to design the test cases and to store the test scripts
3. Test Lab -> to execute the test sets and track the results
4. Defects -> to log a defect and to track the logged defects

Q: How to map requirements with testcases in TestDirector?

1. In the Requirements tab, select Coverage View.
2. Select a requirement by clicking on a parent, child or grandchild.
3. On the right-hand side (in the Coverage View window) another window will appear. It has two tabs: (a) Tests Coverage and (b) Details. The Tests Coverage tab is selected by default; if not, click it.
4. Click the Select Tests button. A new window will appear on the right-hand side with a list of all tests. You can select any test case you want to map to your requirement.

Q: How to use TestDirector in real time projects?

Once the test cases have been prepared:
1. Export the test cases into Test Director (the export wizard has eight steps in total).
2. The test cases are loaded into the Test Plan module.
3. Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module.
4. In Test Lab, we execute the test cases and mark each one as pass, fail or incomplete. We generate graphs in the Test Lab for the daily report and send it onsite (or wherever it needs to be delivered).
5. If you find any defects, raise them in the Defects module, attaching a screenshot to each defect.

Q: How can we add requirements to test cases in Test Director?


You can add requirements to test cases in two ways: from the Requirements tab or from the Test Plan tab.
Navigate to the appropriate requirement and right click; you will find the menu to map the test case. The reverse is available in the Test Plan tab.

Q: What does the Test Grid contain?

The Test Grid displays all the tests in a TestDirector project.
The Test Grid contains the following key elements:
Test Grid toolbar, with buttons for commands commonly used when creating and modifying the Test Grid.
Grid filter, displaying the filter that is currently applied to a column.
Description tab, displaying a description of the selected test in the Test Grid.
History tab, displaying the changes made to a test. For each change, the grid displays the field name, date of the change, name of the person who made the change, and the new value.

Q: How to generate graphs in Test Director?

Answer 1:

Open Test Director and click Analysis; you will find three types of graphs:
Planning Progress Graphs
Planning Summary Graphs
Defect Age Graph
Click any one of them to generate the graph.


Answer 2:
To generate graphs from the Test Lab module:
1. Analysis
2. Graph
3. Graph Wizard
4. Select the graph type as Summary and click the Next button.
5. Select Show current tests and click the Next button.
6. Select Define a new filter and click the Filter button.
7. Select the test set and click the OK button.
8. Select Plan: Subject and click the OK button.
9. Select Plan: Status.
10. Select the test set as the X-axis.
11. Click the Finish button.

Q: What is the difference between Master test plan and test plan?


The master test plan is the document in which each and every functional point is validated.
The test case document contains the test cases; a test case is written from the perspective that maximizes the probability of finding a defect.

Q: What is the main purpose of storing requirements in Test Director?

In TestDirector (Requirements tab) we store our project requirement documents according to the modules or functionality of the application. This helps us make sure that all requirements are covered when we trace developed test cases/test scripts to the requirements, and it helps the QA manager review to what extent the requirements are covered.

Q: What are the 3 views and what is the purpose of each view?


The 3 views of requirements are:
1) Document View - a tabulated view.
2) Coverage View - establishes a relationship between requirements and the tests associated with them, along with their execution status. Mostly the requirements are written in this view.
3) Coverage Analysis View - shows a chart with the requirements associated with the tests, and the execution status of those tests.

Q: How many types of reports can be generated using TestDirector?

Reports on TestDirector display information about test requirements, the test plan, test runs, and defect tracking. Reports can be generated from each TestDirector module using the default settings, or you can customize them. When customizing a report, you can apply filters and sort conditions, and determine the layout of the fields in the report. You can further customize the report by adding sub-reports. You can save the settings of your reports as favorite views and reload them as needed.

Q: How will you generate the defect ID in test director? Is it generated automatically or not?

The defect ID is generated automatically after the submission of the defect.

Q: How do you ensure that there are no duplication of bugs in Test Director?

In the defect tracking window there is a Find Similar Defects icon at the top. If we click it after writing our defect, it will tell us whether any tester has already added a similar defect; if not, we can add ours.

Q: Difference between WinRunner and Test Director?


WinRunner: an automation testing tool, used to automate manually written test cases into test scripts and for regression testing. Test Director: a test management tool, used for creating the test plan, preparing test cases, executing test cases and generating defect reports, and also for maintaining test scripts.

Q: How to add Test ID to TestPlan?

Create a user-defined field of type Number and name it something like "Test_ID" in the Customize Entities area. Then go into the Workflow Script Editor, open "TestPlan module script/TestPlan_Test_MoveTo", and insert the following:

If Test_Fields.Field("Your Object Name").Value <> Test_Fields.Field("TS_TEST_ID").Value Then
    Test_Fields.Field("Your Object Name").Value = Test_Fields.Field("TS_TEST_ID").Value
End If

This will put a field on each test that displays the test ID number.

Tuesday, October 12, 2010

Android Glossary

.apk extension

The extension for an Android package file, which typically contains all of the files related to a single Android application. The file itself is a compressed collection of an AndroidManifest.xml file, application code (.dex files), resource files, and other files. A project is compiled into a single .apk file.

.dex extension

Android programs are compiled into .dex (Dalvik Executable) files, which are in turn zipped into a single .apk file on the device. .dex files can be created by automatically translating compiled applications written in the Java programming language.

Action

A description of something that an Intent sender wants done. An action is a string value assigned to an Intent. Action strings can be defined by Android or by a third-party developer. For example, android.intent.action.VIEW for a Web URL, or com.example.rumbler.SHAKE_PHONE for a custom application to vibrate the phone.

Activity

A single screen in an application, with supporting Java code, derived from the Activity class.

adb

Android Debug Bridge, a command-line debugging application shipped with the SDK. It provides tools to browse the device, copy files to the device, and forward ports for debugging. See Using adb for more information.

Application

A collection of one or more activities, services, listeners, and intent receivers. An application has a single manifest, and is compiled into a single .apk file on the device.

Content Provider

A class built on ContentProvider that handles content query strings of a specific format to return data in a specific format. See Reading and writing data to a content provider for information on using content providers.

Content URI

A type of URI. See the URI entry.

Dalvik

The name of Android's virtual machine. The Dalvik VM is an interpreter-only virtual machine that executes files in the Dalvik Executable (.dex) format, a format that is optimized for efficient storage and memory-mappable execution. The virtual machine is register-based, and it can run classes compiled by a Java language compiler that have been transformed into its native format using the included "dx" tool. The VM runs on top of Posix-compliant operating systems, which it relies on for underlying functionality (such as threading and low level memory management). The Dalvik core class library is intended to provide a familiar development base for those used to programming with Java Standard Edition, but it is geared specifically to the needs of a small mobile device.

DDMS

Dalvik Debug Monitor Service, a GUI debugging application shipped with the SDK. It provides screen capture, log dump, and process examination capabilities. See Using the Dalvik Debug Monitor Server to learn more about this program.

Drawable

A compiled visual resource that can be used as a background, title, or other part of the screen. It is compiled into an android.graphics.drawable subclass.

Intent

A class (Intent) that contains several fields describing what a caller would like to do. The caller sends this intent to Android's intent resolver, which looks through the intent filters of all applications to find the activity most suited to handle this intent. Intent fields include the desired action, a category, a data string, the MIME type of the data, a handling class, and other restrictions.

Intent Filter

Activities and intent receivers include one or more filters in their manifest to describe what kinds of intents or messages they can handle or want to receive. An intent filter lists a set of requirements, such as data type, action requested, and URI format, that the Intent or message must fulfill. For Activities, Android searches for the Activity whose intent filter most closely matches the Intent; for messages, Android forwards the message to all receivers with matching intent filters.

Intent Receiver

An application class that listens for messages broadcast by calling Context.broadcastIntent(). For example code, see Listening for and broadcasting global messages.

Layout resource

An XML file that describes the layout of an Activity screen.

Manifest

An XML file associated with each Application that describes the various activities, intent filters, services, and other items that it exposes. See AndroidManifest.xml File Details.

Nine-patch / 9-patch / Ninepatch image

A resizeable bitmap resource that can be used for backgrounds or other images on the device. See Nine-Patch Stretchable Image for more information.

Query String

A type of URI. See the URI entry.

Resource

A user-supplied XML, bitmap, or other file, entered into an application build process, which can later be loaded from code. Android can accept resources of many types; see Resources for a full description. Application-defined resources should be stored in the res/ subfolders.

Service

A class that runs in the background to perform various persistent actions, such as playing music or monitoring network activity.

Theme

A set of properties (text size, background color, and so on) bundled together to define various default display settings. Android provides a few standard themes, listed in R.style (starting with "Theme_").

URIs

Android uses URI strings both for requesting data (e.g., a list of contacts) and for requesting actions (e.g., opening a Web page in a browser). Both are valid URI strings, but have different values. All requests for data must start with the string "content://". Action strings are valid URIs that can be handled appropriately by applications on the device; for example, a URI starting with "http://" will be handled by the browser.

Monday, August 30, 2010

SCENARIO TESTING

Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program and easy to evaluate for the tester. They provide meaningful combinations of functions and variables rather than the more artificial combinations you get with domain testing or combinatorial test design.

VOLUME TESTING

Volume testing is done against the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limits of the system.

Volume Testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction-processing systems capturing real-time sales, or systems performing database updates and/or data retrieval.

Volume testing will seek to verify the physical and logical limits to a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization’s business processing.

RECOVERY TESTING

Recovery testing is basically done to check how quickly and how well the application can recover from any type of crash, hardware failure, etc. The type and extent of recovery is specified in the requirement specifications. It is essentially testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

EXPLORATORY TESTING

This testing is similar to the ad-hoc testing and is done in order to learn/explore the application.

Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. At least unconsciously, testers perform exploratory testing at one time or another. Yet it doesn't get much respect in our field. It can be considered as "scientific thinking" in real time.

Friday, August 27, 2010

Bug Reporting

Introduction:

As testers, we all agree that the basic aim of a tester is to discover bugs. Whenever a build arrives for testing, the primary objective is to find as many bugs as possible from every corner of the application. To accomplish this task to perfection, we perform testing from various perspectives. We strain the application through various kinds of strainers: boundary value analysis, validation checks, verification checks, GUI, interoperability, integration tests, functional business checks, backend testing (such as running SQL commands against the database, or injections), security tests, and many more. This makes us drill deep into the application as well as the business.

We would all agree that bug awareness is of no use until it is well documented. Here comes the role of BUG REPORTS. Bug reports are our primary work product; they are what people outside the testing group notice. These reports play an important role throughout the Software Development Life Cycle, as they are referenced by testers, developers, managers, executives, and, not to forget, the clients who these days ask for test reports. So, the bug reports are what is remembered the most.

Once bugs are reported by the testers and submitted to the developers to work on, we often see confrontations: there are humiliations that testers sometimes face, there are cold wars, and discussions can take the shape of mini quarrels. Yet at times testers and developers are actually saying the same thing, or both are correct, and only the depiction of their understanding differs, and that makes all the difference. In such situations we come to the realization that the best tester is not the one who finds the most bugs or embarrasses the most programmers, but the one who gets the most bugs fixed.

Bug Reporting – An Art:

The first aim of the bug report is to let the programmer see the failure. The bug report gives a detailed description so that the programmer can make the bug fail for them. If the bug report does not accomplish this mission, there can be push-back from the development team: not a bug, cannot reproduce, and many other reasons.

Hence it is important that the BUG REPORT be prepared by the testers with utmost proficiency and specificity. It should basically describe the famous 3 What's, well described as:

What we did:

* Module, Page/Window – names that we navigate to
* Test data entered and selected
* Buttons and the order of clicking

What we saw:

* GUI Flaws
* Missing or No Validations
* Error messages
* Incorrect Navigations

What we expected to see:

* GUI Flaw: give screenshots with highlight
* Incorrect message – give correct language, message
* Validations – give correct validations
* Error messages – justify with screenshots
* Navigations – mention the actual pages

Pointers to effective reporting can be well derived from above three What's. These are:

1. The BUG DESCRIPTION should be clearly identifiable - a bug description is a short statement that briefly describes what exactly the problem is. A problem might require 5-6 steps to reproduce, but this statement should still clearly identify what exactly the problem is. The problem might be a server error, but the description should be specific, for example: Server error occurs while saving a new record in the Add Contact window.

2. Bug should be reported after building a proper context – PRE-CONDITIONS for reproducing the bug should be defined so as to reach the exact point where bug can be reproduced. For example: If a server error appears while editing a record in the contacts list, then it should be well defined as a pre-condition to create a new contact and save successfully. Double click this created contact from the contacts list to open the contact details – make changes and hit save button.

3. STEPS should be clear with short and meaningful sentences – nobody would wish to study the entire paragraph of long complex words and sentences. Make your report step wise by numbering 1,2,3…Make each sentence small and clear. Only write those findings or observations which are necessary for this respective bug. Writing facts that are already known or something which does not help in reproducing a bug makes the report unnecessarily complex and lengthy.

4. Cite examples wherever necessary - combinations of values, test data: most of the time a bug can be reproduced only with a specific set of data or values. Hence, instead of writing an ambiguous statement like enter an invalid phone number and hit save, one should mention the data/value entered, for example: enter the phone number as 012aaa@$%.- and save.

5. Give references to specifications – If any bug arises that is a contradictive to the SRS or any functional document of the project for that matter then it is always proactive to mention the section, page number for reference. For example: Refer page 14 of SRS section 2-14.

6. Report without passing any kind of judgment in the bug description - the bug report should not be judgmental in any case, as this leads to controversy and gives a bossy impression. Remember, a tester should always be polite so as to keep his bug report meaningful and likely to be acted upon. Being judgmental makes developers think the testers claim to know more than they do and, as a result, gives birth to a psychological adversity. To avoid this, we can use the word suggestion, and discuss it with the developers or the team lead. We can also refer to another application, module or page in the same application to strengthen our point.

7. Assign severity and priority – SEVERITY is the state or quality of being severe. Severity tells us HOW BAD the BUG is. It defines the importance of BUG from FUNCTIONALITY point of view and implies adherence to rigorous standards or high principles. Severity levels can be defined as follows:

Urgent/Show-stopper: like a system crash or an error message forcing the window to close; the system stops working totally or partially. A major area of the user's system is affected by the incident, and it is significant to business processes.

Medium/Workaround: a feature required in the specs does not work, but the tester can go on with testing. It affects a more isolated piece of functionality, occurs only at one or two customers, or is intermittent.

Low: failures that are unlikely to occur in normal use. These problems do not impact use of the product in any substantive way and have no or very low impact on business processes.
Whatever the severity, state exact error messages.

PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer's point of view: precedence established by urgency, associated with scheduling the fix. Priority levels can be defined as follows:

High: This has a major impact on the customer. This must be fixed immediately.

Medium: This has a significant impact on the customer. The problem should be fixed before release of the current version in development, or a patch must be issued if possible.

Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.

Wednesday, August 25, 2010

TESTING GLOSSARY

Adhoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

Adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability.

Agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development.

Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system.

Data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.
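QTP's run-time Data Table is one concrete implementation of this technique. A minimal sketch follows; the sheet is assumed to hold one test per row with hypothetical UserName and Expected columns, and the loop body is left as a comment since the driven application is not specified.

' Data driven testing sketch: one control loop executes every row of the Global sheet.
Dim sheet, i, userName, expected
Set sheet = DataTable.GetSheet(dtGlobalSheet)
For i = 1 To sheet.GetRowCount
    sheet.SetCurrentRow i
    userName = sheet.GetParameter("UserName").Value
    expected = sheet.GetParameter("Expected").Value
    ' Drive the application with userName here and compare the outcome to expected.
Next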

Random testing: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

Wednesday, August 11, 2010

ECP & BVA

Equivalence Testing

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.


Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
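For example, if an input field accepts an integer between 1 and 100, guideline 1 gives one valid class (1 to 100) and two invalid classes (less than 1, and greater than 100); one representative value from each class, say 50, 0 and 101, covers all three classes with just three test cases.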


• Test case Design for Equivalence partitioning

1. Good test case reduces by more than one the number of other test cases which must be developed
2. Good test case covers a large set of other possible cases
3. Classes of valid inputs
4. Classes of invalid inputs


Boundary testing

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also.

BVA guidelines include:

For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
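As a small illustration of the range guideline, the VBScript sketch below generates the classic boundary candidates for a range bounded by a and b; the function name and the 1-100 range are invented for the example.

' Returns values at, just below and just above each boundary of the range [a, b].
Function BoundaryValues(a, b)
    BoundaryValues = Array(a - 1, a, a + 1, b - 1, b, b + 1)
End Function

' Example: a field accepting 1 to 100 yields the candidates 0, 1, 2, 99, 100, 101.
Dim v
For Each v In BoundaryValues(1, 100)
    ' Feed each value v into the field under test and verify the behaviour.
Next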

Wednesday, August 4, 2010


More QTP Interview Questions

1. What is the full form of QTP?

QuickTest Professional

2. What's QTP?

QTP is Mercury Interactive Functional Testing Tool.

3. Which scripting language is used by QTP?

QTP uses VBScript.

4. What's the basic concept of QTP?

QTP is based on two concepts:
* Recording
* Playback

5. How many types of recording facilities are available in QTP?

QTP provides three types of recording methods:
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

6. How many types of parameters are available in QTP?

QTP provides three types of parameters:
* Method Argument
* Data Driven
* Dynamic

7. What's the QTP testing process?

The QTP testing process consists of seven steps:
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

8. What's the Active Screen?

It provides snapshots of your application as it appeared when you performed certain steps during the recording session.

9. What's the Test Pane?

The Test Pane contains the Tree View and Expert View tabs.

10. What's the Data Table?

It assists you in parameterizing your test.

11. What's the Test Tree?

It provides a graphical representation of the operations you have performed on your application.

12. Which environments does QTP support?

ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL

13. How can you view the Test Tree?

The Test Tree is displayed in the Tree View tab.

14. What's the Expert View?

The Expert View displays the test script.

15. Which key is used for Normal Recording?

F3

16. Which key is used to run the test script?

F5

17. Which key is used to stop recording?

F4

18. Which key is used for Analog Recording?

Ctrl+Shift+F4

19. Which key is used for Low Level Recording?

Ctrl+Shift+F3

20. Which key is used to switch between Tree View and Expert View?

Ctrl+Tab

21. What's a Transaction?

You can measure how long it takes to run a section of your test by defining transactions.
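In a QTP script this is typically done with the Services object; the sketch below times a hypothetical login section (the transaction name is arbitrary).

' Measure how long the login section of the test takes to run.
Services.StartTransaction "Login"
' ... steps that perform the login go here ...
Services.EndTransaction "Login"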

22. Where can you view the results of a checkpoint?

You can view the results of checkpoints in the Test Results window.

23. What's the Standard Checkpoint?

A standard checkpoint checks the property values of an object in your application or web page.
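In the Expert View a checkpoint appears as a Check call on the object it verifies. A sketch, assuming a checkpoint named "Login Button" was created during recording and that the browser, page and button names exist in the object repository (all names here are placeholders):

' Verify the recorded property values of the login button.
Browser("MyApp").Page("Home").WebButton("Login").Check CheckPoint("Login Button")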

24. Which environments are supported by standard checkpoints?

Standard checkpoints are supported in all add-in environments.

25. What's the Image Checkpoint?

An image checkpoint checks the value of an image in your application or web page.

26. Which environments are supported by image checkpoints?

Image checkpoints are supported only in the Web environment.

27. What's the Bitmap Checkpoint?

A bitmap checkpoint checks the bitmap images in your web page or application.

28. Which environments are supported by bitmap checkpoints?

Bitmap checkpoints are supported in all add-in environments.

29. What's the Table Checkpoint?

A table checkpoint checks the information within a table.

30. Which environments are supported by table checkpoints?

Table checkpoints are supported only in the ActiveX environment.

31. What's the Text Checkpoint?

A text checkpoint checks that a text string is displayed in the appropriate place in your application or on a web page.

32. Which environments are supported by text checkpoints?

Text checkpoints are supported in all add-in environments.

Monday, August 2, 2010

Severity levels

Severity level:

The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level requiring immediate attention. Severity 5 usually represents a documentation defect of minimal impact.



Severity levels:

* High: A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot continue.
* Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and testing can continue.
* Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.


Severity and Priority:

Priority is Relative: the priority might change over time. Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the schedule draws closer to the release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the current schedule. It’s relative. It shifts over time. And it’s a business decision.

Severity is an absolute: it’s an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high severity issue when I entered it, it’s still a high severity issue when it’s deferred to the next release. The severity hasn’t changed just because we’ve run out of time. The priority changed.

Severity levels can be defined as follows:

S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the window to close.
The tester's ability to operate the system is totally (system down) or almost totally affected. A major area of the user's system is affected by the incident, and it is significant to business processes.

S2 - Medium/Workaround. For example, a feature required by the specs has a problem, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customers, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way. These are incidents that are cosmetic in nature and of no or very low impact to business processes.

What is stub?

A stub is a dummy program or component that stands in for code that is not yet ready for testing. For example, if a project has four modules and the last one is not finished in time, we write a dummy program in place of that fourth module so that all four modules can still be run together. This dummy program is known as a stub.

Saturday, July 31, 2010

How to Perform Web Testing

Web testing checklist

1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing




1) Functionality Testing:

Test for - all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookie testing.
Check all the links:
• Test the outgoing links from all the pages of the specific domain under test.
• Test all internal links.
• Test links jumping to sections of the same page.
• Test links used to send email to the admin or other users from web pages.
• Test to check if there are any orphan pages.
• Lastly in link checking, check for broken links in all the above-mentioned links (a minimal sketch of such a check follows this list).
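
Below is a minimal sketch of an automated broken-link check. It assumes Python with the third-party "requests" library installed; the start URL is a placeholder, not a real site under test.

from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

start_url = "http://example.com/"  # placeholder for the page under test
page = requests.get(start_url, timeout=10)

parser = LinkExtractor()
parser.feed(page.text)

for href in parser.links:
    url = urljoin(start_url, href)  # resolve relative links against the page
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException as exc:
        print("BROKEN", url, exc)
        continue
    if status >= 400:
        print("BROKEN", url, "HTTP", status)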

Test forms in all pages:

Forms are an integral part of any web site. Forms are used to get information from users and to interact with them. So what should be checked on these forms?
• First check all the validations on each field.
• Check the default values of fields.
• Check wrong inputs to the fields in the forms.
• If there are options to create, delete, view, or modify forms, check them too.

Let’s take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate sign-up steps. Each sign-up step is different but dependent on the other steps, so the sign-up flow should execute correctly. There are different field validations, like email IDs and user financial-info validations. All these validations should be checked in manual or automated web testing.

Cookies testing:

Cookies are small files stored on the user's machine. They are basically used to maintain sessions, mainly login sessions. Test the application by enabling and disabling cookies in your browser options. Test whether the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e., cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies. (I will soon write a separate article on cookie testing.)
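
A rough sketch of the session-cookie checks described above, assuming Python with the "requests" library; the URLs, form fields, and pages are hypothetical placeholders:

import requests

session = requests.Session()
# Log in so the server sets a session cookie (hypothetical endpoint).
session.post("http://example.com/login",
             data={"username": "tester", "password": "secret"})

for cookie in session.cookies:
    # A session cookie has no expiry; "secure" means it is sent over HTTPS only.
    print(cookie.name, "expires:", cookie.expires, "secure:", cookie.secure)

# Simulate the user deleting cookies, then confirm a protected page
# is no longer served directly.
session.cookies.clear()
response = session.get("http://example.com/account", allow_redirects=False)
assert response.status_code in (301, 302, 401, 403), \
    "protected page should not be served after cookies are deleted"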

Validate your HTML/CSS:

If you are optimizing your site for search engines then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Also check whether the site is crawlable by different search engines.
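
One way to automate this check is to post the page's HTML to the public W3C Nu checker and list the reported errors. This sketch assumes that service's JSON interface and the Python "requests" library; the page URL is a placeholder.

import requests

html = requests.get("http://example.com/").text  # placeholder page
result = requests.post(
    "https://validator.w3.org/nu/?out=json",
    data=html.encode("utf-8"),
    headers={"Content-Type": "text/html; charset=utf-8"},
).json()

for message in result.get("messages", []):
    if message.get("type") == "error":
        print(message.get("message"))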
Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete, or modify the forms or do any DB-related functionality.
Check whether all the database queries are executing correctly and data is retrieved and updated correctly. More on database testing could be the load on the DB; we will address this under web load and performance testing below.
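
A minimal sketch of one data-integrity check: apply an edit the way the application would, then read the row back and compare. It uses Python's standard sqlite3 module; the database file, table, and columns are hypothetical.

import sqlite3

conn = sqlite3.connect("app.db")  # placeholder database file
cur = conn.cursor()

# Apply the edit the application's "modify" form would make.
cur.execute("UPDATE users SET email = ? WHERE id = ?", ("new@example.com", 1))
conn.commit()

# Read the value back and verify the edit was stored correctly.
cur.execute("SELECT email FROM users WHERE id = ?", (1,))
row = cur.fetchone()
assert row is not None and row[0] == "new@example.com", \
    "edited value was not stored correctly"
conn.close()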

2) Usability Testing:

Test for navigation:

Navigation means how the user surfs the web pages, using different controls like buttons, boxes, and links on the pages to move between pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the provided instructions are correct, i.e., whether they satisfy their purpose.
The main menu should be provided on each page, and it should be consistent.

Content checking:

Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow the commonly accepted standards for web page and content building, like those I mentioned above about annoying colors, fonts, frames, etc.
Content should be meaningful. All the anchor text links should work properly. Images should be placed properly, with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of them during UI testing.

Other user information for user help:

Like the search option, sitemap, help files, etc. The sitemap should be present with all the links in the web site and a proper tree view of navigation. Check all the links on the sitemap. A “search in the site” option will help users find the content pages they are looking for easily and quickly. These are all optional items and, if present, should be validated.


3) Interface Testing:


The main interfaces are:

Web server and application server interface
Application server and Database server interface
Check that all the interactions between these servers are executed properly and that errors are handled properly. If the database or web server returns an error message for any query made by the application server, then the application server should catch and display these error messages appropriately to users. Check what happens if the user interrupts any transaction in between, and check what happens if the connection to the web server is reset in between.

4) Compatibility Testing:



Compatibility of your web site is a very important testing aspect. The compatibility tests to be executed are:
• Browser compatibility
• Operating system compatibility
• Mobile browsing
• Printing options


Browser compatibility:


In my web-testing career I have found this to be the most influential part of web site testing. Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, or performing security checks or validations, then put more stress on the browser compatibility testing of your web application. Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari, and Opera, with different versions.


OS compatibility:


Some functionality in your web application may not be compatible with all operating systems. New technologies used in web development, like graphic designs and interface calls such as different APIs, may not be available in all operating systems. Test your web application on different operating systems like Windows, Unix, Mac, Linux, and Solaris, with different OS flavors.


Mobile browsing:

This is a new technology age, and mobile browsing will only grow. Test your web pages on mobile browsers; compatibility issues may appear on mobile.
Printing options:
If you provide page-printing options, then make sure fonts, page alignment, and page graphics are printed properly. Pages should fit the paper size, or the size mentioned in the printing options.


5) Performance testing:

A web application should sustain heavy load. Web performance testing should include:

Web Load Testing
Web Stress Testing

Test application performance at different internet connection speeds.
In web load testing, test whether many users can access or request the same page at the same time. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc. A crude load-test sketch follows this paragraph.
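
The sketch below only illustrates the idea of simultaneous requests in Python; real load tests would normally use a dedicated tool such as JMeter or LoadRunner. The URL and user count are placeholders, and the third-party "requests" library is assumed.

from concurrent.futures import ThreadPoolExecutor
import time
import requests

URL = "http://example.com/search"  # placeholder page under load
USERS = 50                         # simulated simultaneous users

def one_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=30)
    return response.status_code, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(one_request, range(USERS)))

codes = [code for code, _ in results]
times = [elapsed for _, elapsed in results]
print("errors:", sum(1 for code in codes if code >= 400), "of", USERS)
print("slowest response: %.2f s" % max(times))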

Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress and checking how the system reacts to the stress and how it recovers from crashes.
Stress is generally applied to input fields, login and sign-up areas.
In web performance testing, web site functionality is checked on different operating systems and different hardware platforms for software and hardware memory-leak errors.


6) Security Testing:


Following are some test cases for web security testing:
• Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
• If you are logged in with a username and password and browsing internal pages, try changing URL options directly. I.e., if you are checking some publisher site statistics with publisher site ID = 123, try directly changing the URL's site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view others' stats.
• Try some invalid inputs in input fields like the login username, password, and input text boxes. Check the system's reaction to all invalid inputs.
• Web directories or files should not be accessible directly unless a download option is given.
• Test the CAPTCHA against automated script logins.
• Test whether SSL is used for security measures. If it is used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
• All transactions, error messages, and security breach attempts should get logged in log files somewhere on the web server.
(A minimal sketch of the first two checks follows this list.)
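
Here is that sketch of the first two checks (forced browsing and URL parameter tampering), assuming Python with the "requests" library; the URLs, parameter names, and IDs are hypothetical.

import requests

# 1) Forced browsing: an internal page fetched without logging in should
#    redirect to the login page or be denied outright.
anon = requests.get("http://example.com/admin/stats", allow_redirects=False)
assert anon.status_code in (301, 302, 401, 403), \
    "internal page was served without authentication"

# 2) Parameter tampering: log in as the owner of site ID 123, then request
#    an unrelated site ID and expect access to be denied.
session = requests.Session()
session.post("http://example.com/login",
             data={"username": "publisher", "password": "secret"})
other = session.get("http://example.com/stats",
                    params={"siteID": 999}, allow_redirects=False)
assert other.status_code in (302, 401, 403), \
    "stats for an unrelated site ID should be denied"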

Friday, July 30, 2010

QTP INTERVIEW QUESTIONS

1. How to create basic scripts from a manual test case in QTP?
2. How to add verification steps to tests?
3. How to use custom checkpoints in QuickTest Professional?
4. How to use database checkpoints?
5. How to manage objects in the Object Repository in QuickTest Professional?
6. How to parameterize tests?
7. How to customize checkpoints with parameters?
8. How to run an integrated test scenario using Multiple Actions?
9. How to use the QTP Step Generator?
10. How to use debug tools?
11. How to create virtual objects?
12. What is the difference between QTP Analog and Low-Level recording
modes?
13. Please describe Object and Smart Identification?
14. What is the difference between Per-Action vs. Shared Object Repositories?


Questions on Basics of the QTP functionality:


1. What are the main benefits of QuickTest Professional?
2. What is Add-In Manager in QTP?
3. What QTP Options do you know?
4. How to Identify Objects and their Properties?
5. What is the Object Repository?
6. How to Add Synchronization Steps?
7. How to Set the Global Sync Timeouts in QTP?
8. What is Regular Expressions and how to use them?
9. How to Create Data-Driven tests?
10. What are Checkpoints with Parameters?
11. What is the difference between Global and Local Data Sheets?
12. How to create Reusable and Multiple Actions?
13. Describe the benefits of the Step Generator.
14. What are the main options that are available in the Step Generator dialog
box?
15. What is Exception Handling?
16. What is Recovery Scenario Wizard in QTP?
17. Describe the purpose of a Database Checkpoint
18. What is the difference between Analog Recording and Low-Level Recording
in automation tools?
19. Describe Per-Action vs. Shared Object Repositories
20. Describe how and when Smart Identification is used

BUG LIFECYCLE

Thursday, July 29, 2010

SOFTWARE TESTING Interview questions

1. What is Software Testing?
Ans. Testing involves operation of a system or application under controlled
conditions and evaluating the results. The controlled conditions should include
both normal and abnormal conditions.
Testing is a process of executing a program with the intent of finding errors.

2. What is the Purpose of Testing?
Ans. The purpose of testing is
1. To uncover hidden errors
2. To achieve the maximum usability of the system
3. To demonstrate the expected performance of the system

3. What types of testing do testers perform?
Ans. Two types of testing 1.White Box Testing 2.Black Box Testing.

4. What is the Outcome of Testing?
Ans. The outcome of testing will be a stable application which meets the customer requirements.

5. What kind of testing have you done?
Ans. Usability, Functionality, System testing, regression testing, UAT
(it depends on the person).

6. What is the need for testing?
Ans. The primary need is to verify that the functionality satisfies the requirements, and also to answer two questions:
1. Is the system doing what it is supposed to do?
2. Is the system not doing what it is not supposed to do?

7. What are the entry criteria for Functionality and Performance testing?
Ans. The entry criteria for functionality testing are a functional specification/BRS (CRS)/user manual and an integrated application that is stable for testing.
The entry criterion for performance testing is successful completion of functional testing: all the requirements related to functionality are covered, tested, and approved or validated.

8. What is test metrics?
A. The objective of test metrics is to capture the planned and actual quantities of effort, time, and resources required to complete all the phases of testing of the software project.

9. Why do you go for White box testing, when Black box testing is available?
A. Black box testing certifies the commercial (business) and functional (technical) aspects of an application. Loops, structures, arrays, conditions, files, etc. are very micro-level constructs, but they are the foundation of any application; white box testing examines and tests these constructs directly. So even though black box testing is available, we should also go for white box testing, to check the correctness of the code and for integrating the modules.

10. What are the entry criteria for Automation testing?
A. The application should be stable, and a clear design and flow of the application is needed.

11. When to start and Stop Testing?
A. This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing
can never be done.

Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends

12.What is Quality?
A. Quality means the software is bug-free, delivered on time and within budget, meets customer requirements, and is maintainable. Quality standards differ across areas; an accounting department, for example, might define quality in terms of profit.

13. What is Baseline document?
A. A reviewed and approved document is called a baseline document, e.g., a test plan or the SRS.

14. What is verification?
A. Verification checks whether we are developing the right product according to the customer requirements or not. It is a static process.

15. What is validation?
A. Validation checks whether we have developed the product according to the customer requirements or not. It is a dynamic process.

16. What is quality assurance?
A. Quality Assurance measures the quality of the processes used to create a quality product.
1. It is a system of management activities.
2. It is a preventive process.
3. It applies to the entire life cycle.
4. It deals with the process.

17.What is quality control?
A. Quality control measures the quality of the product.
1. It is a specific part of the QA procedure.
2. It is a corrective process.
3. It applies to a particular product.
4. It deals with the product.

18. What are SDLC and STLC?
A. The software development life cycle (SDLC) is the life cycle of a project from the start to the end of the project:
1. Requirements specification
2. Analysis
3. Design
4. Coding
5. Testing
6. User acceptance test (UAT)
7. Maintenance
The software test life cycle (STLC) is the life cycle of the testing process:
1. Requirements specification
2. Planning
3. Test case design
4. Execution
5. Bug reporting
6. Maintenance

19.What are the Qualities of a Tester?
A. A tester should have qualities like:
1. Ability to break the application
2. Patience
3. Communication
4. Presentation
5. Teamwork
6. Negative thinking with good judgment skills

20.When to start and Stop Testing?
A. Repeated; see question 11.

21.What are the various levels of testing?
A. The various levels of testing are:
1. Ad-hoc testing
2. Sanity testing
3. Regression testing
4. Functional testing
5. Web testing

22.What are the types of testing you know and you experienced?
A. I am experienced in Black Box testing.

24.After completing testing, what would you deliver to the client?
A. It depends upon what you have specified in the test plan document. The contents delivered to the client are nothing but the test deliverables:
1. Test plan document
2. Master test case document
3. Test summary report
4. Defect reports

25.What is a Test Bed?
A. A test bed is the test environment (hardware and software setup) in which the application will run smoothly.

27.Why do you go for Test Bed?
A. We prepare a test bed because we first need to identify the environment (hardware, software) in which the application will run smoothly; only then can we run the application without any interruptions.

28.What is Severity and Priority and who will decide what?
A. Severity and priority are assigned to a particular bug to indicate the importance of the bug.
Severity: how severely the bug is affecting the application. It is generally assigned by the tester.
Priority: tells the developer which bug to fix first. It is generally decided by the project lead or manager.

29.Can Automation testing replace manual testing? If it so, how?
A. No, automation cannot completely replace manual testing. When the project is small and there is more time, testing can be done manually with a minimum number of testers.

30.What is a test case?
A. A test case is a document that describes an input, action, or event and an
expected response, to determine if a feature of an application is working
correctly.
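
When a test case is automated, the same structure (input, action, expected response) maps directly onto a unit test. Below is a minimal illustration using Python's unittest module, with a hypothetical login validator standing in for the feature under test.

import unittest

def is_valid_login(username, password):
    # Hypothetical feature under test.
    return username == "admin" and password == "secret123"

class LoginTestCase(unittest.TestCase):
    def test_valid_credentials_accepted(self):
        # Input/action: known-good credentials; expected response: accepted.
        self.assertTrue(is_valid_login("admin", "secret123"))

    def test_wrong_password_rejected(self):
        # Input/action: bad password; expected response: rejected.
        self.assertFalse(is_valid_login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()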

31.What is a test condition?
A. The condition required to test a feature (a precondition).

32.What is the test script?
A. A test script is the script generated by an automation tool while recording the application's features.

33.What is the test data?
A. Test data means the input data (valid and invalid) given to check whether a feature of the application is working correctly.

34.What is an Inconsistent bug?
A. An inconsistent bug is one that does not occur consistently; it appears intermittently and cannot be reproduced every time.

35.What is the difference between Re-testing and Regression testing?
A. Re-testing: executing the same test case, with a number of inputs, on the same build.
Regression testing: executing the same test cases on a modified build to check that existing functionality still works.

36.What are the different types of testing techniques?
A. 1. White box testing 2. Black box testing.

37.What are the different types of test case techniques?
A. 1. Equivalence Partitioning 2. Boundary Value Analysis 3. Error Guessing. (A small boundary-value sketch follows.)
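
A small sketch of boundary value analysis for a field that accepts values from 1 to 100: test just below, on, and just above each boundary, plus one representative value per equivalence partition. The validation function is a stand-in for the real field under test.

LOWER, UPPER = 1, 100

def accepts(value):
    # Stand-in for the field's validation logic under test.
    return LOWER <= value <= UPPER

boundary_values = [LOWER - 1, LOWER, LOWER + 1,
                   UPPER - 1, UPPER, UPPER + 1]   # 0, 1, 2, 99, 100, 101
partitions = {"below range": LOWER - 10,          # invalid partition
              "in range": (LOWER + UPPER) // 2,   # valid partition
              "above range": UPPER + 10}          # invalid partition

for value in boundary_values:
    print(value, "->", "accepted" if accepts(value) else "rejected")
for name, value in partitions.items():
    print(name, value, "->", "accepted" if accepts(value) else "rejected")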

39.Differentiate Test bed and Test Environment?
A. Both are the same.

40. What is the difference between defect, error, bug, failure, and fault?
A. Defect: while executing a test case, if you find any mismatch, you report it to the development team; that is called a defect.
Bug: once the developer accepts your defect, it is called a bug.
Error: it may be a program error or a syntax error.

42. What is the difference between White & Black Box Testing?
A. White box testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
Black box testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

43.What is the difference between Quality Assurance and Quality Control?
A. Refer Question no.16 & 17
Quality Assurance measures the quality of processes used to create a
quality product. Quality control measures the quality of the product.

44.What is the difference between Testing and debugging?
A. The purpose of testing is to show that the program has bugs.
The purpose of debugging is to find the error or misconception that led to the failure and to implement program changes that correct the error.

45.What is the difference between bug and defect?
A. Defect: while executing a test case, if you find any mismatch, you report it to the development team; that is called a defect.
Bug: once the development team accepts your defect, it is called a bug.

Wednesday, July 28, 2010

5 common problems in the software development process

• Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
• Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
• Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
• Featuritis - requests to pile on new features after development is underway; extremely common.
• Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

Define usecase and testcase?

UseCase :
A use case is an in-depth, detailed description of the customer
requirements. It is developed from the BRS/SRS and is prepared
by a business analyst or QA lead.

Testcase:
A test case is a document describing the input, action, and
expected response to determine whether the application is
working correctly according to the customer requirements.
It is derived from the SRS, use cases, and test scenarios. While
developing test cases we can also find gaps in the requirements.

In short: a use case is a set of scenarios of a user requirement (a pictorial representation of the requirements), and a test case is a document designed based on the use case to evaluate expected versus actual results.

Tuesday, July 27, 2010

Difference between User Acceptance Test cases and System Test Cases

System Testing:
Finding the defects when the system is tested as a whole; this is also called end-to-end testing. In this testing, the tester needs to test the application from log-in to log-out.

User Acceptance Testing:
User acceptance testing is done to get acceptance from the client. UAT is generally done in the client's environment. Before UAT, pre-UAT should be done.

Web Testing Interview Questions

1. Describe some of the possible web page features.

2. Question: What are possible configurations that could affect the testing strategy of any web site?
Answer: Hardware platform (PC, Mac), Browser software and version, Browser Plug-Ins, Browser settings options, Video resolution and Color Depth, text size

3. What is website usability testing?
Ans:- Usability testing is a technique used to evaluate a product by testing it on users, i.e., testing to check “user friendliness”.

4. Why is a spell checker not enough when testing spelling on a web page?
Ans:- Because there are pictures/logos/photos that may contain wrong spellings, and the spell checker is not able to identify those errors.

5. Question: Name a few website mistakes that could cause configuration and compatibility bugs.
Answer: Non-standard colors, frames, tables, etc.

6. Question: What latest web technologies do you know?
Answer: RSS

7.Question: Name some Alternative Browsers.
Answer: Opera, Mozilla

8. Describe some Caching Issues.
Ans:- Common caching issues include:
A) HTML meta tags used to prevent page caching: every possible combination may be tried, and none works.
B) PHP headers used to prevent page caching: every possible combination may be tried, and none works.
C) Proxy-server caching issues.
NOTE:- Stale caches slow down the overall performance of the system, so the old cache should be cleaned up from time to time.

9. What HTML standard is used?

10. What web-specific types of testing are there?

11. What is a static web page and what is a dynamic web page?

12. Question: Can the Netscape scripting host without plug-ins run Microsoft's JScript on the client side?
Answer: No

13. Question: What are Glyphs?
Answer: A glyph is the picture of a character.

14.What techniques will cause double-byte problems to show up?
Ans:- For example, ColdFusion MX is not compatible with Unix (Solaris).

15. What is Alt Key Input?

16. How will you decide when and what to test during performance testing?

17. Question: What types of web testing security problems do you know?
Answer:(Denial of Service (DoS) attack, buffer overflow)

18. Question: What types of HTTP Response Codes do you know?
Answer: (1xx – Informational, 2xx – Success, 3xx – Redirection, 4xx – Client Error, 5xx – Server Error)
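
A tiny sketch that buckets a response into the classes listed above, assuming Python with the "requests" library; the URL is a placeholder.

import requests

CLASSES = {1: "Informational", 2: "Success", 3: "Redirection",
           4: "Client Error", 5: "Server Error"}

response = requests.get("http://example.com/", allow_redirects=False)
print(response.status_code, "->",
      CLASSES.get(response.status_code // 100, "Unknown"))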

19. What errors can occur when the page loads?

20. What metrics can be used during performance testing?

21. What HTML file extension can be used?

22. What are the differences between testing a WEB application and testing a client-server application?

23. Write test cases for Search Engine?

24. Difference between HTTP and HTTPS? Explain how the data is secured in
HTTPS?

25. What does DNS contain?

26. What is the difference between authentication and authorization?

27. What type of security testing you performed?

28. Difference between GUI and Usability Testing


What is base lining?

Baselining :

Process by which the quality and cost effectiveness of a service is assessed, usually in advance of a change to the service. Baselining usually includes comparison of the service before and after the Change or analysis of trend information. The term Benchmarking is normally used if the comparison is made against other enterprises.

For example: if the company has different projects, there will be a separate test plan for each project. These test plans should be accepted by peers in the organization after modifications. The modified test plans become the baseline for the testers to use in the different projects. If any further modifications are made to a test plan, the newly modified version becomes the baseline, because the test plan is the basis for running the testing project.

Saturday, July 24, 2010

Difference between gprs and edge technology in mobile phone

Further enhancements to GSM networks are provided by Enhanced Data rates for GSM Evolution (EDGE) technology. EDGE provides up to three times the data capacity of GPRS. Using EDGE, operators can handle three times more subscribers than GPRS; triple their data rate per subscriber, or add extra capacity to their voice communications. EDGE uses the same TDMA (Time Division Multiple Access) frame structure, logic channel and 200kHz carrier bandwidth as today's GSM networks, which allows it to be overlaid directly onto an existing GSM network. For many existing GSM/GPRS networks, EDGE is a simple software-upgrade.

EDGE allows the delivery of advanced mobile services such as the downloading of video and music clips, full multimedia messaging, high-speed colour Internet access and e-mail on the move.

Thursday, July 22, 2010

Differences between Sanity and Smoke testing

SMOKE TESTING:

1. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going into too much depth.
2. A smoke test is scripted, either using a written set of tests or an automated test
3. A Smoke test is designed to touch every part of the application in a cursory way. It’s shallow and wide.
4. Smoke testing is conducted to ensure whether the most crucial functions of a program are working, but not bothering with finer details. (Such as build verification).
5. Smoke testing is normal health check up to a build of an application before taking it to testing in depth.

SANITY TESTING:

1. A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
2. A sanity test is usually unscripted.
3. A Sanity test is used to determine whether a small section of the application is still working after a minor change.
4. Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
5. Sanity testing is to verify whether the requirements are met or not, checking all features breadth-first.

Hope these points will help you to clearly understand the Smoke and sanity tests and will help to remove any confusion.

Tuesday, July 20, 2010

How can smoke testing be applied to mobile testing?

Smoke testing can be done on a mobile once the latest build has been released to the testing team. A smoke test, or build verification test, is just a basic functionality check for all the applications: it covers the test-scope parameters involved in mobile application testing, tests the very basic and important features of the mobile, and checks the showstopper issues at a higher priority level.

What is streaming?

Streaming is one of the processes for delivering content from a server. There are several techniques for delivering content, e.g., downloading, progressive downloading, and streaming.

Downloading: a normal download that can be saved on the local machine; once saved, the user can play it or do whatever operation he likes.
Progressive downloading: whenever the user selects content, it starts buffering, and buffering continues until the file finishes playing. The content is played once enough buffer is available; this could be as little as a second (e.g., if one second is buffered, the content plays for one second).
Streaming: when the user selects content to download, buffering takes place, and once the buffer is ready the content is played; until then the player does not get initiated. The user can then save, play again, or do whatever operation he requires.

How we test battery back up time using load testing especially for mobile testing ?

Below are given a few scenarios to test the mobile battery backup:

1. Charge the mobile up to its maximum capacity.
2. Call a number and use the phone continuously until the battery is fully drained. Measure the lifetime for one full cycle.
3. Charge the battery up to 20% of its capacity and use it until the battery is fully drained.
4. Charge the battery up to 40% of its capacity and use it until the battery is fully drained.
5. Similarly, check 60% and 80% battery charge capacities and measure the battery backup time.
6. Verify the battery life by continuously sending SMSs.
7. Verify whether the (GUI) power-indicator bars or levels display correctly with respect to the actual battery charge level.
8. Test the phone in a no-signal coverage area and measure the battery standby time. (In a no-signal area the phone continuously searches for a signal, so power is continuously dissipated as radio-frequency transmissions and the battery drains very quickly.)

What is the difference between TDMA and FDMA?

TDMA is Time Division Multiple Access. In TDMA technology each user owns the full channel bandwidth for a slice of time, in round-robin fashion. All GSM handsets use this technology.

FDMA is Frequency Division Multiple Access. In FDMA the available frequency band is divided among users, so many users can transmit at the same time, each using its own part of the bandwidth. CDMA handsets use this technology.

What is the advantage of symbian OS devices comparing with j2me and Brew OS devices?

1. Symbian is the latest OS, whereas J2ME and BREW are older ones.

2. Symbian OS uses the SIS file format, whereas J2ME uses the JAR file format; Symbian supports third-party software, and anybody can develop software for this OS.

3. Symbian offers fast browsing and is a fast operating system; J2ME is not as fast.

4. Symbian OS provides touchscreen phones, such as UIQ phones. There is no such facility in J2ME.

5. Symbian OS has superior hardware access on the phone compared to J2ME and BREW.

6. Symbian OS offers a rich UI.

Wednesday, July 14, 2010

ENTRY AND EXIT CRITERIA

Entry Criteria:

1. Business needs are confirmed.
2. Business requirements document is prepared by the analyst.
3. Estimation of IT infrastructure is done.
4. Acquisition of resources is complete.
5. Project kick-off meeting is done and sign-off is given.

Exit Criteria:

1. All the testing has been performed as defined.
2. 100% test coverage is met.
3. All the bugs raised are resolved, retested, and closed.
4. The product goes live with optimal quality and standards, meeting customer requirements and satisfaction.

BASIC INTERVIEW QUESTIONS

1. Tell me about yourself:

The most often asked question in interviews. You need to have a short statement prepared in your mind. Be careful that it does not sound rehearsed. Limit it to work-related items unless instructed otherwise. Talk about things you have done and jobs you have held that relate to the position you are interviewing for. Start with the item farthest back and work up to the present.

2. Why did you leave your last job?

Stay positive regardless of the circumstances. Never refer to a major problem with
management and never speak ill of supervisors, co-workers or the organization. If you do, you will be the one looking bad. Keep smiling and talk about leaving for a positive reason such as an opportunity, a chance to do something special or other forward-looking reasons.

3. What experience do you have in this field?

Speak about specifics that relate to the position you are applying for. If you do not have specific experience, get as close as you can.

4. Do you consider yourself successful?

You should always answer yes and briefly explain why. A good explanation is that you have set goals, and you have met some and are on track to achieve the others.

5. What do co-workers say about you?

Be prepared with a quote or two from co-workers. Either a specific statement or a paraphrase will work. "Jill Clark, a co-worker at Smith Company, always said I was the hardest worker she had ever known." It is as powerful as Jill having said it at the interview herself.

6. What do you know about this organization?

This question is one reason to do some research on the organization before the interview. Find out where they have been and where they are going. What are the current issues and who are the major players?

7. What have you done to improve your knowledge in the last year?

Try to include improvement activities that relate to the job. A wide variety of activities can be mentioned as positive self-improvement. Have some good ones handy to mention.

8. Are you applying for other jobs?

Be honest but do not spend a lot of time in this area. Keep the focus on this job and what you can do for this organization. Anything else is a distraction.

9. Why do you want to work for this organization?

This may take some thought and certainly, should be based on the research you have done on the organization. Sincerity is extremely important here and will easily be sensed. Relate it to your long-term career goals.

10. Do you know anyone who works for us?

Be aware of the policy on relatives working for the organization. This can affect your answer even though they asked about friends not relatives. Be careful to mention a friend only if they are well thought of.

11. What kind of salary do you need?

A loaded question. A nasty little game that you will probably lose if you answer first. So, do not answer it. Instead, say something like, That's a tough question. Can you tell me the range for this position? In most cases, the interviewer, taken off guard, will tell you. If not, say that it can depend on the details of the job. Then give a wide range.

12. Are you a team player?

You are, of course, a team player. Be sure to have examples ready. Specifics that show you often perform for the good of the team rather than for yourself are good evidence of your team attitude. Do not brag, just say it in a matter-of-fact tone. This is a key point.

13. How long would you expect to work for us if hired?

Specifics here are not good. Something like this should work: I'd like it to be a long time. Or As long as we both feel I'm doing a good job.

14. Have you ever had to fire anyone? How did you feel about that?

This is serious. Do not make light of it or in any way seem like you like to fire people. At the same time, you will do it when it is the right thing to do. When it comes to the organization versus the individual who has created a harmful situation, you will protect the organization. Remember firing is not the same as layoff or reduction in force.

15. What is your philosophy towards work?

The interviewer is not looking for a long or flowery dissertation here. Do you have strong feelings that the job gets done? Yes. That's the type of answer that works best here. Short and positive, showing a benefit to the organization.

16. If you had enough money to retire right now, would you?

Answer yes if you would. But since you need to work, this is the type of work you prefer. Do not say yes if you do not mean it.

17. Have you ever been asked to leave a position?

If you have not, say no. If you have, be honest, brief and avoid saying negative things about the people or organization involved.

18. Explain how you would be an asset to this organization

You should be anxious for this question. It gives you a chance to highlight your best points as they relate to the position being discussed. Give a little advance thought to this relationship.

19. Why should we hire you?

Point out how your assets meet what the organization needs. Do not mention any other
candidates to make a comparison.

20. Tell me about a suggestion you have made?

Have a good one ready. Be sure and use a suggestion that was accepted and was then
considered successful. One related to the type of work applied for is a real plus.

21. What irritates you about co-workers?

This is a trap question. Think real hard but fail to come up with anything that irritates you. A short statement that you seem to get along with folks is great.

22. What is your greatest strength?

Numerous answers are good, just stay positive. A few good examples:
Your ability to prioritize, Your problem-solving skills, Your ability to work under pressure, Your ability to focus on projects, Your professional expertise, Your leadership skills, Your positive attitude .

23. Tell me about your dream job.

Stay away from a specific job. You cannot win. If you say the job you are contending for is it, you strain credibility. If you say another job is it, you plant the suspicion that you will be dissatisfied with this position if hired. The best is to stay generic and say something like: A job where I love the work, like the people, can contribute and can't wait to get to work.

24. Why do you think you would do well at this job?

Give several reasons and include skills, experience and interest.

25. What are you looking for in a job?

Stay away from a specific job. You cannot win. If you say the job you are contending for is it, you strain credibility. If you say another job is it, you plant the suspicion that you will be dissatisfied with this position if hired. The best is to stay generic and say something like: A job where I love the work, like the people, can contribute and can't wait to get to work.

26. What kind of person would you refuse to work with?

Do not be trivial. It would take disloyalty to the organization, violence or lawbreaking to get you to object. Minor objections will label you as a whiner.

27. What is more important to you: the money or the work?

Money is always important, but the work is the most important. There is no better answer.

28. What would your previous supervisor say your strongest point is?

There are numerous good possibilities:
Loyalty, Energy, Positive attitude, Leadership, Team player, Expertise, Initiative, Patience, Hard work, Creativity, Problem solver

29. Tell me about a problem you had with a supervisor

Biggest trap of all. This is a test to see if you will speak ill of your boss. If you fall for it and tell about a problem with a former boss, you may well blow the interview right there. Stay positive and develop a poor memory about any trouble with a supervisor.

30. What has disappointed you about a job?

Don't get trivial or negative. Safe areas are few but can include:
Not enough of a challenge; you were laid off in a reduction; the company did not win a contract that would have given you more responsibility.

31. Tell me about your ability to work under pressure.

You may say that you thrive under certain types of pressure. Give an example that relates to the type of position applied for.

32. Do your skills match this job or another job more closely?

Probably this one. Do not give fuel to the suspicion that you may want another job more than this one.

33. What motivates you to do your best on the job?

This is a personal trait that only you can say, but good examples are:
Challenge, Achievement, Recognition

34. Are you willing to work overtime? Nights? Weekends?

This is up to you. Be totally honest.

35. How would you know you were successful on this job?

Several ways are good measures:
You set high standards for yourself and meet them. Your outcomes are a success. Your boss tells you that you are successful.

36. Would you be willing to relocate if required?

You should be clear on this with your family prior to the interview if you think there is a chance it may come up. Do not say yes just to get the job if the real answer is no. This can create a lot of problems later on in your career. Be honest at this point and save yourself future grief.

37. Are you willing to put the interests of the organization ahead of your own?

This is a straight loyalty and dedication question. Do not worry about the deep ethical and philosophical implications. Just say yes.

38. Describe your management style.

Try to avoid labels. Some of the more common labels, like progressive, salesman or
consensus, can have several meanings or descriptions depending on which management
expert you listen to. The situational style is safe, because it says you will manage according to the situation, instead of one size fits all.

39. What have you learned from mistakes on the job?

Here you have to come up with something or you strain credibility. Make it a small, well-intentioned mistake with a positive lesson learned. An example would be working too far ahead of colleagues on a project and thus throwing coordination off.

40. Do you have any blind spots?

Trick question. If you know about blind spots, they are no longer blind spots. Do not reveal any personal areas of concern here. Let them do their own discovery on your bad points. Do not hand it to them.

41. If you were hiring a person for this job, what would you look for?

Be careful to mention traits that are needed and that you have.

42. Do you think you are overqualified for this position?

Regardless of your qualifications, state that you are very well qualified for the position.

43. How do you propose to compensate for your lack of experience?

First, if you have experience that the interviewer does not know about, bring that up: Then, point out (if true) that you are a hard working quick learner.

44. What qualities do you look for in a boss?

Be generic and positive. Safe qualities are knowledgeable, a sense of humor, fair, loyal to subordinates and holder of high standards. All bosses think they have these traits.

45. Tell me about a time when you helped resolve a dispute between others.

Pick a specific incident. Concentrate on your problem solving technique and not the dispute you settled.

46. What position do you prefer on a team working on a project?

Be honest. If you are comfortable in different roles, point that out.

47. Describe your work ethic.

Emphasize benefits to the organization. Things like, determination to get the job done and work hard but enjoy your work are good.

48. What has been your biggest professional disappointment?

Be sure that you refer to something that was beyond your control. Show acceptance and no negative feelings.

49. Tell me about the most fun you have had on the job.

Talk about having fun by accomplishing something for the organization.

50. Do you have any questions for me?

Always have some questions prepared. Questions prepared where you will be an asset to
the organization are good. How soon will I be able to be productive? and What type of
projects will I be able to assist on? are examples.