Tuesday, July 28, 2009

Testing School - Test cases for first name, last name, email ID, and landline number in a registration form

Testing of First name and last name

Examples of field validation criteria are:

  • First and last name fields are required.
  • First and last name fields are limited to 50 characters each.
  • First and last name fields will not accept numbers.
  • First and last name fields will not accept the following characters: `~!@#$%^&*()_:";'{}[]+<>?,./
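The four criteria above can be folded into a single field validator. The sketch below is illustrative only; the function name and error messages are assumptions, not taken from any real form:

```python
# Hypothetical validator for the four criteria above; message strings
# are illustrative, not from any real registration form.
FORBIDDEN = set("`~!@#$%^&*()_:\";'{}[]+<>?,./")

def validate_name(value):
    """Return a list of error strings; an empty list means the name is valid."""
    errors = []
    if not value.strip():
        errors.append("field is required")
    if len(value) > 50:
        errors.append("limited to 50 characters")
    if any(ch.isdigit() for ch in value):
        errors.append("numbers are not accepted")
    if any(ch in FORBIDDEN for ch in value):
        errors.append("special characters are not accepted")
    return errors
```

Each test case below then reduces to calling the validator with one kind of input and checking which errors come back.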

Consider the following example:

Registration Form

Enter your first and last names and click the Submit button.

First Name:

Last Name:

Submit

Here’s a set of test cases that include validation criteria:

1. Launch an IE browser and go to the Registration Form.

2. Verify the registration page opens.

3. On the registration page, click the mouse in the First Name field.

4. Leave the First Name and Last Name fields blank and click on the Submit button.

5. Verify an error message appears saying you cannot leave the First Name and Last Name fields blank.

6. Enter 50 characters in both the First and Last Name fields.

7. Verify the names are accepted.

8. Enter more than 50 characters in both the First and Last Name fields.

9. Verify an error message appears saying you cannot enter more than 50 characters in the First Name and Last Name fields.

10. Enter numbers in the First and Last Name fields.

11. Verify an error message appears saying you cannot enter numbers in the First and Last Name fields.

12. Enter the characters "`~!@#$%^&*()_:";'{}[]+<>?,./" in the First and Last Name fields.

13. Verify an error message appears saying you cannot enter "`~!@#$%^&*()_:";'{}[]+<>?,./" characters in the First and Last Name fields.

14. Type “Aman” in the First Name field.

15. Click the mouse in the Last Name field.

16. Type in “Kumar” in Last Name field.

17. Click on the Submit button.

18. Click on registration List in the left nav bar.

19. Verify the Name “Aman Kumar” is now present in the registration list.

For example, the blank name validation should be tested like this:

1. Leave the First Name and Last Name fields blank and click on the Submit button.

2. Verify an error message appears saying you cannot leave the First Name and Last Name fields blank.

3. Enter a valid First Name and leave the Last Name field blank and click on the Submit button.

4. Verify an error message appears saying you cannot leave the Last Name field blank.

5. Enter a valid Last Name and leave the First Name field blank and click on the Submit button.

6. Verify an error message appears saying you cannot leave the First Name field blank.
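The blank-field combinations above lend themselves to a small table of cases driven through one loop. In this sketch, `submit_form` is a stand-in for the real form handler, which is an assumption:

```python
# submit_form is a stand-in for the real form handler (an assumption);
# it returns the list of validation errors for one submission.
def submit_form(first, last):
    errors = []
    if not first:
        errors.append("First Name is required")
    if not last:
        errors.append("Last Name is required")
    return errors

# Each tuple is one of the blank/filled combinations from the steps above.
cases = [
    ("", "", ["First Name is required", "Last Name is required"]),
    ("Aman", "", ["Last Name is required"]),
    ("", "Kumar", ["First Name is required"]),
    ("Aman", "Kumar", []),
]
for first, last, expected in cases:
    assert submit_form(first, last) == expected
```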

As you can see, this increases the number of test cases needed substantially, and even this set of test cases leaves certain issues untested.

For instance, the form should also be tested for resistance to what are known as HTML insertion attacks. If input is not handled properly, HTML code entered into a field will be interpreted by the browser when the Submit button is clicked. Most code written today handles insertion attacks, but it's always a good idea to verify that. Here's a test case for insertion attacks:

1. Enter HTML markup, for example "<b>This is a test!</b>", in the First Name field and hit Submit.

2. Verify the markup is not interpreted: the text "This is a test!" should appear as plain text, and no bold formatting should appear on the registration page.
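One common defense is to escape user input before echoing it back. A minimal sketch using Python's standard library (the function name is illustrative):

```python
import html

# Minimal sketch: escaping user input before echoing it prevents HTML
# insertion. html.escape turns <, >, & and quotes into entities, so any
# markup the user typed is displayed as text instead of being interpreted.
def render_greeting(first_name):
    return "Hello, " + html.escape(first_name)

print(render_greeting("<b>This is a test!</b>"))
```

The insertion test case above then amounts to checking that the tags come back as literal text rather than formatting.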

Testing of email Id:

Here is a list of valid and invalid email formats that can be used for testing:
 
 
Valid email address                  Reason

email@domain.com                     Valid email
firstname.lastname@domain.com        Dot in the address field is valid
email@subdomain.domain.com           Dot with subdomain is valid
firstname+lastname@domain.com        Plus sign is a valid character
email@123.123.123.123                Domain is a valid IP address
email@[123.123.123.123]              Square brackets around the IP address are valid
"email"@domain.com                   Quotes around the address are valid
1234567890@domain.com                Digits in the address are valid
email@domain-one.com                 Dash in the domain name is valid
_______@domain.com                   Underscores in the address field are valid
email@domain.name                    .name is a valid top-level domain
email@domain.co.jp                   Dot in the top-level domain (co.jp) is valid
firstname-lastname@domain.com        Dash in the address field is valid
 
 
 
 
 
Invalid email address                Reason

plainaddress                         Missing @ sign and domain
#@%^%#$@#$@#.com                     Garbage
@domain.com                          Missing username
Joe Smith <email@domain.com>         Encoded HTML within the email is invalid
email.domain.com                     Missing @
email@domain@domain.com              Two @ signs
.email@domain.com                    Leading dot in the address is not allowed
email.@domain.com                    Trailing dot in the address is not allowed
email..email@domain.com              Multiple dots
あいうえお@domain.com                 Unicode characters in the address
email@domain.com (Joe Smith)         Text following the email is not allowed
email@domain                         Missing top-level domain (.com/.net/.org/etc.)
email@-domain.com                    Leading dash in the domain is invalid
email@domain.web                     .web is not a valid top-level domain
email@111.222.333.44444              Invalid IP format
email@domain..com                    Multiple dots in the domain portion are invalid
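These lists can be folded straight into a data-driven check. The pattern below is a deliberate simplification, not a full RFC 5322 parser: it accepts the common valid forms but rejects the exotic ones such as quoted local parts or bracketed IPs, and it does not range-check numeric IP octets.

```python
import re

# Deliberately simplified pattern (an assumption, not a full RFC 5322
# parser): it covers the common valid forms in the table but not exotic
# ones such as "email"@domain.com or email@[123.123.123.123].
SIMPLE_EMAIL = re.compile(
    r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9][A-Za-z0-9-]*(\.[A-Za-z0-9][A-Za-z0-9-]*)+$"
)

def looks_like_email(addr):
    # leading, trailing, and doubled dots in the local part are invalid
    local = addr.split("@")[0]
    if local.startswith(".") or local.endswith(".") or ".." in local:
        return False
    return SIMPLE_EMAIL.match(addr) is not None
```

Feeding both lists through such a function is a quick way to document exactly which forms the application's own validator is expected to accept and reject.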

Testing of Landline phone in registration form:

In India, there are a number of valid formats for phone numbers that might be entered into a form:

nnnnnn                                234567
nn-nnnn-nnnnnn                        95-6543-234567              
nnnnn-nnnnnn                          06543-234567
nn-nnnn-nnnnnn                        91-6543-234567

There are many other variations of this, as well, but those examples should be enough to give you an idea of what we're dealing with here. Because there are so many valid formats, you're going to have to set some rules for reformatting these phone numbers:

1. All numbers will at least be in the format nn-nnnn-nnnnnn. Numbers entered without area codes will have a default area code prepended to them.

2. Any extensions will be listed as nnnnnn following the phone number.

3. If the number begins with the country code 91, the 91 will be removed, since users already know to dial the country code when making a long-distance call.
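The three reformatting rules can be sketched as a normalization function. The default area code and the digit grouping below are assumptions for illustration, not requirements from the text:

```python
# Sketch of the normalization rules above; the default area code and
# the area/number grouping are assumptions for illustration.
DEFAULT_AREA_CODE = "06543"

def normalize_landline(raw):
    digits = "".join(ch for ch in raw if ch.isdigit())
    # Rule 3: strip a leading 91 country code from full-length numbers
    if digits.startswith("91") and len(digits) > 10:
        digits = digits[2:]
    # Rule 1: bare 6-digit numbers get the default area code prepended
    if len(digits) == 6:
        digits = DEFAULT_AREA_CODE + digits
    # store as area-number (grouping is an assumption)
    return digits[:-6] + "-" + digits[-6:]
```

With this sketch, "234567" becomes "06543-234567" and "91-6543-234567" becomes "6543-234567", matching the rules above.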

Thursday, July 23, 2009

Testing School-How to Build the Plan

How to Build the Plan

1. Analyze the product.

  • What to Analyze
    • Users (who they are and what they do)
    • Operations (what it’s used for)
    • Product Structure (code, files, etc.)
    • Product Functions (what it does)
    • Product Data (input, output, states, etc.)
    • Platforms (external hardware and software)
  • Ways to Analyze
    • Perform product/prototype walkthrough.
    • Review product and project documentation.
    • Interview designers and users.
    • Compare w/similar products.
  • Possible Work Products
    • Product coverage outline
    • Annotated specifications
    • Product Issue list
  • Status Check
    • Do designers approve of the product coverage outline?
    • Do designers think you understand the product?
    • Can you visualize the product and predict behavior?
    • Are you able to produce test data (input and results)?
    • Can you configure and operate the product?
    • Do you understand how the product will be used?
    • Are you aware of gaps or inconsistencies in the design?
    • Do you have remaining questions regarding the product?

2. Analyze product risk.

  • What to Analyze
    • Threats
    • Product vulnerabilities
    • Failure modes
    • Victim impact
  • Ways to Analyze
    • Review requirements and specifications.
    • Review problem occurrences.
    • Interview designers and users.
    • Review product against risk heuristics and quality criteria categories.
    • Identify general fault/failure patterns.
  • Possible Work Products
    • Component risk matrices
    • Failure mode outline
  • Status Check
    • Do the designers and users concur with the risk analysis?
    • Will you be able to detect all significant kinds of problems, should they occur during testing?
    • Do you know where to focus testing effort for maximum effectiveness?
    • Can the designers do anything to make important problems easier to detect, or less likely to occur?
    • How will you discover if your risk analysis is accurate?

3. Design test strategies.

  • General Strategies
    • Domain testing (including boundaries)
    • User testing
    • Stress testing
    • Regression testing
    • Sequence testing
    • State testing
    • Specification-based testing
    • Structural testing (e.g. unit testing)

  • Ways to Plan
    • Match strategies to risks and product areas.
    • Visualize specific and practical strategies.
    • Look for automation opportunities.
    • Prototype test probes and harnesses.
    • Don’t over plan. Let testers use their brains.
  • Possible Work Products
    • Itemized statement of each test strategy chosen and how it will be applied.
    • Risk/task matrix.
    • List of issues or challenges inherent in the chosen strategies.
    • Advisory of poorly covered parts of the product.
    • Test cases (if required)
  • Status Check
    • Do designers concur with the test strategy?
    • Has the strategy made use of every available resource and helper?
    • Is the test strategy too generic? Could it just as easily apply to any product?
    • Will the strategy reveal all important problems?

4. Plan logistics.

  • Logistical Areas
    • Test effort estimation and scheduling
    • Testability engineering
    • Test team staffing (right skills)
    • Tester training and supervision
    • Tester task assignments
    • Product information gathering and management
    • Project meetings, communication, and coordination
    • Relations with all other project functions, including development
    • Test platform acquisition and configuration
  • Possible Work Products
    • Issues list
    • Project risk analysis
    • Responsibility matrix
    • Test schedule
    • Agreements and protocols
    • Test tools and automation
    • Stubbing and simulation needs
    • Test suite management and maintenance
    • Build and transmittal protocol
    • Test cycle administration
    • Problem reporting system and protocol
    • Test status reporting protocol
    • Code freeze and incremental testing
    • Pressure management in end game
    • Sign-off protocol
    • Evaluation of test effectiveness
  • Status Check
    • Do the logistics of the project support the test strategy?
    • Are there any problems that block testing?
    • Are the logistics and strategy adaptable in the face of foreseeable problems?
    • Can you start testing now and sort out the rest of the issues later?

5. Share the plan.

  • Ways to Share
    • Engage designers and stakeholders in the test planning process.
    • Actively solicit opinions about the test plan.
    • Do everything possible to help the developers succeed.
    • Help the developers understand how what they do impacts testing.
    • Talk to technical writers and technical support people about sharing quality information.
    • Get designers and developers to review and approve all reference materials.
    • Record and reinforce agreements.
    • Get people to review the plan in pieces.
    • Improve review ability by minimizing unnecessary text in test plan documents.
  • Goals
    • Common understanding of the test process.
    • Common commitment to the test process.
    • Reasonable participation in the test process.
    • Management has reasonable expectations about the test process.
  • Status Check
    • Is the project team paying attention to the test plan?
    • Does the project team, especially first line management, understand the role of the test team?
    • Does the project team feel that the test team has the best interests of the project at heart?
    • Is there an adversarial or constructive relationship between the test team and the rest of the project?
    • Does any member of the project team feel that the testers are “off on a tangent” rather than focused on important testing tasks?

Wednesday, July 22, 2009

Testing School-Test automation FRAMEWORK

Test automation FRAMEWORK

This section introduces the test automation framework, the various types of frameworks, and an analysis of the framework best suited to the application under test (referred to as the AUT). It also includes a detailed description of the format of the input given to our test automation framework (referred to as the test tables).

1.1 RECORD/PLAYBACK MYTH

Test automation tool vendors market the ability to capture user actions and later play them back as the main feature of their tools. This is the basic paradigm for GUI-based automated regression testing, the so-called Record/Playback method (also known as the Capture/Replay approach):

1. Design a test case in the test management tool.

2. Using the capture feature of the automation testing tool, record the user actions. The result is a macro-like script in which each user action is represented.

3. Enhance the recorded script with verification points, where some property or data is verified against an existing baseline. Add delays and wait states at points where different actions must be synchronized.

4. Play back the scripts and observe the results in the log of the test management tool.

The basic drawback of this method is that the resulting scripts contain hard-coded values, which must change if anything at all changes in the AUT. The costs associated with maintaining such scripts are astronomical, and unacceptable. These scripts are not reliable even if the application has not changed, and often fail on replay (pop-up windows, messages, and other events can occur that did not happen when the test was recorded).

If the tester makes an error entering data, the test must be re-recorded. If the application changes, the test must be re-recorded. All that is being tested are things that already work. Areas that have errors are encountered during the recording process (which is manual testing, after all). These bugs are reported, but a script cannot be recorded until the software is corrected. So, logically, nothing new is tested by this approach.


So, avoid using "Record/Playback" as a method of automating testing. This method is fraught with problems, and is the most costly (time-consuming) of all methods over the long term. The record/playback feature of the test tool is useful for determining how the tool is trying to process or interact with the application under test, and can give you some ideas about how to develop your test scripts, but beyond that its usefulness ends quickly.

1.2 TYPES OF TEST AUTOMATION FRAMEWORKS

Having eliminated the Record/Playback method, let us explore the existing automation methodologies. Several test automation frameworks are available; the selection among them is made based on factors such as the reusability of both the scripts and the test assets. The available test automation frameworks are as follows:

Ø Test Script Modularity

Ø Test Library Architecture

Ø Data-Driven Testing

Ø Keyword-Driven or Table-Driven Testing

Ø Hybrid Test Automation

1.2.1 Test Script Modularity

The test script modularity framework is the most basic of the frameworks. It is a well-known programming strategy to build an abstraction layer in front of a component to hide it from the rest of the application; this insulates the application from modifications in the component and provides modularity in the design. When working with test scripts (in any language or proprietary environment), the same thing can be achieved by creating small, independent scripts that represent modules, sections, and functions of the application-under-test. These small scripts are then combined in a hierarchical fashion to construct larger tests. Using this framework yields a high degree of modularization and adds to the overall maintainability of the test scripts.

1.2.2 Test Library Architecture

The test library architecture framework is very similar to the test script modularity framework and offers the same advantages, but it divides the application-under-test into procedures and functions (or objects and methods depending on the implementation language) instead of scripts. This framework requires the creation of library files (SQABasic libraries, APIs, DLLs, and such) that represent modules, sections, and functions of the application-under-test. These library files are then called directly from the test case script. Much like script modularization this framework also yields a high degree of modularization and adds to the overall maintainability of the tests.

1.2.3 Data-Driven Testing

A data-driven framework is one where test input and output values are read from data files (ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and such) and loaded into variables in captured or manually coded scripts. In this framework, variables are used both for input values and for output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. This is similar to table-driven testing (discussed shortly) in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. In data-driven testing, however, only test data is contained in the data files.
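A minimal sketch of the idea, with the data file inlined as a CSV string and `submit` standing in for the code that drives the real application (both are assumptions for illustration):

```python
import csv
import io

# Hypothetical data file: each row is one test case (inputs + expected result).
DATA = """first,last,expected_error
Aman,Kumar,
,Kumar,First Name is required
Aman,,Last Name is required
"""

def submit(first, last):
    # stand-in for driving the real application under test
    if not first:
        return "First Name is required"
    if not last:
        return "Last Name is required"
    return ""

# The script is just a driver: it reads the file, runs each row, logs results.
results = []
for row in csv.DictReader(io.StringIO(DATA)):
    actual = submit(row["first"], row["last"])
    results.append(actual == row["expected_error"])
```

Adding a new test case means adding a row to the data file; the driver script does not change.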

1.2.3.1 Merits of data-driven testing

The merits of the Data-Driven test automation framework are as follows,

Ø Scripts may be developed while application development is still in progress

Ø Utilizing a modular design, and using files or records to both input and verify data, reduces redundancy and duplication of effort in creating automated test scripts

Ø If functionality changes, only the specific "Business Function" script needs to be updated

Ø Data input/output and expected results are stored as easily maintainable text records.

Ø Functions return "TRUE" or "FALSE" values to the calling script, rather than aborting, allowing for more effective error handling, and increasing the robustness of the test scripts. This, along with a well-designed "recovery" routine, enables "unattended" execution of test scripts.

1.2.3.2 Demerits of data-driven testing

The demerits of the Data-Driven test automation framework are as follows,

Ø Requires proficiency in the Scripting language used by the tool (technical personnel)

Ø Multiple data-files are required for each Test Case. There may be any number of data-inputs and verifications required, depending on how many different screens are accessed. This usually requires data-files to be kept in separate directories by Test Case

Ø Tester must not only maintain the Detail Test Plan with specific data, but must also re-enter this data in the various required data-files

Ø If a simple "text editor" such as Notepad is used to create and maintain the data-files, careful attention must be paid to the format required by the scripts/functions that process the files, or script-processing errors will occur due to data-file format and/or content being incorrect

1.2.4 Keyword-Driven Testing

This requires the development of data tables and keywords that are independent of the test automation tool used to execute them and of the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table, as well as in step-by-step instructions for each test. In this method, the entire process is data-driven, including the functionality.

1.2.4.1 Example

In order to open a window, the following table is devised. It can be reused for any other application; only the window name needs to change.

Test Table for Opening a Window

Window         Control       Action    Arguments
Window Name    Menu          Click     File, Open
Window Name    Menu          Click     Close
Window Name    Pushbutton    Click     Folder Name
Window Name                  Verify    Results

Once the test tables are created, a driver script (or a set of scripts) is written that reads in each step, executes the step based on the keyword contained in the Action field, performs error checking, and logs any relevant information.
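A driver of this kind can be sketched in a few lines. The rows mirror the table above; the handlers are stand-ins for real automation-tool calls:

```python
# Minimal keyword driver sketch: each row is (window, control, action,
# arguments), mirroring the table above; handlers stand in for tool calls.
log = []

ACTIONS = {
    "Click":  lambda window, control, args: log.append(f"click {control} {args} in {window}"),
    "Verify": lambda window, control, args: log.append(f"verify {args} in {window}"),
}

TEST_TABLE = [
    ("Window Name", "Menu",       "Click",  "File, Open"),
    ("Window Name", "Menu",       "Click",  "Close"),
    ("Window Name", "Pushbutton", "Click",  "Folder Name"),
    ("Window Name", "",           "Verify", "Results"),
]

for window, control, action, args in TEST_TABLE:
    handler = ACTIONS.get(action)
    if handler is None:
        log.append(f"ERROR: unknown keyword {action}")  # error checking
    else:
        handler(window, control, args)
```

Adding a new keyword means registering one more handler; the table format and driver loop stay the same.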

1.2.4.2 Merits of keyword driven testing

The merits of the Keyword Driven Testing are as follows,

Ø The Detail Test Plan can be written in Spreadsheet format containing all input and verification data.

Ø If "utility" scripts can be created by someone proficient in the automated tool’s Scripting language prior to the Detail Test Plan being written, then the tester can use the Automated Test Tool immediately via the "spreadsheet-input" method, without needing to learn the Scripting language.

Ø The tester need only learn the "Key Words" required, and the specific format to use within the Test Plan. This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.

1.2.4.3 Demerits of keyword driven testing

The demerits of the Keyword Driven Testing are as follows,

Ø Development of "customized" (Application-Specific) Functions and Utilities requires proficiency in the tool’s Scripting language. (Note that this is also true for any method)

Ø If the application requires more than a few "customized" utilities, the tester will need to learn a number of "Key Words" and special formats. This can be time-consuming and may have an initial impact on Test Plan development. Once the testers get used to this, however, the time required to produce a test case drops considerably.

1.2.5 Hybrid Test Automation Framework

The most commonly implemented framework is a combination of all of the above techniques, pulling from their strengths and trying to mitigate their weaknesses. This hybrid test automation framework is what most frameworks evolve into over time and multiple projects. The most successful automation frameworks generally accommodate both Keyword-Driven testing as well as Data-Driven scripts.

This allows data driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword driven architecture. The framework utilities can make the data driven scripts more compact and less prone to failure than they otherwise would have been.

The utilities can also facilitate the gradual and manageable conversion of existing scripts to keyword driven equivalents when and where that appears desirable. On the other hand, the framework can use scripts to perform some tasks that might be too difficult to re-implement in a pure keyword driven approach, or where the keyword driven capabilities are not yet in place. The following sections describe its architecture, merits and demerits.


1.2.5.1 Hybrid Test Automation Framework Architecture

The framework is defined by the Core Data Driven Engine, the Component Functions, and the Support Libraries. While the Support Libraries provide generic routines useful even outside the context of a keyword driven framework, the core engine and component functions are highly dependent on the existence of all three elements.

The test execution starts with the LAUNCH TEST(1) script. This script invokes the Core Data Driven Engine by providing one or more High-Level Test Tables to CycleDriver(2). CycleDriver processes these test tables, invoking SuiteDriver(3) for each Intermediate-Level Test Table it encounters. SuiteDriver processes these intermediate-level tables, invoking StepDriver(4) for each Low-Level Test Table it encounters. As StepDriver processes these low-level tables, it attempts to keep the application in sync with the test. When StepDriver encounters a low-level command for a specific component, it determines what type of component is involved and invokes the corresponding Component Function(5) module to handle the task.

Note: The Application Map is referred to as the App Map. In WRAFS, the App Map is the Application Map file created from the GUI Map of WinRunner.

All of these elements of the framework rely on the information provided in the App Map to bridge the automation framework with the application being tested. The App Map is the only means by which the framework can identify the objects in the application under test. Each of these elements is described in more detail in the following sections. The following figure shows a diagrammatic representation of the Hybrid Test Automation Framework.

[Figure: Automation Framework Design - Hybrid Test Automation Framework]

APPLICATION MAP

The Application Map is one of the most critical components. It maps objects from names humans can recognize to a data format useful to the automation tool. For a given project, a naming convention or specific name must be defined for each component in each window, as well as a name for the window itself. The Application Map then associates each name with the identification method the automation tool needs to locate and properly manipulate the correct object in the window.

The Application Map not only provides useful names for the objects, it also gives the scripts and keyword-driven tests a single point of maintenance for the object identification strings. Thus, if a new version of the application changes the title of a window, the label of a component, or the index of an image element within it, the test tables are unaffected. The change requires only a quick modification in one place: inside the Application Map.
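A toy Application Map can be sketched as a nested mapping from human-readable names to identification strings. Every name and locator value below is made up for illustration:

```python
# Sketch: a single point of maintenance mapping human-readable names to
# the identification strings an automation tool needs (values are made up).
APP_MAP = {
    "LoginWindow": {
        "_self":    {"type": "Window",     "id": "title=Login"},
        "UserName": {"type": "TextBox",    "id": "name=txtUser"},
        "OkButton": {"type": "Pushbutton", "id": "id=btnOk"},
    },
}

def locate(window, component):
    """Resolve a logical name to the tool's identification string."""
    return APP_MAP[window][component]["id"]
```

If the developers rename `btnOk`, only the one entry in `APP_MAP` changes; every test table that says "OkButton" keeps working.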

COMPONENT FUNCTIONS

Component Functions are those functions that actively manipulate or interrogate component objects. In the test automation framework there are different Component Function modules for each type of component encountered (Window, CheckBox, TextBox, Image, Link, etc.).

Component Function modules are application-independent extensions to the functions already provided by the automation tool. Unlike those provided by the tool, however, they include extra code to help with error detection, error correction, and synchronization. These modules can readily use the application-specific data stored in the Application Map and test tables as necessary. In this way, Component Functions are developed once and used again and again by every application tested.

Another benefit of Component Functions is that they provide a layer of insulation between the application and the automation tool. Without this extra layer, changes or "enhancements" in the automation tool itself can break existing scripts and table-driven tests. Each Component Function module defines the keywords or "action words" that are valid for the particular component type it handles.

A Component Function takes as its arguments the name of the window in which the component resides, the name of the component on which the action is to be performed, the values needed to perform the action, and the type of action to be performed. The Component Function keywords and their arguments define the low-level vocabulary and individual record formats used to develop the test tables.
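An illustrative Component Function for a TextBox type, taking exactly those arguments. The keyword names, logging, and TRUE/FALSE return convention are assumptions, not any real tool's API:

```python
# Illustrative Component Function for the TextBox type; keyword names
# and log messages are assumptions, not any real tool's API. Returning
# True/False instead of aborting lets the driver handle errors.
log = []

def textbox_handler(window, component, action, values):
    if action == "InputText":
        log.append(f"set {window}.{component} = {values}")
        return True
    if action == "VerifyValue":
        log.append(f"check {window}.{component} == {values}")
        return True
    log.append(f"unsupported keyword {action} for TextBox")
    return False
```

The driver would look up the component's type in the Application Map and route the instruction record to the matching handler.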

TEST TABLES

Apart from the Application Map, the input to the framework is the test tables, which hold the arguments needed by the Component Functions along with other information. The test tables are organized in three levels, as follows:

Ø Low-Level Test Tables (or) Step Tables

Ø Intermediate-Level Test Tables (or) Suite Tables

Ø High-Level Test Tables (or) Cycle Tables.

LOW-LEVEL TEST TABLES

Low-level Test Tables or Step Tables contain the detailed step-by-step instructions of the tests. Using the object names found in the Application Map and the vocabulary defined by the Component Functions, these tables specify what window, what component, and what action to take on the component. The columns in the Step Tables are as follows:

Ø Action Command

Ø Window Name

Ø Component Name

Ø Values Need to Perform the Specified Action

The StepDriver module initially parses and routes all low-level instructions that ultimately drive the application.

INTERMEDIATE-LEVEL TEST TABLES

Intermediate-level Test Tables or Suite Tables do not normally contain such low-level instructions. Instead, they typically combine Step Tables into Suites in order to perform more useful tasks. The same Step Table may be used in many Suites; in this way, the minimum number of Step Tables necessary is developed. The Step Tables are then organized into Suites according to the purpose and design of the tests, for maximum reusability. The columns in the Suite Tables are as follows:

Ø Step Table Name

Ø Specific Arguments to be Passed to the Step Tables

The Suite Tables are handled by the SuiteDriver module, which parses each record in the Suite Table and passes each Step Table to the StepDriver module for processing.

HIGH-LEVEL TEST TABLES

High-level Test Tables or Cycle Tables combine intermediate-level Suites into Cycles.

The Suites can be combined in different ways depending upon which testing Cycle is most efficient to execute. Each Cycle will likely specify a different type or number of tests. The columns in the Cycle Tables are as follows:

Ø Suite Table Name

Ø Specific Arguments to be Passed to the Suite Table

These Cycles are handled by the CycleDriver module which passes each Suite to SuiteDriver for processing.

CORE DATA DRIVEN ENGINE

The Core Data Driven Engine is the primary part of the framework. It has three main modules, as follows:

Ø StepDriver

Ø SuiteDriver

Ø CycleDriver

CycleDriver processes Cycles, which are high-level tables listing Suites of tests to execute. CycleDriver reads each record from the Cycle Table, passing SuiteDriver each Suite Table it finds during this process. SuiteDriver processes these Suites, which are intermediate-level tables listing Step Tables to execute. SuiteDriver reads each record from the Suite Table, passing StepDriver each Step Table it finds during this process. The following figure represents the Core Data Driven Engine.

[Figure: Core Data Driven Engine]

StepDriver processes the Step Tables, which are records of low-level instructions written in the keyword vocabulary of the Component Functions. StepDriver parses these records and performs some initial error detection, correction, and synchronization, making certain that the window and/or component to be manipulated is available and active. StepDriver then routes the complete instruction record to the appropriate Component Function for final execution.
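The three-level hand-off described above can be sketched as three tiny functions, with the tables represented as nested lists. Everything here is a toy illustration of the flow, not a real engine:

```python
# Toy sketch of the three-level engine: a Cycle Table is a list of Suite
# Tables, a Suite Table is a list of Step Tables, and each driver hands
# the next level down, as described above.
trace = []

def step_driver(step_table):
    for window, component, action, values in step_table:
        trace.append(f"{action} {component} in {window}")

def suite_driver(suite_table):
    for step_table in suite_table:
        step_driver(step_table)

def cycle_driver(cycle_table):
    for suite_table in cycle_table:
        suite_driver(suite_table)

login_steps = [("Login", "UserName", "InputText", "aman"),
               ("Login", "OkButton", "Click", "")]
cycle_driver([[login_steps]])  # one cycle -> one suite -> one step table
```

In a real engine, `step_driver` would perform the error detection and synchronization described above before dispatching each record to its Component Function.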

SUPPORT LIBRARIES

The Support Libraries are the general-purpose routines and utilities that let the overall automation framework do what it needs to do. They are the modules that provide services such as:

Ø File Handling

Ø String Handling

Ø Buffer Handling

Ø Variable Handling

Ø Database Access

Ø Logging Utilities

Ø System/Environment Handling

Ø Application Mapping Functions

Ø System Messaging or System API Enhancements and Wrappers

They also provide traditional automation tool scripts with access to the features of the automation framework, including the Application Map functions and the keyword-driven engine itself. Both can vastly improve the reliability and robustness of such scripts until they can be converted over to keyword-driven test tables.