Monday, December 28, 2009

Testing School--Web Application - All Checklists

Web Application - All Checklists
Web Application - Interface and Compatibility Checklist
Testing a web application is certainly different from testing a desktop or any other application. Within web applications, certain standards are followed in almost all applications. Having these standards makes life easier for us, because they can be converted into a checklist, and the application can then be tested easily against that checklist.
3. INTERFACE AND ERROR HANDLING
3.1 SERVER INTERFACE

3.1.1 Verify that communication is done correctly: web server to application server, application server to database server, and vice versa.
3.1.2 Verify compatibility of server software, hardware and network connections.
3.2 EXTERNAL INTERFACE

3.2.1 Have all supported browsers been tested?
3.2.2 Have all error conditions related to external interfaces been tested when external application is unavailable or server inaccessible?

3.3 INTERNAL INTERFACE

3.3.1 If the site uses plug-ins, can the site still be used without them?
3.3.2 Can all linked documents be supported/opened on all platforms (i.e. can Microsoft Word be opened on Solaris)?
3.3.3 Are failures handled if there are errors in download?
3.3.4 Can users use copy/paste functionality? Is paste allowed in password/CVV/credit card number fields?
3.3.5 Are you able to submit unencrypted form data?
3.4 FAILURE AND RECOVERY

3.4.1 If the system does crash, are the re-start and recovery mechanisms efficient and reliable?
3.4.2 If we leave the site in the middle of a task does it cancel?
3.4.3 If we lose our Internet connection does the transaction cancel?
3.4.4 Does our solution handle browser crashes?
3.4.5 Does our solution handle network failures between Web site and application servers?
3.4.6 Have you implemented intelligent error handling (e.g. for disabled cookies)?
4. COMPATIBILITY
4.1 BROWSERS

4.1.1 Is the HTML version being used compatible with appropriate browser versions?
4.1.2 Do images display correctly with browsers under test?
4.1.3 Verify that the fonts are usable in all browsers under test.
4.1.4 Are Java applets and JavaScript usable in the browsers under test?
4.1.5 Have you tested Animated GIFs across browsers?

4.2 VIDEO SETTINGS

4.2.1 Screen resolution: check that text and graphic alignment still work and fonts are readable at common resolutions such as 640x480, 800x600 and 1024x768 pixels.
4.2.2 Colour depth (256, 16-bit, 32-bit)

4.3 CONNECTION SPEED

4.3.1 Does the site load quickly enough in the viewer's browser, e.g. within 8 seconds?
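As a rough sketch of how a load-time budget like the one above could be checked from a script (the URL and the 8-second budget are placeholders, and this measures only the network fetch, not browser rendering):

```python
import time
import urllib.request

LOAD_BUDGET_SECONDS = 8.0  # the checklist's example budget; tune per project

def within_budget(elapsed: float, budget: float = LOAD_BUDGET_SECONDS) -> bool:
    """True if the measured load time meets the budget."""
    return elapsed <= budget

def timed_fetch(url: str) -> float:
    """Fetch a page and return elapsed wall-clock seconds.
    This times the HTTP fetch only; full page-load time (images,
    scripts, rendering) needs a browser-level tool."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start

# Usage (hypothetical URL):
# elapsed = timed_fetch("https://example.com/")
# assert within_budget(elapsed)
```

A real connection-speed test would repeat the measurement over throttled links of different speeds, as the checklist suggests.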

4.4 PRINTERS

4.4.1 Text and image alignment
4.4.2 Colours of text, foreground and background
4.4.3 Scalability to fit paper size
4.4.4 Tables and borders
4.4.5 Do pages print legibly without cutting off text?
Web Application - Functional Testing Checklist
Templates - Checklist Guidelines

1. FUNCTIONALITY
1.1 LINKS
1.1.1 Check that the link takes you to the page it said it would.
1.1.2 Ensure there are no orphan pages (pages that no other page links to).
1.1.3 Check all of your links to other websites
1.1.4 Are all referenced web sites or email addresses hyperlinked?
1.1.5 If some pages have been removed from the site, set up a custom 404 page that redirects visitors to the home page (or a search page) when they try to access a page that no longer exists.
1.1.6 Check all mailto links and verify that mail reaches the intended recipient.
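Parts of the link checks above can be automated. A minimal sketch using only the standard library extracts href targets from a page; checking each http(s) link's status code against the live site is the network step left to the tester:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags, including mailto: links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list:
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

page = '<a href="/about">About</a> <a href="mailto:admin@example.com">Mail</a>'
assert extract_links(page) == ["/about", "mailto:admin@example.com"]
```

Each extracted http(s) link can then be fetched (e.g. with urllib) and anything other than a 2xx/3xx status flagged as a broken link; mailto links still need a manual delivery check.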

1.2 FORMS

1.2.1 Acceptance of invalid input
1.2.2 Optional versus mandatory fields
1.2.3 Input longer than field allows
1.2.4 Radio buttons
1.2.5 Default values on page load/reload (also, the terms-and-conditions acceptance should be unselected by default)
1.2.6 Can command buttons be used for hyperlinks and Continue links?
1.2.7 Is all the data inside combo/list boxes arranged in a sensible (e.g. chronological) order?
1.2.8 Are all parts of a table or form present and correctly laid out? Can you confirm that selected text is in the right place?
1.2.9 Does a scrollbar appear if required?

1.3 DATA VERIFICATION AND VALIDATION

1.3.1 Is the Privacy Policy clearly defined and available for user access?
1.3.2 At no point should the system behave unpredictably when invalid data is entered
1.3.3 Check to see what happens if a user deletes cookies while in site
1.3.4 Check to see what happens if a user deletes cookies after visiting a site
2. APPLICATION SPECIFIC FUNCTIONAL REQUIREMENTS
2.1 DATA INTEGRATION

2.1.1 Check the maximum field lengths to ensure that no characters are truncated.
2.1.2 If numeric fields accept negative values, can these be stored correctly in the database, and does it make sense for the field to accept negative numbers?
2.1.3 If a particular set of data is saved to the database, check that each value is saved in full, i.e. beware of truncation of strings and rounding of numeric values.
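Checks 2.1.1–2.1.3 boil down to a save-and-read-back comparison. A minimal sketch using an in-memory SQLite scratch table (a real test must run against the application's actual database engine, since SQLite does not enforce VARCHAR lengths):

```python
import sqlite3

def round_trip(value):
    """Insert a value into a scratch table and read it back."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (v)")
    con.execute("INSERT INTO t VALUES (?)", (value,))
    (stored,) = con.execute("SELECT v FROM t").fetchone()
    con.close()
    return stored

max_len_input = "W" * 255  # hypothetical maximum field length
assert round_trip(max_len_input) == max_len_input  # no truncation
assert round_trip(-42) == -42                      # negatives survive
assert round_trip(3.14159) == 3.14159              # no rounding
```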

2.2 DATE FIELD CHECKS

2.2.1 Assure that leap years are validated correctly & do not cause errors/miscalculations.
2.2.2 Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/ miscalculations.
2.2.3 Is the copyright notice for all sites, including Yahoo co-branded sites, up to date?
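The date checks above (2.2.1 and 2.2.2) can be scripted with the standard library's calendar arithmetic; a small sketch:

```python
from datetime import date

def is_valid_date(year: int, month: int, day: int) -> bool:
    """True only if the combination is a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# 2.2.1: leap years validated correctly
assert is_valid_date(2008, 2, 29)      # 2008 is a leap year
assert not is_valid_date(2009, 2, 29)  # 2009 is not
assert not is_valid_date(1900, 2, 29)  # century rule: 1900 is not a leap year

# 2.2.2: Feb 28/29/30 handled correctly
assert is_valid_date(2009, 2, 28)
assert not is_valid_date(2008, 2, 30)  # Feb 30 never exists
```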

2.3 NUMERIC FIELDS

2.3.1 Assure that lowest and highest values are handled correctly.
2.3.2 Assure that numeric fields with a blank in position 1 are processed or reported as an error.
2.3.3 Assure that fields with a blank in the last position are processed or reported as an error.
2.3.4 Assure that both + and - values are correctly processed.
2.3.5 Assure that division by zero does not occur.
2.3.6 Include value zero in all calculations.
2.3.7 Assure that upper and lower values in ranges are handled correctly. (Using BVA)
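Item 2.3.7's boundary value analysis (BVA) generates six classic test inputs around a range. A sketch for a hypothetical field accepting 1..100:

```python
def boundary_values(low: int, high: int) -> list:
    """Classic BVA set: just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def in_range(value: int, low: int, high: int) -> bool:
    """The validation rule under test (inclusive range)."""
    return low <= value <= high

cases = boundary_values(1, 100)
assert cases == [0, 1, 2, 99, 100, 101]
# Expected accept/reject pattern for an inclusive 1..100 field:
assert [in_range(v, 1, 100) for v in cases] == [False, True, True, True, True, False]
```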

2.4 ALPHANUMERIC FIELD CHECKS

2.4.1 Use blank and non-blank data.
2.4.2 Include lowest and highest values.
2.4.3 Include invalid characters & symbols.
2.4.4 Include valid characters.
2.4.5 Include data items with first position blank.
2.4.6 Include data items with last position blank.
Web Application UI Checklist
Templates - Checklist Guidelines
Testing user interface for web application is slightly different from testing user interface of traditional applications. Irrespective of the web application there are certain things which should be tested for every web application. Following checklist will give some information on items that should be tested to ensure quality of the user interface of your web application.

1.1 COLORS
1.1.1 Are hyperlink colors standard?
1.1.2 Are the field backgrounds the correct color?
1.1.3 Are the field prompts the correct color?
1.1.4 Are the screen and field colors adjusted correctly for non-editable mode?
1.1.5 Does the site use (approximately) standard link colors?
1.1.6 Are all the buttons in a standard format and size?
1.1.7 Is the general screen background the correct color?
1.1.8 Is the page background (color) distraction free?

1.2 CONTENT

1.2.1 All fonts to be the same
1.2.2 Are all the screen prompts specified in the correct screen font?
1.2.3 Does content remain if you need to go back to a previous page, or if you move forward to another new page?
1.2.4 Is all text properly aligned?
1.2.5 Is the text in all fields specified in the correct screen font?
1.2.6 Are all headings left-aligned?
1.2.7 Does the first letter of the second word appear in lowercase?

1.3 IMAGES

1.3.1 Are all graphics properly aligned?
1.3.2 Is each graphic an efficient use of file size?
1.3.3 Are graphics optimized for quick downloads?
1.3.4 Assure that command buttons are all of similar size and shape, and same font & font size.
1.3.5 Banner style & size & display exact same as existing windows
1.3.6 Does text wrap properly around pictures/graphics?
1.3.7 Is it visually consistent even without graphics?

1.4 INSTRUCTIONS

1.4.1 Is all error message text spelt correctly on this screen?
1.4.2 Is all micro-help text (i.e. tool tips) spelt correctly on this screen?
1.4.3 Is micro-help text (tool tip) present for every enabled field and button?
1.4.4 Are progress messages shown on load of tabbed (active) screens?

1.5 NAVIGATION

1.5.1 Are all disabled fields avoided in the TAB sequence?
1.5.2 Are all read-only fields avoided in the TAB sequence?
1.5.3 Can all screens accessible via buttons on this screen be accessed correctly?
1.5.4 Does a scrollbar appear if required?
1.5.5 Does the Tab Order specified on the screen go in sequence from Top Left to bottom right? This is the default unless otherwise specified.
1.5.6 Is there a link to home on every single page?
1.5.7 On open of tab focus will be on first editable field
1.5.8 When an error message occurs does the focus return to the field in error when the user cancels it?

1.6 USABILITY

1.6.1 Are all the field prompts spelt correctly?
1.6.2 Are fonts too large or too small to read?
1.6.3 Are command button and option box names unabbreviated?
1.6.4 Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas "Group Box"
1.6.5 Can the typical user run the system without frustration?
1.6.6 Do pages print legibly without cutting off text?
1.6.7 Does the site convey a clear sense of its intended audience?
1.6.8 Does the site have a consistent, clearly recognizable "look-&-feel"?
1.6.9 Can the user log in to the member area with both username and email ID?
1.6.10 Does the site look good at 640x480, 800x600, etc.?
1.6.11 Does the system provide or facilitate customer service, i.e. is it responsive, helpful and accurate?
1.6.12 Is all terminology understandable to all of the site's intended users?
Performance & Security Testing Checklist
Templates - Checklist Guidelines
Creating checklists for performance & security is extremely important. This checklist helps in better defining performance and security requirements. In the absence of properly defined performance & security testing requirements, teams can spend a great deal of time on things that probably do not matter much.
1.1 LOAD
1.1.1 Many users requesting a certain page at the same time or using the site simultaneously
1.1.2 Increase the number of users and keep the data constant
1.1.3 Does the home page load quickly, e.g. within 8 seconds?
1.1.4 Is load time appropriate to content, even on a slow dial-in connection?
1.1.5 Can the site sustain long periods of usage by multiple users?
1.1.6 Can the site sustain long periods of continuous usage by 1 user?
1.1.7 Is page loading performance acceptable over modems of different speeds?
1.1.8 Does the system meet its goals for response time, throughput, and availability?
1.1.9 Have you defined standards for response time (i.e. all screens should paint within 10 seconds)?
1.1.10 Does the system operate in the same way across different computer and network configurations, platforms and environments, with different mixes of other applications?
1.2 VOLUME
1.2.1 Increase the data by having constant users
1.2.2 Will the site allow for large orders without locking out inventory if the transaction is invalid?
1.2.3 Can the site sustain large transactions without crashing?
1.3 STRESS
1.3.1 Increase both number of users and the data
1.3.2 Performance of memory, CPU, file handling etc.
1.3.3 Error in software, hardware, memory errors (leakage, overwrite or pointers)
1.3.4 Is the application or certain features going to be used only during certain periods of time or will it be used continuously 24 hours a day 7 days a week? Test that the application is able to perform during those conditions. Will downtime be allowed or is that out of the question?
1.3.5 Verify that the application is able to meet the requirements and does not run out of memory or disk space.
1.4 SECURITY
1.4.1 Is confidentiality/user privacy protected?
1.4.2 Does the site prompt for user name and password?
1.4.3 Are there Digital Certificates, both at server and client?
1.4.4 Have you verified where encryption begins and ends?
1.4.5 Are concurrent log-ons permitted?
1.4.6 Does the application include time-outs due to inactivity?
1.4.7 Is bookmarking disabled on secure pages?
1.4.8 Does the key/lock display on status bar for insecure/secure pages?
1.4.9 Is Right Click, View, Source disabled?
1.4.10 Are you prevented from doing direct searches by editing content in the URL?
1.4.11 If using Digital Certificates, test the browser cache by enrolling for the certificate and completing all of the required security information. After completing the application and installing the certificate, use the Back/Backspace key to see if that security information is still residing in the cache. If it is, then any user could walk up to the PC and access highly sensitive Digital Certificate security information.
1.4.12 Is there an alternative way to access secure pages for browsers under version 3.0, since SSL is not compatible with those browsers?
1.4.13 Do your users know when they are entering or leaving secure portions of your site?
1.4.14 Does your server lock out an individual who has tried to access your site multiple times with invalid login/password information?
1.4.15 Test both valid and invalid login names and passwords. Are they case sensitive? Is there a limit to how many tries that are allowed? Can it be bypassed by typing the URL to a page inside directly in the browser?
1.4.16 What happens when the time-out is exceeded? Are users still able to navigate through the site?
1.4.17 Verify that relevant information is written to the log files and that the information is traceable.
1.4.18 In SSL verify that the encryption is done correctly and check the integrity of the information.
1.4.19 Verify that it is not possible to place or edit scripts on the server without authorisation.
1.4.20 Have you tested the impact of Secure Proxy Server?
1.4.21 Test should be done to ensure that the Load Balancing Server is taking the session information of Server A and pooling it to Server B when A goes down.
1.4.22 Have you verified the use of 128-bit Encryption?
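Items 1.4.14 and 1.4.15 describe locking out a user after repeated invalid logins. A minimal model of the expected behaviour (the threshold of 3 is a hypothetical choice; a real test drives the actual login form and checks the server's response):

```python
MAX_ATTEMPTS = 3  # hypothetical lockout threshold

class LoginTracker:
    """Tracks failed logins per user and locks after MAX_ATTEMPTS."""
    def __init__(self, max_attempts: int = MAX_ATTEMPTS):
        self.max_attempts = max_attempts
        self.failures = {}

    def record_failure(self, user: str) -> None:
        self.failures[user] = self.failures.get(user, 0) + 1

    def is_locked(self, user: str) -> bool:
        return self.failures.get(user, 0) >= self.max_attempts

tracker = LoginTracker()
for _ in range(3):
    tracker.record_failure("alice")
assert tracker.is_locked("alice")
assert not tracker.is_locked("bob")
# Case sensitivity (1.4.15): "Alice" is tracked separately from "alice".
assert not tracker.is_locked("Alice")
```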
GUI Testing Checklist
Templates - Checklist Guidelines
The purpose of this GUI Testing Checklist is to help you understand how your application can be tested against known and understood GUI standards. This checklist can give guidance to both the development and QE teams. The development team can make sure that during development they follow guidelines related to compliance, aesthetics, navigation, etc., but the onus of testing the GUI is on the QE team, and as a tester it is your responsibility to validate your product against the GUI standards followed by your organization.
This GUI test checklist can ensure that all the GUI components are thoroughly tested. In the first part of this checklist, we will cover Windows compliance standard and some test ideas for field specific tests.
Windows Compliance Standards
These compliance standards are followed by almost all Windows-based applications. Any variance from them can inconvenience the user, so this compliance must be followed in every application. The compliance checks can be categorized according to the following criteria:
1. Compliance for each application
1. Application should be started by double clicking on the icon.
2. Loading message should have information about application name, version number, icon etc.
3. Main window of application should have same caption as the icon in the program manager.
4. Closing of the application should result in “Are you sure?” message.
5. Behaviour for starting application more than once must be specified.
6. Try to start application while it is loading
7. If the application is busy, it should show an hourglass or some other mechanism to notify the user that it is processing.
8. Normally F1 button is used for help. If your product has help integrated, it should come by pressing F1 button.
9. Minimize and restoring functionality should work properly
2. Compliance for each window in the application
1. The window caption for every window should have the application name and the window name; this applies especially to error messages.
2. Title of the window and information should make sense to the user.
3. If screen has control menu, use the entire control menu like move, close, resize etc.
4. Text present should be checked for spelling and grammar.
5. If tab navigation is present, TAB should move focus in forward direction and SHIFT+TAB in backward direction.
6. Tab order should be left to right and top to bottom within a group box.
7. If focus is on any control, it should be indicated by a dotted outline around it.
8. User should not be able to select greyed or disabled control. Try this using tab as well as mouse.
9. Text should be left justified
10. In general, every operation should have a corresponding keyboard shortcut.
11. All tab buttons should have a distinct access letter.
3. Text boxes
1. Move the mouse over the textbox: the pointer should change to an insert bar for editable text fields and remain unchanged for non-editable text fields.
2. Test overflowing textbox by inserting as many characters as you can in the text field. Also test width of the text field by entering all capital W.
3. Enter invalid characters, special characters and make sure that there is no abnormality.
4. User should be able to select text using Shift + arrow keys. Selection should be possible using mouse and double click should select entire text in the text box.
4. Radio buttons
1. Only one should be selected from the given option.
2. User should be able to select any button using mouse or key board
3. Arrow key should set/unset the radio buttons.
5. Check boxes
1. User should be able to select any combination of checkboxes
2. Clicking mouse on the box should set/unset the checkbox.
3. Spacebar should also do the same
6. Push buttons
1. All buttons except OK/Cancel should have a letter access key, indicated by an underlined letter in the button text. The button should be activated by pressing ALT plus that letter.
2. Clicking each button with mouse should activate it and trigger required action.
3. Similarly, after giving focus SPACE or RETURN button should also do the same.
4. If there is any Cancel button on the screen, pressing Esc should activate it.
7. Drop-down list boxes
1. Pressing the arrow should give list of options available to the user. List can be scrollable but user should not be able to type in.
2. Pressing Ctrl-F4 should open the list box.
3. Pressing a letter should bring the first item in the list starting with the same letter.
4. Items should be in alphabetical order in any list.
5. Selected item should be displayed on the list.
6. There should be only one blank space in the dropdown list.
8. Combo boxes
1. Similar to the list mentioned above, but user should be able to enter text in it.
9. List boxes
1. Should allow single select, either by mouse or arrow keys.
2. Pressing any letter should take you to the first element starting with that letter
3. If there are View/Open buttons, double-clicking on an icon should be mapped to the same behaviour.
4. Make sure that all the data can be seen using scroll bar.
Hopefully this checklist will help you test your GUI components in a better way. In the next template, TestingGeek will discuss field-specific tests and the usage of shortcuts in the application GUI.

Sunday, December 27, 2009

Testing School--Testing Cookies in Web Applications

Testing Cookies in Web Applications
A "cookie" is a small piece of information sent by a web server to be stored by a web browser, so it can later be read back from that browser. This is useful for having the browser remember some specific information. Cookies are small data files which act as unique identifiers and allow a site to remember a particular user. Cookies do not harm the computer. Certain areas of a web site, such as forums, use cookies. Sometimes a user's personal information is stored in cookies, and if someone steals the cookie, the attacker can gain access to that personal information. Even corrupted cookies can be read by different domains and lead to security issues. This is why testing of website cookies is very important.

In this white paper, we will focus on the basics of the cookie world and also on how to test website cookies.
INTRODUCTION
In today's world we use websites for numerous activities, like shopping and travel ticket booking, and here the word "cookie" enters the picture. Almost everywhere, cookies are used to store the information sent by web servers.

So, we will first focus on what exactly cookies are and how they work. What are cookies?

A cookie is a small piece of information stored in a text file on the user's hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages, for example when a browser stores your passwords and user IDs. Cookies are also used to store start-page preferences; both Microsoft and Netscape use cookies to create personal start pages.

Cookies also carry the user's identity and are used to track where the user navigated throughout the web site's pages. Why cookies?

The communication between web browser and web server is stateless. For example, if you access http://www.example.com/1.html, the web browser simply queries the example.com web server for the page 1.html. If you next type http://www.example.com/2.html, a new request is sent to the example.com web server for the 2.html page, and the web server knows nothing about who was served the previous page 1.html.

What if you want the history of this user's communication with the web server? You need to maintain the user's state and the interaction between web browser and web server somewhere. This is where cookies come into the picture: they serve the purpose of maintaining the user's interactions with the web server.
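The round trip can be sketched with the standard library's cookie parser: the server's first response carries a Set-Cookie header, the browser stores it, and replays it as a Cookie header on the next request. The session_id name and value here are hypothetical:

```python
from http.cookies import SimpleCookie

# Server side: the first response sets a session cookie.
response = SimpleCookie()
response["session_id"] = "abc123"  # hypothetical session token
set_cookie_line = response.output(header="Set-Cookie:")

# Client side: the browser stores the cookie and sends it back on the
# next request -- this is what layers state on top of stateless HTTP.
jar = SimpleCookie()
jar.load(set_cookie_line.replace("Set-Cookie:", "").strip())
cookie_header = "Cookie: " + "; ".join(
    f"{name}={morsel.value}" for name, morsel in jar.items())

assert cookie_header == "Cookie: session_id=abc123"
```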
How do cookies work?
To exchange information on the web, the HTTP protocol is used. There are two modes of operation: stateless HTTP and stateful HTTP.
Stateless system
A stateless system has no record of previous interactions, and each request has to be handled based entirely on the information that comes with it. For example, if we enter http://www.example.com/sample.html into our web browser's address bar and press Enter, the conversation between the browser and the example.com web server goes like this: the web browser simply queries the example.com web server for the page sample.html.

Once the browser receives the last byte of the response over HTTP, the example.com web server essentially forgets about the request. If we now send some other request to the web server, it executes that request with no memory of the earlier one; it does not need the earlier request to produce the new response. For a site like example.com this isn't bad; no harm, no foul.
Stateful system
Are there cases where state does matter for a web-based system? The answer is YES, and here the stateful system comes in. Stateful HTTP keeps some history of previous web browser and web server interactions, and this is the mechanism cookies use to maintain user interactions.

Whenever the user visits a site or page that uses a cookie, a small piece of code inside that HTML page writes a text file, called a cookie, onto the user's machine. When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the return visit of the same user to that domain. An expiration time is set while writing the cookie; this time is decided by the application that is going to use the cookie.
Applications where cookies are used
• Online Ordering Systems: An online ordering system could be developed using cookies that remember what a person wants to buy. This way, if a person spends three hours ordering CDs at your site and suddenly has to get off the net, they could quit the browser, return weeks or even years later, and still have those items in their shopping basket.
• Website Tracking: Site tracking can show you places in your website that people go to and then wander off because they don't have any more interesting links to hit. It can also give you more accurate counts of how many people have been to pages on your site.
• Shopping: Cookies are used for maintaining the online ordering system; they remember what the user wants to buy. If the user adds some products to their shopping cart and closes the browser window, then the next time the same user visits the purchase page, they can see all the products they added to the cart on their last visit.
• Marketing: Some companies use cookies to display advertisements on user machines. Cookies control these advertisements.
• UserIds: Cookies can track user sessions to particular domain using user ID and password.
Death of a cookie!
When a web server sets a cookie on the system, it can optionally give it a "death" (expiration) date. When that date is reached, the cookie is deleted from the system.

If the web server does not give an expiration date to a cookie, the cookie is a per-session cookie. Per-session cookies are deleted as soon as you close the current browser session. So if the cookie has no expiration date, then as soon as the browser is closed, the cookie is no longer on your system.
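The difference shows up directly in the Set-Cookie attributes: a persistent cookie carries an Expires (or Max-Age) attribute, while a per-session cookie omits it. A sketch with hypothetical cookie names and dates:

```python
from http.cookies import SimpleCookie

persistent = SimpleCookie()
persistent["prefs"] = "darkmode"  # hypothetical cookie
persistent["prefs"]["expires"] = "Fri, 31 Dec 2010 23:59:59 GMT"

session_only = SimpleCookie()
session_only["cart"] = "cd42"     # hypothetical cookie, no expiry set

# With an Expires date the cookie survives browser restarts until that
# date; without one it is deleted when the browser session ends.
assert persistent["prefs"]["expires"] != ""
assert session_only["cart"]["expires"] == ""
```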
Browser Cookie Settings
Listed below are examples of the steps taken to view your browser's cookies settings:
Changing cookie settings for Mozilla Firefox 1.5 (Adapted from the Firefox 1.5 integrated help system)
By default Firefox 1.5 accepts all cookies, including cookies which would allow a site to recognize you effectively forever. If you want to grant sites you trust the ability to store cookies permanently:

Click Exceptions

Enter the site address (In this case it would be americanadoptions.com)

Click Allow.
Changing cookie settings for Internet Explorer 7
Click on the Tools menu and then click Internet Options

Click the Privacy tab, and then click Sites.
Type americanadoptions.com in the Address of Web site field.

Click Allow to always allow cookies from americanadoptions.com
Changing cookie settings for Internet Explorer 6
Click on the Tools menu and then click Internet Options

Click the Privacy tab, and then click Sites.

Type americanadoptions.com in the Address of Web site field.

Click Allow to always allow cookies from americanadoptions.com
Changing cookie settings for Netscape 6
Click Edit Menu

Click Preferences

Select Privacy & Security

Select Cookies

To view your cookie settings on a browser not listed above, refer to your browser's documentation.
Drawbacks of cookies
• Loss of site traffic: If the user has set browser options to warn before writing any cookie, or has disabled cookies completely, a site that depends on cookies is effectively disabled and cannot perform its operations, and this results in loss of site traffic.
• Loads of cookies: If too many cookies are written on every page navigation and the user has turned on the option to warn before writing cookies, this could turn the user away from the web site, resulting in loss of site traffic and eventually loss of business.
• Valuable hard drive space: Cookies take up valuable hard drive space, so it may be to your advantage to delete a few on occasion, especially third-party cookies. Third-party cookies are placed on your computer by sites you haven’t visited. They usually come from companies who place ads on sites you have visited. Luckily, most browsers give you the option of rejecting only third-party cookies.
• Security: Some times user’s personal information is stored in cookies and if someone hacks the cookie then hacker can get access to your personal information. Even corrupted cookies can be read by different domains and lead to security issues. Some sites may write and store your sensitive information in cookies, which should not be allowed due to privacy concerns.
Cookie Testing
Now when we know the basics of cookie world, let’s address how to test sites that use cookies.
Disabling Cookies
This is probably the easiest cookie test. What happens when all cookies are disabled? Start like this: close all browsers and delete all cookies from the PC.
Now open a website that uses cookies and perform its major functions. Most of the time these will not work, because cookies are disabled. This isn't a bug: disabling cookies on a site that requires cookies disables the site's functionality.

Is it obvious to the website user that cookies must be enabled? When the web server recognizes that requests are being made with cookies disabled, does it send a page with a clear message that cookies need to be enabled?

There should not be any page crash due to disabling the cookies.
Selectively rejecting cookies
What happens when some cookies are accepted and some are rejected? If the web application uses 10 cookies, randomly accept some of them, say accept 5 and reject 5. To execute this test case, set browser options to prompt whenever a cookie is being written to disk, delete all previously saved cookies, close all open browsers and then start the test. Try to access the major functionality of the web site, and at each prompt either accept or reject the cookie. Watch what happens: do pages crash or does data get corrupted?
Corrupting cookies
This is the test that will really exercise the site! For this we need to know which cookies the web site saves and what information is stored in those text files. Manually edit the cookie in Notepad and change its parameters to some vague values: for example, change the content of the cookie or the name of the cookie, and then perform actions on the website. In some cases a corrupted cookie allows its data to be read by another domain; this should not happen with your web site's cookies. Note that cookies written by one domain, say rediff.com, cannot be accessed by another domain, say yahoo.com.
Cookie Encryption
There are websites where we have no option other than saving sensitive data in a cookie. Here it needs to be tested that the data stored in the cookie is stored in an encrypted format.
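A test for this can compare the cookie's stored value against the sensitive value itself, remembering that base64 encoding is not encryption. A hedged sketch (the password and the digest choice are illustrative, not the site's real scheme):

```python
import base64
import hashlib

def stores_plaintext(cookie_value: str, sensitive: str) -> bool:
    """Flag a cookie that holds the sensitive value verbatim or merely
    base64-encoded (encoding is reversible, so it fails the check)."""
    if sensitive in cookie_value:
        return True
    try:
        return base64.b64decode(cookie_value).decode("utf-8") == sensitive
    except Exception:
        return False

password = "hunter2"  # hypothetical sensitive value
bad_cookie = base64.b64encode(password.encode()).decode()    # reversible
good_cookie = hashlib.sha256(password.encode()).hexdigest()  # one-way digest

assert stores_plaintext(password, password)
assert stores_plaintext(bad_cookie, password)
assert not stores_plaintext(good_cookie, password)
```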
Deletion of cookies
Access a website and allow it to write cookie. Now close all the browsers and manually delete the cookies. Again open the same website and try to work on it. Is it crashing?

Sometimes a cookie written by a domain, say ABC.com, may be deleted by the same domain but from a different page under that domain. This is a common case if you are testing an 'action tracking' web portal. Action tracking or purchase tracking is placed on the action web page, and when any action or purchase is made by the user, the cookie written to disk is deleted to avoid multiple actions being logged from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases are logged from the same user.
Multi Browser testing
This is an important case: check whether the web application's pages write cookies properly on different browsers, and whether the web site works properly using these stored cookies.
CONCLUSION
Cookies shouldn't be put in the same category as the viruses, spam, or spyware that are often created to wreak havoc on computers. They are mostly benign tools that help you manage your time more efficiently on the Web. Plus, you have total control over them if you think your privacy is being violated, so accept or reject cookies as you see fit. Testing should be done properly to check that the website works with different cookie settings. Amazon.com is a very good demonstration of good-quality cookie usage.

Monday, October 26, 2009

Testing School--What is test strategy?

What is test strategy?


1. Test Strategy Identifier

The unique identifier for this Test Strategy is:

2. Introduction

2.1. Purpose
The purpose of this Test Strategy is to define the overall approach that will be taken by the QA Team when delivering testing services to all of the projects within the business.
The document helps to clarify the testing activities, roles and responsibilities, and the processes and practices to be used across successive projects.
Where a project’s testing needs deviate from what is covered by this Test Strategy the exceptions will be detailed in the Test Plan.
3. Test Items
For each Release the QA Engineer will create a table of Test Items that will be in scope of the testing being planned. These will be identified from the Scope Items in a given Release and include interrelated modules and components of the service that will be affected by the Scope Items.
In addition the QA Engineer will record any Test Items that cannot be tested by the test team. The Test Plan will contain Test Items that are In-Scope and Out-of-Scope.
4. Test Objectives
Describe the objective of testing. Testing should ensure that future business processes, together with the enabling technology, provide the expected business benefits.

Testing Objective could include:

• Verify products against their requirements (i.e. was the product built right?)
• Validate that the product performs as expected (i.e. was the right product built?)
• Ensure system components and business processes work end-to-end
• Build a test model that can be used on an ongoing basis
• Identify and resolve issues and risks.

5. Identify Test Types:
Describe the types of tests to be conducted to verify that requirements have been met and to validate that the system performs satisfactorily. Consider the types of tests below:
Unit Testing: Testing conducted to verify the implementation of the design for one software element (e.g., a unit or module).
Integration Testing: An orderly progression of testing in which software elements, hardware elements, or both are combined and tested until the entire system is integrated and tested.
System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system.
Performance Testing: Performed to confirm that the system meets performance goals such as turnaround times, maximum delays, peak performance, etc.
Volume Testing: Tests the system to verify that it can handle an expected volume profile.
Stress Testing: Tests the entire system to find the limits of performance.
Configuration Testing: Tests the product over all the possible configurations on which it is supposed to run.
Operational Readiness Testing: Tests the system to find defects that would prevent installation and deployment by the users.
Data Conversion and Load Testing: Performed to verify the correctness of automated or manual conversions and/or loads of data in preparation for implementing the new system.

6. Scope of Testing
Describe the scope of testing. Consider the following when defining scope:

• Test both business processes and the technical solution
• Specify regions and sub-regions included in testing
• Identify interfaces with other projects
• Identify interfaces with external entities such as dealers, suppliers, and joint ventures

7. Test preparation and execution process
7.1 Test Preparation
Describe the steps for preparing for testing. The purpose of Test Preparation is to verify that requirements are understood and prepare for Test Execution. Steps for Test Preparation may include:
• Identify test cases
• Identify test cycles
• Identify test data
• Develop expected results
• Develop test schedule (may be done as a part of Test Plan)
• Obtain signoff

7.2 Test Execution
Describe the steps for executing tests. The purpose of Test Execution is to execute the test cycles and test cases created during the Test Preparation activity, compare actual results to expected results, and resolve any discrepancies. Steps for Test Execution may include:
• Verify entry criteria
• Conduct tests
• Compare actual results to expected results
• Investigate and resolve discrepancies
• Conduct regression test
• Verify exit criteria
• Obtain signoff

8. Test Data Management

Describe the approach for identifying and managing test data. Consider the following guidelines:
•System and user acceptance tests – a subset of production data could be used to initialize the test environment. Because the focus of these tests is to simulate the production environment and validate business transactions, data integrity is extremely critical.
•Performance/volume/stress test – full size production files should be used to test the performance and volume aspects of the test. Additional ‘dummy’ data will be created to stress the system. Data integrity is not critical, as the test focuses on performance rather than the ability to conduct business transactions.
•Operational readiness test – a copy of system/user acceptance test data could be used for the operational readiness test. Since the focus of the test is on operational procedures, a low number of transactions will be required and data integrity is not critical.
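A hypothetical sketch of generating the 'dummy' records mentioned above for padding a volume/stress test database. The field names and record shape are invented for illustration; as the guideline notes, data integrity does not matter here, only volume.

```python
import random
import string

# Sketch: generate throwaway records to pad a volume/stress test dataset.
def dummy_records(count, name_len=8):
    """Return `count` records with random, meaningless field values."""
    return [
        {'id': i, 'name': ''.join(random.choices(string.ascii_lowercase, k=name_len))}
        for i in range(count)
    ]

records = dummy_records(1000)
print(len(records))  # 1000
```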

9. Features to be tested
The QA Engineer will use the Test Breakdown worksheet (ref#) to record all of the features to be tested for each of the Test Items in scope.
The Test Breakdowns will include details of the Test Scenarios from which the Test Cases will be derived.

10. Features not to be tested


Where it is not possible for the team to test features of a Test Item that would otherwise be expected to be tested, or that fall under the scope of testing shown in section 15, Testing Tasks, this will be recorded in section 5 of the Test Plan.

11. Approach
All testing tasks will be conducted in line with the Software Test Life Cycle (STLC) and in support of the Software Development Life Cycle (SDLC). The documents used
within the SDLC will be completed both by the QA Team and the project participants that are responsible for providing information and deliverables to QA.

It should be decided at the start of the project if there will be a Post Implementation Review after project delivery and this should be conducted within two weeks of project completion.







11.1. Analysis & Planning Phase Entry Criteria
For all projects the following criteria need to be met before the Test Items are accepted into the Analysis & Planning Phase:
•Release scope item list is locked and prioritized
•Documentation defining the scope items are approved and at release status
•All documents are under change control processes

11.2. Analysis & Planning Phase Exit Criteria
For the Analysis & Planning phase to be completed and allow items to move into the Test Phase the following criteria need to be achieved:
•Test Breakdowns and Test Cases are written and peer reviewed
•Knowledge Share document has been completed and reviewed by the QA Engineers
•Walkthrough and sign-off completed for the Test Plan and Test Breakdowns
•Defined Test Estimate has been published and agreed
•The list of features in the Test Breakdown has been prioritized.

11.3. Test Phase Entry Criteria
Before Test Items are made available for QA to test it’s expected that:
•The Test Item Transmittal Report will be completed
•All test tools and test infrastructure are available for use during testing
•All Test Items are development complete
•The correct versions of the code have been deployed to the correct test environments
•Sanity and Unit tests have been completed successfully to demonstrate readiness for test
•All Test Cases have been prepared and reviewed
•The test environment has been established
•The build has been received from Development.

11.4. Test Phase Exit Criteria
For the Test Items to exit testing the following conditions will have to be met:
•The Test Summary Report will be completed.
•All planned testing activities have been completed to agreed levels.
•All high priority bugs have been fixed, retested and passed.
•No defects must be left in an open unresolved status.

11.5. Change Management
The Build Manager will ensure that once testing begins no changes or modifications are made to the code used to create the build of the product under test. The Build
Manager will inform QA against which version testing will begin and confirm the location within [VSS/Progress/Perforce/Subversion] the build is to be taken
from. If changes or modifications are necessary through bug resolution or for any other reason the Build Manager will inform QA prior to the changes being made.

11.6. Notification / Escalation Procedures
The following diagram shows the notification and escalation paths to be followed for the duration of the project Test Phase.

11.7. Measures and Metrics
At the Initiation Phase of the project the QA Team will publish a set of measures and metrics related to the test activities of their Planning & Analysis and Execution phases. The Test Plan also defines the milestone dates for key deliverables, and these are captured as metrics for ongoing statistical process analysis across successive projects.

Test Preparation
• Number of Test Scenarios v. Number of Test Cases
• Number of Test Cases Planned v. Ready for Execution
• Total time spent on Preparation v. Planned time

Test Execution and Progress
• Number of Tests Cases Executed v. Test Cases Planned
• Number of Test Cases Passed, Failed and Blocked
• Total Number of Test Cases Passed by Test Item / Test Requirements
• Total Time Spent on Execution vs Planned Time
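The execution-progress metrics above can be computed mechanically from raw test outcomes. A small illustrative sketch follows; the result labels are my own, not a prescribed report format.

```python
from collections import Counter

# Sketch: compute the execution-progress metrics from a list of case outcomes.
def execution_metrics(results, planned):
    """results: outcomes ('passed'/'failed'/'blocked') of executed test cases."""
    counts = Counter(results)
    return {
        'executed_vs_planned': f"{len(results)}/{planned}",
        'passed': counts['passed'],
        'failed': counts['failed'],
        'blocked': counts['blocked'],
        'pass_rate': round(counts['passed'] / len(results) * 100, 1) if results else 0.0,
    }

print(execution_metrics(['passed', 'passed', 'failed', 'blocked'], planned=10))
```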

Bug Analysis
• Total Number of Bugs Raised and Closed per Test Run
• Total Number of Bugs Closed v. Total Number of Bugs Re-Opened
• Bug Distribution Totals by Severity per Test Run
• Bug Distribution Totals by Test Item by Severity per Test Run

12. ‘Pass/Fail’ Criteria
Each Test Item will be assigned a Pass or Fail state dependent on two criteria:
•The total number and severity of bugs in an Open & Unresolved state within Bugzilla/Bug Tracker.
•The level of successfully executed test requirements.
The combination of both criteria will be used to determine whether the Test Item can be declared Test Complete. However, as this is the minimum level of quality believed achievable, it is recommended that, where project timescales allow, further testing and development be conducted to raise the overall quality level.

Table of Issue Severity
Severity | Definition | Maximum Allowable
S1 | Crash/Legal – system crash, data loss, no workaround, legal issue; a 'Ship Killer' | 0
S2 | Major – operational error, wrong result |
S3 | Minor – minor problems |
S4 | Incidental – cosmetic problems |
S5 | N/A – not applicable; used for feature requests and Development Tasks | Reference Only
Maximum Allowable is the total MAXIMUM number of issues recorded in Bugzilla / Bug Tracker that can remain in an Open & Unresolved state for the Test Item and still be acceptable for release.

Table of Test Scenario Priority
Test Scenario Definition Minimum Pass Rate
P1 – Critical Essential to the Product 100%
P2 – Important Necessary to the Product
P3 – Desirable Preferred, but not essential to the Product
The MINIMUM set of Test Scenarios that must pass before the Test Item can be considered for release.
Unforeseen issues arising during the Test Phase may impact the agreed ‘Pass/Fail’ Criteria for the Test Item. Issues can be managed through review with the QA Team and the project authorities.

13. Suspension Criteria & Resumption Requirements
Testing of Test Items will be suspended if there are problems in the test environment, a show-stopper defect is detected in the build, or the number of pending defects grows too large.
1a) Suspension criteria:
A Severity 1 issue is logged and requires fixing before further testing can take place (a Blocking Issue)
1b) Resumption requirement:
The issue will need to be fixed before the Test Item is returned to QA for testing.

2a) Suspension criteria:
Significant differences exist between the observed behavior of the Test Item and that shown in the Test Scenario or Test Case, or that expected from the previous version of the technology.
2b) Resumption requirement:
Development, QA and PM must agree on a resolution of the issue and on a definition of the expected behavior.

3a) Suspension criteria:
A Test Item sent for testing fails more than 20% of Developer Unit Tests.
3b) Resumption requirement:
The Test Item must be fixed, or the Unit Tests re-factored if out of date, and then demonstrated to pass with a failure rate below 20%.
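The 20% threshold in criteria 3a/3b can be expressed as a tiny helper. Treating "no unit tests run" as not ready is my own assumption, not stated in the strategy.

```python
# Sketch of the 20% unit-test failure gate from suspension criterion 3a/3b.
def unit_test_gate(passed, failed, threshold=0.20):
    """True if the Test Item's unit-test failure rate is below the threshold."""
    total = passed + failed
    if total == 0:
        return False  # assumption: no unit-test evidence means not ready
    return failed / total < threshold

print(unit_test_gate(passed=90, failed=10))   # True  (10% failures)
print(unit_test_gate(passed=70, failed=30))   # False (30% failures)
```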

14. Test Deliverables
The following artifacts will be produced during the testing phase:

• Test Plan
Used to prescribe the scope, approach, resources, and schedule of the testing activities, and to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.

• Test Schedule
This describes the tasks, time, sequence, duration and assigned staff.

• Test Breakdown
This includes the Test Scenarios, their priority and the related number of Test Cases, along with the estimates of the time to write and execute the Test Cases.

• Test Cases
Detail the pre-conditions, test steps and expected and actual outcome of the tests. There will be positive and negative test cases.

• Periodic progress and metric update reports
• Bug Reporting
• Test Summary Reports

15. Testing Tasks
The Testing Tasks that the QA Team will deliver cover the following scope:
•Fully In Scope: Functional and Regression Testing
•Partially in Scope: Cross Browser Compatibility, Integration in the Large.
•Out of Scope: Performance testing, Automated Regression, all forms of Non-Functional, Accessibility Compliance Testing, Security Testing, User Documentation Review.

16. Environmental and Infrastructure Needs
The following details the environmental and infrastructure needs required for the testing of lastminute.com Test Items and the execution of Regression Testing.
Hardware.
• Integration Environment:
• QA-A: http://.....
• QA-B: http://....
• Pre-live Staging:
Software
: http://...
: http://
Infrastructure
• Network connections are available on all Test Systems as required.
Test Repository
• http://...

17. Responsibility Matrix
The table below outlines the main responsibilities in brief for test activities:

Activity Product Manager Development Manager Test Manager Test Engineer
Provision of Technical Documents X X
Test Planning and Estimation X X
Review and Sign off Test Plan X X X
Testing Documentation X X
Test Preparation and Execution X
Test Environment Set-up X
Change Control of Test Environments X X
Provision of Unit Tested Test Items X
Bug fixes and return to QA for re-test X
Product Change Control X X X
Ongoing Test Reporting X X
Test Summary Reporting X

18. Staffing and Training Needs
Staffing
Staffing levels for the test activities will be:
•1 x Test Manager for the duration of test planning at 50% effort against plan.
•The required number of QA Engineers for the duration of test execution at 100% effort against plan.

Training
For each project the training needs will be assessed and defined in the Test Plan.

19. Schedules and Resource Plans
Team Plan

The QA Team will maintain a Team Plan which records individual assignment to testing tasks against assignable days. This will also record time planned and delivered against the tasks which will be used to update relevant Project Schedules and be used in periodic reporting.

Test Schedule
The Test Schedule for the Release will be located at: http://

20. Risks and Contingencies

Risk 1: Delays in delivering completed Test Items from Development would impact test timescales and final Release quality.
Mitigation: Product Management and Development to advise of any delays and adjust the Release Scope or resources to allow the test activities to be performed.
Impact: High

Risk 2: Delays in the turnaround time for fixing critical bugs, which would require re-testing, could impact the project dates.
Mitigation: Strong management of bug resolution is required from Development to ensure bugs are fixed and available for re-testing in the scheduled time.
Impact: High

Risk 3: The QA, Development or PM teams require domain guidance from one another and the relevant people are not available, which would delay project activities.
Mitigation: QA, Development and PM teams to ensure they are available, or at least contactable, at critical points during the project activities.
Impact: Medium

Risk 4: Features of Test Items will not be testable.
Mitigation: QA will record untested features and request the PM to assess the business risk of releasing untested features.
Impact: Low

Risk 5: Unexpected dependencies between Test Items and service components are encountered that require revision of Test Scenarios and related Test Cases.
Mitigation: Information about dependencies is updated and communicated promptly to allow timely revision of Test Scenarios and Test Cases.
Impact: Low

21. Approvals
The following people are required to approve the Test Strategy

Approval By Approval
Test Manager
QA Department Manager
Product Owner
Development Manager
Project Manager

Tuesday, August 11, 2009

Testing School--Capability Maturity Model

Capability Maturity Model:

•Developed by the software community in 1986 with leadership from the SEI.
•Has become a de facto standard for assessing and improving processes related to software development
•Has evolved into a process maturity framework
•Provides guidance for measuring software process maturity
•Helps establish process improvement programs

The CMM is organized into five maturity levels:
–Initial
–Repeatable
–Defined
–Managed
–Optimizing

•Except for Level 1, each maturity level decomposes into several key process areas that indicate where an organization should focus to improve its software process.

Level 2 - Repeatable:
•Key practice areas
–Requirements management
–Software project planning
–Software project tracking & oversight
–Software subcontract management
–Software quality assurance
–Software configuration management

Level 3 - Defined:
•Key practice areas
–Organization process focus
–Organization process definition
–Training program
–Integrated software management
–Software product engineering
–Intergroup coordination
–Peer reviews

Level 4 - Managed:
•Key practice areas
–Quantitative Process Management
–Software Quality Management

Level 5 - Optimizing:
•Key practice areas
–Defect prevention
–Technology change management
–Process change management

Thursday, August 6, 2009

Testing School-An Automation Framework QTP

An Automation Framework primarily comprises elements such as:
A) Function Library
B) Object Repository
C) Database
D) Application Scenario Files
E) Sequence File
F) Initialization VB Script
G) Driver Script
H) Test Case List File
A) Function Library:
While creating the automation framework, all coding is done using user-defined VBScript functions. We store these functions in function library files with the (*.vbs) extension. Apart from the Driver Scripts, no script creation is needed outside the function library.

Every application uses the following two types of function libraries:

1) Libraries containing common functions that do not depend on the application.
2) Libraries containing functions that are specific to the application.

The script designer strives to use the common, application-independent functions to the maximum possible extent. However, there is no hard and fast rule to use only common functions: whenever an application-specific function is absolutely necessary, it is created on the spot and stored in the relevant function library.
"ExecuteScenarioFile" is a function present in our common function library & is called from the Driver Script or from outside the function library. This function is used for accessing the keywords, various objects and all other parameters from the Scenario File. This is also used to call other relevant functions as well from the function library.

The function library files with (*.vbs) extension are stored in the Function Library folder.
B) Object Repository:
Every application being automated contains a single / unique object repository file. The object repository files with (*.tsr) extension are stored in the Object Repository folder.

C) Database:
The MS Access database module of MS Office is used to store all the test data. While designing the database structure, the designer tries to keep one independent table for every screen of the application. The Test Case ID field is usually designated as the primary key for every table; it is unique for every record, is used by the program for iteration, and is helpful for navigating to a particular type of data stored in the table. QTP connects to the database by building a system DSN and using that DSN in the script. As a best practice, every application should have an independent database file; however, to conserve disk space, multiple applications can share the same database.

The MS Access database files with (*.mdb) extension are stored in the Database folder.
D) Application Scenario Files:
This is a typical MS Excel spreadsheet file containing records with information on the keywords, objects and many other parameters that represent a test scenario. QTP reads the information from this application scenario file and performs the defined actions on the application. The spreadsheet also serves as test case documentation, so there is no need to create a separate set of test case documents. The application scenario files are the most versatile feature of a keyword-driven framework.
The subject matter experts (SMEs), who usually do not have much programming knowledge, can easily use these application scenario files to create automated tests by quickly selecting keywords from the dropdown lists.

We can save a tremendous amount of automation effort by creating as many common functions as possible, which can be reused again and again for maximum benefit.
The application scenario files with (*.xls) extension are stored in the application folder corresponding to the particular application.
E) Sequence File:
These are initialization settings files for the QTP Driver Script and are similar to conventional configuration files. This is also a typical MS Excel spreadsheet file, containing records with information such as:

1) Name of Application Scenario file
2) Name of Object Repository
3) Name of Function Library
4) List of Test Cases to be executed
5) Details of Data Source
6) Details of script development work area
7) Details of script execution work area

Every application scenario carries a "Run" or "Ignore" flag specifying whether it is to be executed. The Initialization VB Script uses this sequence spreadsheet to decide which application and which scenario to execute, to obtain the information about the corresponding Application Scenario file, and to make the initialization settings for the application.
The Sequence Files with the (*.xls) extension are stored directly in the root folder.
F) Initialization VB Script:
Initialization is the starting point for the execution of a script. QTP is launched by the Initialization VB Script, and the work area is set to either development or production depending on the user's input. The same Initialization VB Script can be used for the initialization settings of multiple web applications.
It carries out the following settings for the Driver Script.
1) Setting the Work Area according to user input.
2) Setting the application scenario file for the current run.
3) Setting the object repository.
4) Setting the function libraries.
5) Setting the data source.
6) Setting the test case list.
Once the above settings are completed, Initialization VB Script launches the Driver Script in read only mode, & makes it ready for execution by the user. Thereafter the Driver Script takes over the entire control.
The Initialization VB Scripts with the (*.vbs) extension are stored directly under the root folder.
G) Driver Script:
These are QTP test scripts that drive the script execution after the Initialization VB Scripts complete their task. The Driver Scripts reside outside the function library.
These Driver Scripts call the "ExecuteScenarioFile" function for accessing the keywords, various objects and all other parameters from the Scenario File and making calls to suitable functions in the function library.
The Driver Scripts are stored directly under the root folder.
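For illustration only, here is what an "ExecuteScenarioFile"-style dispatcher boils down to, sketched in Python rather than the framework's actual VBScript. The keywords, scenario rows, and stub actions are invented; a real implementation would read the rows from the Excel scenario file and call QTP library functions.

```python
# Sketch of a keyword-driven dispatcher: read scenario rows, call matching functions.
def click(target):
    return f"clicked {target}"

def enter_text(target, value):
    return f"typed {value} into {target}"

# The "function library": keyword name -> implementing function.
KEYWORD_LIBRARY = {'Click': click, 'EnterText': enter_text}

def execute_scenario(rows):
    """Each row: (keyword, *arguments), as read from the scenario spreadsheet."""
    log = []
    for keyword, *args in rows:
        action = KEYWORD_LIBRARY.get(keyword)
        if action is None:
            log.append(f"unknown keyword: {keyword}")
            continue
        log.append(action(*args))
    return log

print(execute_scenario([('EnterText', 'FirstName', 'Aman'), ('Click', 'Submit')]))
```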
H) Test Case List File:
These files contain the list of all Test Case IDs to be executed in the present run. With these files, the user can select only a subset of the data in the database for execution in the present run.
These Test Case List Files are stored directly under the root folder.

Tuesday, July 28, 2009

Testing School - Test cases for first name, last name, email ID and landline number in a registration form

Testing of First name and last name

Examples of field validation criteria are:

  • First and last name fields are required.
  • First and last name fields are limited to 50 characters each.
  • First and last name fields will not accept numbers.
  • First and last name fields will not accept the following characters: `~!@#$%^&*()_:";'{}[]+<>?,./
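The four rules above can be sketched as a single validator. This is an illustration, not the form's actual server-side code, and the error strings are placeholders.

```python
import re

# Sketch of the four rules: required, <= 50 chars, no digits, no special characters.
# Letters, spaces, and hyphens are assumed to be the allowed characters.
NAME_RE = re.compile(r"^[A-Za-z \-]{1,50}$")

def validate_name(value):
    if not value:
        return 'required'
    if len(value) > 50:
        return 'too long'
    if not NAME_RE.match(value):
        return 'invalid characters'
    return 'ok'

print(validate_name('Aman'))       # ok
print(validate_name(''))           # required
print(validate_name('Aman123'))    # invalid characters
print(validate_name('A' * 51))     # too long
```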

Consider the following example:

Registration Form

Enter your first and last names and click the Submit button.

First Name:

Last Name:

Submit

Here’s a set of test cases that include validation criteria:

1. Launch an IE browser and go to the Registration Form.

2. Verify the registration page opens.

3. On the registration page, click the mouse in the First Name field.

4. Leave the First Name and Last Name fields blank and click on the Submit button.

5. Verify an error message appears saying you cannot leave the First Name and Last Name fields blank.

6. Enter 50 characters in both the First and Last Name fields.

7. Verify the names are accepted.

8. Enter more than 50 characters in both the First and Last Name fields.

9. Verify an error message appears saying you cannot enter more than 50 characters in the First Name and Last Name fields.

10. Enter numbers in the First and Last Name fields.

11. Verify an error message appears saying you cannot enter numbers in the First and Last Name fields.

12. Enter the characters "`~!@#$%^&*()_:";'{}[]+<>?,./" in the First and Last Name fields.

13. Verify an error message appears saying you cannot enter "`~!@#$%^&*()_:";'{}[]+<>?,./" characters in the First and Last Name fields.

14. Type “Aman” in the First Name field.

15. Click the mouse in the Last Name field.

16. Type in “Kumar” in Last Name field.

17. Click on the Submit button.

18. Click on registration List in the left nav bar.

19. Verify the Name “Aman Kumar” is now present in the registration list.

For example, the blank name validation should be tested like this:

1. Leave the First Name and Last Name fields blank and click on the Submit button.

2. Verify an error message appears saying you cannot leave the First Name and Last Name fields blank.

3. Enter a valid First Name and leave the Last Name field blank and click on the Submit button.

4. Verify an error message appears saying you cannot leave the Last Name field blank.

5. Enter a valid Last Name and leave the First Name field blank and click on the Submit button.

6. Verify an error message appears saying you cannot leave the First Name field blank.

As you can see, this increases the number of test cases needed substantially, and even this set of test cases leaves certain issues untested.

For instance, the form should also be tested for resistance to what is known as HTML insertion attacks. If not handled properly, HTML code entered into a field will be interpreted and rendered on the page when the Submit button is clicked. Most code written now handles insertion attacks, but it is always a good idea to verify that. Here is a test case for insertion attacks:

1. Enter the text "<b>This is a test!</b>" in the First Name field and hit Submit.

2. Verify the text "This is a test!" does not appear in bold on the registration page; the markup should be displayed or escaped, not rendered.
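The defense this test case probes is output escaping. A short illustration with Python's standard library shows what the page should emit instead of live markup; the payload string is just an example.

```python
import html

# The page should echo escaped text, not live markup.
payload = '<b>This is a test!</b>'
print(html.escape(payload))  # &lt;b&gt;This is a test!&lt;/b&gt;
```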

Testing of email Id:

Here is a list of valid and invalid email formats that can be used for testing:
 
 
Valid Email address | Reason

email@domain.com | Valid email
firstname.lastname@domain.com | Email contains dot in the address field
email@subdomain.domain.com | Email contains dot with subdomain
firstname+lastname@domain.com | Plus sign is considered a valid character
email@123.123.123.123 | Domain is a valid IP address
email@[123.123.123.123] | Square brackets around IP address are considered valid
"email"@domain.com | Quotes around email are considered valid
1234567890@domain.com | Digits in address are valid
email@domain-one.com | Dash in domain name is valid
_______@domain.com | Underscore in the address field is valid
email@domain.name | .name is a valid Top Level Domain name
email@domain.co.jp | Dot in Top Level Domain name is also considered valid (e.g. co.jp)
firstname-lastname@domain.com | Dash in address field is valid

Invalid Email address | Reason

plainaddress | Missing @ sign and domain
#@%^%#$@#$@#.com | Garbage
@domain.com | Missing username
Joe Smith <email@domain.com> | Encoded html within email is invalid
email.domain.com | Missing @
email@domain@domain.com | Two @ signs
.email@domain.com | Leading dot in address is not allowed
email.@domain.com | Trailing dot in address is not allowed
email..email@domain.com | Multiple dots
あいうえお@domain.com | Unicode characters as address
email@domain.com (Joe Smith) | Text following email is not allowed
email@domain | Missing top level domain (.com/.net/.org/etc.)
email@-domain.com | Leading dash in front of domain is invalid
email@domain.web | .web is not a valid top level domain
email@111.222.333.44444 | Invalid IP format
email@domain..com | Multiple dots in the domain portion are invalid
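A deliberately simple format check can drive these tables in an automated test. Note that a pragmatic regex like the sketch below rejects some of the technically valid forms above (quoted local parts, bracketed IP domains) and accepts a few invalid ones (such as a leading dot), which is exactly why the table is worth running as data against whatever validator the application actually uses.

```python
import re

# Sketch: a simple, pragmatic email-format check (not a full RFC 5322 parser).
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def looks_like_email(address):
    return EMAIL_RE.match(address) is not None

print(looks_like_email('email@domain.com'))              # True
print(looks_like_email('firstname.lastname@domain.com')) # True
print(looks_like_email('plainaddress'))                  # False
print(looks_like_email('email@domain@domain.com'))       # False
print(looks_like_email('email@domain'))                  # False
```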

Testing of Landline phone in registration form:

In India, there are a number of valid formats for phone numbers that might be entered into a form:

nnnnnn                                234567
nn-nnnn-nnnnnn                        95-6543-234567              
nnnnn-nnnnnn                          06543-234567
nn-nnnn-nnnnnn                        91-6543-234567

There are many other variations of this, as well, but those examples should be enough to give you an idea of what we're dealing with here. Because there are so many valid formats, you're going to have to set some rules for reformatting these phone numbers:

1. All numbers will at least be in the format nn-nnnn-nnnnnn. Numbers entered without area codes will have a default area code prepended to them.

2. Any extensions will be listed as nnnnnn following the phone number.

3. If an initial 91 is included, it will be removed, since 91 is the country code and is not needed when dialing within India.
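The three reformatting rules can be sketched as a normalizer. The default area code and the resulting 95 prefix are taken from the examples above, not from any real numbering plan, and the sketch only handles the input shapes listed.

```python
import re

DEFAULT_AREA = '6543'  # hypothetical default area code (rule 1)
STD_PREFIX = '95'      # prefix used in the example target format nn-nnnn-nnnnnn

def normalize_phone(raw):
    """Normalize the example input formats to nn-nnnn-nnnnnn."""
    digits = re.sub(r'\D', '', raw)
    if digits.startswith('91') and len(digits) == 12:
        digits = digits[2:]               # rule 3: drop country code 91
    if digits.startswith('0') and len(digits) == 11:
        digits = digits[1:]               # trunk prefix, as in 06543-234567
    if len(digits) == 6:
        digits = DEFAULT_AREA + digits    # rule 1: prepend default area code
    if len(digits) == 10:
        return f"{STD_PREFIX}-{digits[:4]}-{digits[4:]}"
    raise ValueError(f"unrecognized phone format: {raw}")

print(normalize_phone('234567'))          # 95-6543-234567
print(normalize_phone('06543-234567'))    # 95-6543-234567
print(normalize_phone('91-6543-234567'))  # 95-6543-234567
```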

Thursday, July 23, 2009

Testing School-How to Build the Plan

How to Build the Plan

1. Analyze the product.

  • What to Analyze
    • Users (who they are and what they do)
    • Operations (what it’s used for)
    • Product Structure (code, files, etc.)
    • Product Functions (what it does)
    • Product Data (input, output, states, etc.)
    • Platforms (external hardware and software)
  • Ways to Analyze
    • Perform product/prototype walkthrough.
    • Review product and project documentation.
    • Interview designers and users.
    • Compare w/similar products.
  • Possible Work Products
    • Product coverage outline
    • Annotated specifications
    • Product Issue list
  • Status Check
    • Do designers approve of the product coverage outline?
    • Do designers think you understand the product?
    • Can you visualize the product and predict behavior?
    • Are you able to produce test data (input and results)?
    • Can you configure and operate the product?
    • Do you understand how the product will be used?
    • Are you aware of gaps or inconsistencies in the design?
    • Do you have remaining questions regarding the product?

2. Analyze product risk.

  • What to Analyze
    • Threats
    • Product vulnerabilities
    • Failure modes
    • Victim impact
  • Ways to Analyze
    • Review requirements and specifications.
    • Review problem occurrences.
    • Interview designers and users.
    • Review product against risk heuristics and quality criteria categories.
    • Identify general fault/failure patterns.
  • Possible Work Products
    • Component risk matrices
    • Failure mode outline
  • Status Check
    • Do the designers and users concur with the risk analysis?
    • Will you be able to detect all significant kinds of problems, should they occur during testing?
    • Do you know where to focus testing effort for maximum effectiveness?
    • Can the designers do anything to make important problems easier to detect, or less likely to occur?
    • How will you discover if your risk analysis is accurate?
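One possible work product from this step, the component risk matrix, can be sketched as a small data structure. The components, factors, and scores below are purely illustrative, and the scoring formula is just one simple way to combine them:

```python
# Minimal component risk matrix sketch; scores run 1 (low) to 3 (high)
# and are invented for illustration, not taken from a real project.
risk_matrix = {
    "Payment processing": {"impact": 3, "likelihood": 2, "new_code": 3},
    "User registration":  {"impact": 2, "likelihood": 2, "new_code": 1},
    "Report export":      {"impact": 1, "likelihood": 1, "new_code": 2},
}

def risk_score(factors: dict) -> int:
    # One simple combination: impact times likelihood, nudged by code churn.
    return factors["impact"] * factors["likelihood"] + factors["new_code"]

# Spend test effort on the riskiest components first.
by_risk = sorted(risk_matrix, key=lambda c: risk_score(risk_matrix[c]),
                 reverse=True)
```

Even a toy matrix like this gives the team something concrete to review with designers and users, which is exactly the status check above.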

3. Design test strategies.

  • General Strategies
    • Domain testing (including boundaries)
    • User testing
    • Stress testing
    • Regression testing
    • Sequence testing
    • State testing
    • Specification-based testing
    • Structural testing (e.g. unit testing)

  • Ways to Plan
    • Match strategies to risks and product areas.
    • Visualize specific and practical strategies.
    • Look for automation opportunities.
    • Prototype test probes and harnesses.
    • Don’t over-plan; let testers use their brains.
  • Possible Work Products
    • Itemized statement of each test strategy chosen and how it will be applied.
    • Risk/task matrix.
    • List of issues or challenges inherent in the chosen strategies.
    • Advisory of poorly covered parts of the product.
    • Test cases (if required)
  • Status Check
    • Do designers concur with the test strategy?
    • Has the strategy made use of every available resource and helper?
    • Is the test strategy too generic? Could it just as easily apply to any product?
    • Will the strategy reveal all important problems?
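The first of the general strategies listed above, domain testing with boundaries, can be sketched concretely. The quantity rule below is a hypothetical example, not from any real product:

```python
def accepts_quantity(value: int) -> bool:
    """Hypothetical rule: an order form accepts quantities from 1 to 100."""
    return 1 <= value <= 100

# Boundary-value cases: each edge of the domain, plus values just outside it.
boundary_cases = [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert accepts_quantity(value) == expected, value
```

Six well-chosen cases cover the faults that boundary testing targets (off-by-one comparisons such as `<` for `<=`) without enumerating the whole domain.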

4. Plan logistics.

  • Logistical Areas
    • Test effort estimation and scheduling
    • Testability engineering
    • Test team staffing (right skills)
    • Tester training and supervision
    • Tester task assignments
    • Product information gathering and management
    • Project meetings, communication, and coordination
    • Relations with all other project functions, including development
    • Test platform acquisition and configuration
  • Possible Work Products
    • Issues list
    • Project risk analysis
    • Responsibility matrix
    • Test schedule
    • Agreements and protocols
    • Test tools and automation
    • Stubbing and simulation needs
    • Test suite management and maintenance
    • Build and transmittal protocol
    • Test cycle administration
    • Problem reporting system and protocol
    • Test status reporting protocol
    • Code freeze and incremental testing
    • Pressure management in end game
    • Sign-off protocol
    • Evaluation of test effectiveness
  • Status Check
    • Do the logistics of the project support the test strategy?
    • Are there any problems that block testing?
    • Are the logistics and strategy adaptable in the face of foreseeable problems?
    • Can you start testing now and sort out the rest of the issues later?

5. Share the plan.

  • Ways to Share
    • Engage designers and stakeholders in the test planning process.
    • Actively solicit opinions about the test plan.
    • Do everything possible to help the developers succeed.
    • Help the developers understand how what they do impacts testing.
    • Talk to technical writers and technical support people about sharing quality information.
    • Get designers and developers to review and approve all reference materials.
    • Record and reinforce agreements.
    • Get people to review the plan in pieces.
    • Improve reviewability by minimizing unnecessary text in test plan documents.
  • Goals
    • Common understanding of the test process.
    • Common commitment to the test process.
    • Reasonable participation in the test process.
    • Management has reasonable expectations about the test process.
  • Status Check
    • Is the project team paying attention to the test plan?
    • Does the project team, especially first line management, understand the role of the test team?
    • Does the project team feel that the test team has the best interests of the project at heart?
    • Is there an adversarial or constructive relationship between the test team and the rest of the project?
    • Does any member of the project team feel that the testers are “off on a tangent” rather than focused on important testing tasks?