Thursday, February 25, 2010
Testing School--Testing Via Equivalence Partitioning
Equivalence partitioning is the process of defining the optimum number of tests by:
· reviewing documents such as the Functional Design Specification and Detailed Design Specification, and identifying each input condition within a function,
· selecting input data that is representative of all other data that would likely invoke the same process for that particular condition.
Defining Tests
A number of items must be considered when determining the tests using the equivalence partitioning method, including:
· All valid input data for a given condition are likely to go through the same process.
· Invalid data can go through various processes and need to be evaluated more carefully. For example,
· a blank entry may be treated differently than an incorrect entry,
· a value that is less than a range of values may be treated differently than a value that is greater,
· if there is more than one error condition within a particular function, one error may override the other, which means the subordinate error does not get tested unless the other value is valid.
Defining Test Cases
Create test cases that incorporate each of the tests. For valid input, include as many tests as possible in one test case. For invalid input, include only one test in a test case in order to isolate the error. Only the invalid input test condition needs to be evaluated in such tests, because the valid condition has already been tested.
EXAMPLE OF EQUIVALENCE PARTITIONING
Conditions to be Tested
The following input conditions will be tested:
· For the first three digits of all social insurance (security) numbers, the minimum number is 111 and the maximum number is 222.
· For the fourth and fifth digits of all social insurance (security) numbers, the minimum number is 11 and the maximum number is 99.
Defining Tests
Identify the input conditions and uniquely identify each test, keeping in mind the items to consider when defining tests for valid and invalid data.
The tests for these conditions are:
· The first three digits of the social insurance (security) number are:
1. >= 111 and <= 222 (valid input),
2. < 111 (invalid input, below the range),
3. > 222 (invalid input, above the range),
4. blank (invalid input; may be treated differently than an out-of-range value).
· The fourth and fifth digits of the social insurance (security) number are:
5. >= 11 and <= 99 (valid input),
6. < 11 (invalid input, below the range),
7. > 99 (invalid input, above the range),
8. blank (invalid input; may be treated differently than an out-of-range value).
Using equivalence partitioning, only one value that represents each of the eight equivalence classes needs to be tested.
Defining Test Cases
After identifying the tests, create test cases to test each equivalence class, (i.e., tests 1 through 8).
Create one test case for the valid input conditions, (i.e., tests 1 and 5), because the two conditions will not affect each other.
Identify separate test cases for each invalid input (i.e., tests 2 through 4 and tests 6 through 8). Both conditions (condition 1, the first three digits; condition 2, the fourth and fifth digits) apply to the same social insurance (security) number. Since equivalence partitioning is a black-box technique, the tester does not look at the code, so the manner in which the programmer has coded the error handling is not known. Separate tests are therefore used for each invalid input, to avoid masking a result in the event one error takes priority over another. For example, if only one error message is displayed at a time, and the error message for the first three digits takes priority, then testing invalid inputs for the first three digits and the fourth and fifth digits together never produces an error message for the fourth and fifth digits. In tests B through G, only the results for the invalid input need to be evaluated, because the valid input was already covered by test case A.
Suggested test cases:
· Test Case A - Tests 1 and 5, (both are valid, therefore there is no problem with errors),
· Test Case B - Tests 2 and 5, (only the first one is invalid, therefore the correct error should be produced),
· Test Case C - Tests 3 and 5, (only the first one is invalid, therefore the correct error should be produced),
· Test Case D - Tests 4 and 5, (only the first one is invalid, therefore the correct error should be produced),
· Test Case E - Tests 1 and 6, (only the second one is invalid, therefore the correct error should be produced),
· Test Case F - Tests 1 and 7, (only the second one is invalid, therefore the correct error should be produced),
· Test Case G - Tests 1 and 8, (only the second one is invalid, therefore the correct error should be produced).
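Test cases A through G can be sketched as a small script. The validate_sin function below is hypothetical, since the article shows no code; it simply mirrors the example's rules (111-222 for the first three digits, 11-99 for the next two, blanks handled separately), with one representative value chosen per equivalence class.

```python
def validate_sin(first_three, next_two):
    """Hypothetical validator mirroring the article's example rules."""
    if first_three == "":
        return "ERROR: first three digits missing"
    if not (111 <= int(first_three) <= 222):
        return "ERROR: first three digits out of range"
    if next_two == "":
        return "ERROR: digits four and five missing"
    if not (11 <= int(next_two) <= 99):
        return "ERROR: digits four and five out of range"
    return "OK"

# One representative value per equivalence class (tests 1-8).
test_cases = {
    "A": ("150", "50"),   # tests 1 and 5: both valid
    "B": ("100", "50"),   # tests 2 and 5: first part below range
    "C": ("300", "50"),   # tests 3 and 5: first part above range
    "D": ("",    "50"),   # tests 4 and 5: first part blank
    "E": ("150", "05"),   # tests 1 and 6: second part below range
    "F": ("150", "100"),  # tests 1 and 7: second part above range
    "G": ("150", ""),     # tests 1 and 8: second part blank
}

for case, (first, second) in test_cases.items():
    print(case, validate_sin(first, second))
```

Note how each of B through G contains exactly one invalid part, so whichever error appears can be attributed unambiguously.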
Other Types of Equivalence Classes
The process of equivalence partitioning also applies to testing of values other than numbers. Consider the following types of equivalence classes:
· a valid group versus an invalid group (e.g., names of employees versus names of individuals who are not employees),
· a valid response to a prompt versus an invalid response (e.g., Y as the valid response versus N and all other non-Y responses),
· a valid response within a time frame versus an invalid response outside of it (e.g., a date within a specified range versus a date below the range and a date above the range).
Monday, February 22, 2010
Testing School--Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and their corresponding actions.
It is a test case design technique performed once requirements have been reviewed for ambiguity and then reviewed for content.
Requirements are reviewed for content to ensure that they are correct and complete. The cause-effect graphing technique derives the minimum number of test cases needed to cover 100% of the functional requirements, improving the quality of test coverage.
There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module, and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
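The four steps can be sketched in miniature. The causes, effects, and logic below come from a classic file-update illustration commonly used to teach this technique, not from this article. Enumerating the cause combinations is the "graph to decision table" conversion of step 3, and each printed row is a candidate test case for step 4.

```python
from itertools import product

# Step 1 -- list causes and effects for a hypothetical mini-spec
# (an illustrative assumption, NOT taken from the article):
#   c1: first character of the input is "A"
#   c2: first character of the input is "B"
#   c3: second character is a digit
# Step 2 -- the graph is captured here as boolean rules per effect:
effects = {
    "update file":     lambda c1, c2, c3: (c1 or c2) and c3,
    "first-char err":  lambda c1, c2, c3: not (c1 or c2),
    "second-char err": lambda c1, c2, c3: not c3,
}

# Step 3 -- convert the graph to a decision table by enumerating
# cause combinations (c1 and c2 are mutually exclusive, so skip both-true).
print("c1 c2 c3 | effects fired")
for c1, c2, c3 in product([True, False], repeat=3):
    if c1 and c2:
        continue
    fired = [name for name, rule in effects.items() if rule(c1, c2, c3)]
    print(int(c1), int(c2), int(c3), "|", ", ".join(fired))
# Step 4 -- each printed row is a decision-table rule, i.e. a test case.
```

A tool such as BenderRBT automates this enumeration and prunes it to a minimal covering set; the brute-force loop here is only meant to make the table visible.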
The Cause-Effect Graphing technique was invented by Bill Elmendorf of IBM in 1973. Instead of trying to determine the right set of test cases manually, the test case designer models the problem as a cause-effect graph, and the software that supports the technique, BenderRBT, calculates the set of test cases needed to cover 100% of the functionality. The technique uses the same algorithms that are used in hardware logic circuit testing, where this style of test design ensures virtually defect-free hardware.
Cause-Effect Graphing also has the ability to detect defects that cancel each other out, and the ability to detect defects hidden by other things going right. These are advanced topics that won’t be discussed in this article.
The starting point for the cause-effect graph is the requirements document. The requirements describe “what” the system is intended to do. They can describe real-time systems, events, data-driven systems, state transition diagrams, object-oriented systems, graphical user interface standards, etc. Any type of logic can be modeled using a cause-effect diagram. Each cause (or input) in the requirements is expressed in the graph as a condition that is either true or false; each effect (or output) is likewise expressed as a condition that is either true or false.
Testing School--Boundary value analysis and Equivalence partitioning
Boundary value analysis and equivalence partitioning are both test case design strategies in black-box testing.
Equivalence Partitioning:
In this method, the input domain is divided into equivalence data classes. The method is typically used to reduce the total number of test cases to a finite, testable set that still covers the requirements.
In short, it is the process of taking all possible test cases and placing them into classes; one test value is picked from each class while testing.
E.g.: if you are testing an input box that accepts numbers from 1 to 1000, there is no point in writing a thousand test cases for all 1000 valid input numbers, plus further test cases for invalid data.
Using the equivalence partitioning method, the test cases above can be divided into three sets of input data, called classes. Each test case is a representative of its class.
So in the above example we can divide our test cases into three equivalence classes covering the valid and invalid inputs.
Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. Selecting any other value between 1 and 1000 should give the same result, so one test case for valid input data is sufficient.
2) An input data class with all values below the lower limit, i.e., any value below 1, as an invalid input data test case.
3) An input data class with any value greater than 1000, representing the third (invalid) input class.
So, using equivalence partitioning, you have categorized all possible test cases into three classes; test cases with other values from any class should give you the same result.
We have selected one representative from each input class to design our test cases. Test case values are chosen so that the largest number of attributes of each equivalence class is exercised.
Equivalence partitioning covers the requirements with the fewest test cases.
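The three classes for the 1-to-1000 input box can be exercised with a minimal sketch. The accepts function is a hypothetical stand-in for the input box under test, included only so the representatives have something to run against.

```python
def accepts(value):
    """Hypothetical system under test: accepts integers 1..1000."""
    return isinstance(value, int) and 1 <= value <= 1000

# One representative value per equivalence class:
classes = {
    "valid (1..1000)":     (500,  True),
    "below range (< 1)":   (0,    False),
    "above range (> 1000)": (1001, False),
}

for name, (value, expected) in classes.items():
    assert accepts(value) == expected, name
print("all three equivalence classes behave as expected")
```

Swapping 500 for any other in-range value should not change the outcome, which is exactly the claim equivalence partitioning rests on.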
Boundary value analysis:
It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The boundary value analysis technique is used to identify errors at these boundaries rather than errors in the center of the input domain.
Boundary value analysis is the natural next step after equivalence partitioning: test cases are selected at the edges of the equivalence classes.
Test cases for input box accepting numbers between 1 and 1000 using Boundary value analysis:
1) Test cases with test data exactly at the boundaries of the input domain, i.e., values 1 and 1000 in our case.
2) Test data with values just below the boundaries, i.e., values 0 and 999.
3) Test data with values just above the boundaries, i.e., values 2 and 1001.
Boundary value analysis is often considered part of stress and negative testing.
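The boundary values listed above follow a mechanical pattern, which makes them easy to generate. Here is a small helper that produces the standard six values for any inclusive range; the function name is my own.

```python
def boundary_values(low, high):
    """Return boundary value analysis inputs for an inclusive
    [low, high] range: each boundary, just below it, and just above it."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(boundary_values(1, 1000))  # [0, 1, 2, 999, 1000, 1001]
```

For the 1-to-1000 input box this yields exactly the six values from the three cases above; applying it to the 11-99 range from the earlier social insurance number example works the same way.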
Note: there is no hard-and-fast rule to test only one value from each equivalence class you created for the input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and prior judgment.
E.g., for the valid data equivalence class of 1 to 1000, you might select test values such as 1, 11, 100, and 950; the same applies to the invalid data classes.
This is a basic example to illustrate the boundary value analysis and equivalence partitioning concepts.
Friday, February 19, 2010
TESTING SCHOOL--Why I LOVE TESTING
I love it when I'm the first person to touch new software and root around for errors.
I like to teach and mentor people that will someday be better and more successful than I am.
I like to hang out with people that do what I do and argue amicably about technique and methodologies.
I like learning about other environments and companies and how people in our field operate in those environments.
I like reading and learning from people who either think or present things differently than I do. Overall, I think I actually enjoy having to struggle just a bit to keep up, whether it's with my own workload or the ideas/work of someone else.
No two pieces of work are the same. It's intellectually challenging and mentally stimulating. It keeps the grey matter ticking over and that makes me happy.
It's given me the opportunity to meet and work with many different people/cultures - that's enriching in itself.
The SW testing profession is developing and evolving. Nothing stands still - it's a perfect time to be involved to help shape and influence.
Constant learning opportunities - and that's addictive!
As a child, when you're given something you are told to play with it but 'be careful, don't break it'.
As adults, we buy things and tell ourselves 'be careful, don't break it'.
As testers, WE GET PAID TO FIND THE BREAKS.
The thrill of finding a good bug.
Asking the questions that make everybody else sit back and mutter quietly under their breath '$hit, didn't even think of that'. The kind of questions that get project items pushed back because not everything has been taken into consideration.
Sounds kind of corny, but in our own way we help make the world a better place. We help make software more stable, easier to use, crash less, etc. Hopefully that helps make other people's lives less stressful.
It is great to lead younger test professionals and direct and guide them into how to test better, what to test and so on... In the process they have ideas, thoughts, arguments similar or contrary to mine which stimulate me to learn and unlearn.
The challenge of doing more with less, which is what testing is about most of the time. The challenge of motivating people, goading them, convincing them... to achieve, despite the heavy odds stacked against them.
I love testing because of the thrill it gives me when the client praises the testing team for their knowledge, for finding the bugs and making the user's life better and easier... And when this happens the joy and warmth I derive from the hand-shake with the testers who won the battle pleases me no end.
As I make my way back home, sometimes while the city sleeps, the looking forward to the next day at work and the plans that begin to form in my mind are a sure sign that I am happy and content with what I do - managing a testing team that will do wonders and achieve!
Wednesday, February 10, 2010
Testing School--Testing an e-Commerce Website
The main things that should be taken care of in an e-commerce website are listed below.
The checklist sets out the criteria against which the product will be initially evaluated.
E-Commerce Functional Requirements
o Web Site Look & Feel
o Category Management
o Product Management
o Inventory management
o Coupons/Discounts/Gift Certificates
o Shopping Cart
o Customer Management
o Order Processing
o General
o Support
o Marketing
o Administration
Non-Functional requirements
o System Management facilities (Manageability)
o Interface
o Interoperability
o Reliability
o Security
o Documentation
o Package
o Testability
Shopping Cart
Add/ update / delete products
Modify quantities
Modify product options (size, color, etc.)
Calculate totals
Calculate total weight
Select country
Calculate shipping cost
Accept promotion and gift codes
Calculate Bundled products promotions
Sales tax / VAT calculations
Require a minimum order amount
Set a maximum order amount
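Several of the cart items above are calculations (totals, weight, discounts, tax, and the minimum/maximum order checks), so they lend themselves to a worked example. The toy cart below is entirely invented for illustration: the item data, tax rate, promotion code, and order limits are assumptions, not anything from a real store.

```python
# Hypothetical catalog and rules for the sketch (all values invented):
ITEMS = {"shirt": (19.99, 0.2), "mug": (7.50, 0.4)}  # sku: (price, weight kg)
PROMO = {"SAVE10": 0.10}                             # promo code: discount rate
TAX_RATE = 0.05
MIN_ORDER, MAX_ORDER = 10.00, 5000.00

def cart_totals(cart, promo_code=None):
    """Compute (total, weight) for a {sku: quantity} cart,
    applying an optional promo code, then tax, then order limits."""
    subtotal = sum(ITEMS[sku][0] * qty for sku, qty in cart.items())
    weight = sum(ITEMS[sku][1] * qty for sku, qty in cart.items())
    discount = subtotal * PROMO.get(promo_code, 0.0)
    total = round((subtotal - discount) * (1 + TAX_RATE), 2)
    if not (MIN_ORDER <= total <= MAX_ORDER):
        raise ValueError("order total outside allowed range")
    return total, round(weight, 2)

print(cart_totals({"shirt": 2, "mug": 1}, "SAVE10"))
```

A tester would then probe this logic with equivalence partitioning and boundary value analysis: a valid code versus an unknown code, a cart just under the minimum order amount, one just over the maximum, and so on.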
Order Processing
Accept credit cards and other payment methods
Accept Cash on delivery
Offline payment support to process credit cards manually
Multiple currency support
One Page checkout
Receive text message alerts when orders are placed by your customers
Ability to have line-item management at the order level, displaying how much of each line item in an order has shipped or is back-ordered.
Batch Import Tracking Numbers
Change the status of your orders in batches
Auto-calculation of taxes
Customers choose shipping option
Email receipts sent to customer and administrator
Order saved in admin area for viewing
Each order is saved with a unique order number
Each order is saved in a certain order status based on the payment method
Batch order printing
Automatic store email receipts
Automatic shipping email receipts
View and process your orders online
"Add to cart to see sale price" feature
Monday, February 1, 2010
Testing School--Web Site Testing Checklist
General
❒ Are all tests being run on "clean" machines?
❒ Does the system do what it is intended to do?
❒ Does it provide the correct results?
❒ Does the system provide all the functions and features expected?
❒ Can the typical user run the system without aggravation?
❒ Is it easy to learn and use?
❒ Does the system provide or facilitate customer service? i.e. responsive, helpful, accurate?
❒ Can the accuracy and trustworthiness of the system easily be confirmed?
❒ Can the system easily be modified or fixed?
❒ Are the developers able to deliver or modify the system within the timeframe when it is
needed?
❒ Do existing features, which have not been changed, perform the same way they did in earlier
versions?
❒ Does the system make efficient use of hardware, network, and human resources?
❒ Does the system comply with the relevant technical standards?
❒ Does the system comply with the appropriate regulatory requirements?
❒ Can the system be validated to prove it works in an acceptable manner?
❒ Can some of the components be re-used in other systems?
❒ Can the system be quickly and easily installed on a variety of platforms by a variety of users?
❒ Are there planned future upgrade paths as the use of the system grows?
❒ Is information archived and easily retrievable?
❒ Is the Web site searchable?
Usability, Interface and Navigation
❒ Can the system work effectively for one user, ten users or a thousand?
❒ Does the home page load quickly?
❒ Are the instructions on how to use the site clear to the user?
❒ If you follow each instruction does the expected result occur?
❒ Is all terminology understandable for all of the site's intended users?
❒ Is a navigational bar present on every screen?
❒ Is the navigation bar consistently located?
❒ Can a user navigate using text only?
❒ Can a user navigate without the use of a mouse?
❒ Can your site be used by the visually impaired? Red/Green Color-Blind, less than 20/20
vision, etc.
❒ Does tabbing work consistently, in a uniform manner?
❒ Is there a link to home on every single page?
❒ Is page layout consistent from page to page?
❒ Is each page organized in an intuitive manner?
❒ Are graphics used consistently?
❒ Are graphics optimized for quick downloads?
❒ Do all the images add value to each page, or do they simply waste bandwidth?
❒ Are graphics files sized as efficiently as possible?
❒ Does text wrap properly around pictures/graphics?
❒ Are all referenced web sites or email addresses hyperlinked?
❒ Are hyperlink colors standard?
❒ Does the site look good at 640x480, 800x600, etc.?
❒ Are fonts too small to read (remember not everyone may have the same vision as you)?
❒ Are fonts too large?
❒ Is all text properly aligned?
❒ Are all graphics properly aligned?
❒ Do pages print legibly without cutting off text?
❒ Does the site have a site map?
❒ Does each hyperlink on the map actually exist?
❒ Are there hyperlinks on the site that are not represented on the map?
❒ Does each hyperlink work on each page?
❒ Is content correct and final (i.e. not filler content placed on the site by developers during
unit testing)?
❒ Is the page background (color) distraction free?
❒ Does the Back button work as intended? It should not open a new browser window, redirect
you to another site, or defeat caching such that Back navigation requires a fresh trip to the
server; all hypertext navigation should be sub-second, and this goes double for backtracking.
❒ Does content remain if you need to go back to a previous page, or if you move forward to
another new page?
❒ Can you get to your desired location with 3 or less clicks from the Home Page?
❒ Are all of the parts of a table or form present and correctly laid out? Can you confirm that
selected texts are in the "right place"?
❒ Are all of the links on a page the same as they were before? Are there new or missing links?
Are there any broken links?
❒ Does a link bring you to the page it said it would?
❒ Does the page you are linking to exist?
❒ Is contact information for the site owner readily visible and available (name, telephone
number, email address, mailing address, fax number)?
❒ If a user wishes to bookmark a page, is the page name easily understandable?
❒ Does your site's Web address appear in the History list if the user allows for historical page
recording?
❒ Does the status bar on each Web page accurately reflect the progress of page loading,
information, etc.?
Tables
❒ Does the user constantly have to scroll to the right to see items in a table?
❒ Do tables print out properly?
❒ Are the columns wide enough or does every row have to wrap around?
❒ Are certain rows excessively high because of one entry?
Frames
❒ Does your Web site handle browsers that do not support frames?
❒ Do frames resize automatically and appropriately? Is the user able to manipulate frame size?
❒ Does a scrollbar appear if required?
❒ On framed pages have you verified that what is actually recognized by the Bookmark or
Favorites is appropriate?
❒ Can a search engine find content within the frames?
❒ Do the frame borders look right?
❒ Are there any issues related to refreshing within frames?
Data Verification
❒ Is the site's intended use of data clearly depicted to the user?
❒ Is the Privacy Policy clearly defined and available for user access?
❒ Is the accuracy of stored data sustained?
❒ Has data been verified at the workstation?
❒ Has data been verified at the server?
❒ Have you ensured that what the user is entering on the workstation is yielding the right
information on the server?
❒ Are you prevented from entering the same information multiple times (order forms, free
samples, etc.)?
❒ Is a unique identifier assigned to each user entering form data?
❒ Is data that is requested of the user essential to the process for which it is requested? For
example do you need a user's date of birth in order to process his book order or are you
simply asking for too much user information?
❒ Can text be entered in numeric fields?
❒ Can wildcards be used in searches?
❒ Can spaces and blank values be entered in fields?
❒ Are long strings accepted?
❒ Do fields allow for the maximum amount of text to be entered?
❒ Are the initial values of checkboxes and radio buttons correct?
❒ Are you restricted to only selecting one radio button in a group at one time?
❒ Do check boxes trigger the desired event?
❒ Are users prevented from entering HTML code in form fields?
❒ Is intelligent error handling built into your data verification? E.g., if Date of Birth is a
required MM/DD/YYYY field, it is unlikely that the person entering the data was born in 1857.
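The date-of-birth item above can be made concrete with a short sanity check. The function and its plausibility cutoff of 130 years are illustrative assumptions; the MM/DD/YYYY format and the 1857 example come from the checklist item itself.

```python
from datetime import date

def check_dob(dob_text):
    """Hypothetical sanity check for a required MM/DD/YYYY field."""
    try:
        month, day, year = (int(part) for part in dob_text.split("/"))
        dob = date(year, month, day)
    except ValueError:
        return "ERROR: not a valid MM/DD/YYYY date"
    today = date.today()
    if dob > today:
        return "ERROR: date of birth is in the future"
    if today.year - year > 130:  # illustrative plausibility cutoff
        return "ERROR: implausible year of birth"
    return "OK"

print(check_dob("02/30/1990"))  # rejected: no February 30th
print(check_dob("01/15/1857"))  # rejected: implausibly old
print(check_dob("06/01/1980"))  # accepted
```

Note the layering: syntactic validity (is it a real calendar date?) is checked before semantic plausibility (could a living person have this birthday?), which mirrors how the earlier equivalence-partitioning discussion separates error conditions so one does not mask another.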
External Interfaces
❒ Does the system interface correctly with related external systems?
❒ Have all possible interfaces been identified?
❒ Have all supported browsers been tested?
❒ Have all error conditions related to external interfaces been tested when external application
is unavailable or server inaccessible?
❒ Has proxy caching been tested?
❒ Have all external applications that may be launched from within the Web site been tested?
Internal Interfaces
❒ Can the Web site support users who cannot perform downloads?
❒ Can the Web site work with firewalls?
❒ If the site uses plug-ins, can the site still be used without them?
❒ Can the site support all plug-ins that are needed for the Web site at various modem and PC
speeds?
❒ Will all versions of plug-ins work together?
❒ Can all linked documents be supported/opened on all platforms (i.e. can Microsoft Word be
opened on Solaris)?
❒ Do all plug-ins work with all Browsers?
❒ Does the site lose usability, if Java is not enabled?
❒ Do all plug-ins load properly?
❒ Are failures handled if there are errors in download?
❒ Does the site function with the use of "non-standard" hardware (speakers, cable modems,
etc.)?
❒ Can you Download Signed ActiveX Controls?
❒ Can you Download Unsigned ActiveX Controls?
❒ Can you initialize and script ActiveX controls not marked as safe?
❒ Can you Run ActiveX controls and plug-ins?
❒ Can you Script ActiveX controls marked safe for scripting?
❒ Does your solution require cookies?
❒ Does your solution work even if users disable cookies?
❒ Does your solution allow per-session cookies?
❒ Does your solution require file downloads?
❒ What if a user does not want to download files, can the site still be used?
❒ Does your solution require special fonts?
❒ Does your solution require users to access data sources across multiple sites/domains?
❒ Can users apply drag and drop functionality?
❒ Can users use copy/paste functionality?
❒ Does your solution require the installation of any desktop items?
❒ Does your solution require the launching or installation of any files that require frames?
❒ Are you able to submit unencrypted form data?
❒ Does the site allow paste operations via scripts?
Browsers - IE, Netscape, AOL, Mac, etc.
❒ Is the HTML version being used compatible with appropriate browser versions?
❒ Are Java code/scripts usable by the browsers under test?
❒ Do images display correctly with browsers under test?
❒ Have you verified that fonts are usable on any of the browsers?
❒ Have you checked the Security Settings/Risks as they relate to each browser?
❒ Have you verified digital certificates across multiple browsers?
❒ Have you verified that plug-ins work with the browsers you are testing with your site?
❒ Have you safeguarded against viewing source code?
❒ Have you printed your site's content from various browsers?
❒ Have you assessed the impact of content size on infrastructure (reliability, consistency)?
❒ Have you verified Applets to Frames Compatibility?
❒ Have you reviewed human engineering aspects (color codes, visual presentation)?
❒ Have you tested Mouse vs. Key Strokes within various browsers?
❒ Have you implemented intelligent error handling (from disabling cookies, etc.)?
❒ Have you verified the use of 128-bit Encryption?
❒ Have you tested Animated GIFs across browsers?
Cookies
❒ Has information stored in cookies been verified?
❒ Is cookie information encrypted?
❒ Is cookie information being incremented properly?
❒ Have you prevented cookies from being editable by the user?
❒ Have you checked to see what happens if a user deletes cookies while in site?
❒ Have you checked to see what happens if a user deletes cookies after visiting a site?
❒ Are cookies being stored in the proper directory?
❒ Is cookie information correct and valid for the user accessing the site?
Load/Concurrent Usage
❒ Does the system meet its goals for response time, throughput, and availability?
❒ Is the system able to handle extreme or stressful loads?
❒ Is the system able to continue operating correctly over time without failure?
❒ Does the system operate in the same way across different computer and network
configurations, platforms and environments, with different mixes of other applications?
❒ Have you monitored CPU usage, response time, disk space, memory utilization and leaks?
❒ Have you defined standards for response time (i.e. all screens should paint within 10
seconds)?
❒ Have you verified Firewall, Certificate, Service Provider and Customer Network impact?
❒ Is page loading performance acceptable over modems of different speeds?
❒ Can the site sustain long periods of continuous usage by 1 user?
❒ Can the site sustain long periods of usage by multiple users?
❒ Can the site sustain short periods of usage at high volume?
❒ Can the site sustain large transactions without crashing?
❒ Will the site allow for large orders without locking out inventory if the transaction is invalid?
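The load and endurance questions above can be prototyped before reaching for a full load-testing tool. In the sketch below, handle_request is a stand-in that simulates server work with a short sleep; in practice you would replace it with a real HTTP call against a test URL (that call, and any thresholds, are assumptions to fill in).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    """Stand-in for a real request to the site under test
    (assumption: replace with an actual HTTP call in practice)."""
    time.sleep(0.01)  # simulate server processing time
    return 200        # simulated HTTP status

def load_test(users, requests_per_user):
    """Fire users*requests_per_user requests with `users` concurrent
    workers; report successes and elapsed wall-clock time."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        statuses = list(pool.map(handle_request,
                                 range(users * requests_per_user)))
    elapsed = time.time() - start
    return statuses.count(200), len(statuses), elapsed

ok, total, elapsed = load_test(users=10, requests_per_user=5)
print(f"{ok}/{total} requests succeeded in {elapsed:.2f}s")
```

Varying the two parameters covers the checklist's distinct scenarios: many users briefly (high volume), one user for a long period (endurance), and so on, while you watch CPU, memory, and response time on the server side.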
Error Handling
❒ Are automatic error detection and recovery mechanisms built in to try to keep the system
operating no matter what?
❒ If the system does crash, are the re-start and recovery mechanisms efficient and reliable?
❒ If you leave the site in the middle of a task does it cancel?
❒ If you lose your Internet connection does the transaction cancel?
❒ Does your solution handle interruptions in file transfer?
❒ Does your solution handle browser crashes?
❒ Does your solution handle network failures between Web site and application servers?
❒ Does your solution handle the database server becoming inaccessible?
❒ Does the application notify the user of transaction status?
❒ Does the site include 24 x 7 monitoring of performance?
❒ Email protocol/limitations of monitoring software - MAPI
❒ Does your site include Connectivity to a Paging System?
❒ Timing - continual, hourly, daily, weekly
❒ Hardware limitations - does the monitoring software have to run on a dedicated
machine?
❒ Memory - leaks, cache, issues resulting from continual running
Network Impacts
❒ Have you considered IPv4 vs. IPv6 (32-bit vs. 128-bit IP addresses)?
❒ Have you tested the impact of Secure Proxy Server?
Security
❒ Is security adequate?
❒ Is confidentiality/user privacy protected?
❒ Is access only successful with 128 bit browsers?
❒ Does the site prompt for user name and password?
❒ Does the site ask for personal information from children? If so, is it acquired through secure
pages with warning information for parents?
❒ Are there Digital Certificates, both at server and client?
❒ Have you verified where encryption begins and ends?
❒ Are concurrent log-ons permitted?
❒ Does the application include time-outs due to inactivity?
❒ Is bookmarking disabled on secure pages?
❒ Does the key/lock display on status bar for insecure/secure pages?
❒ Is Right Click, View, Source disabled?
❒ Are you prevented from doing direct searches by editing content in the URL?
❒ If using Digital Certificates, test the browser Cache by enrolling for the Certificate and
completing all of the required security information. After completing the application and
installation of the certificate, try using the <-- BackSpace key to see if that security
information is still residing in Cache. If it is, then any user could walk up to the PC and
access highly sensitive Digital Certificate security information.
❒ Is there an alternative way to access secure pages for browsers under version 3.0, since SSL
is not compatible with those browsers?
❒ Do your users know when they are entering or leaving secure portions of your site?
❒ Does your server lock out an individual who has tried to access your site multiple times with
invalid login/password information?
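The final lockout check has a simple expected behavior worth pinning down before testing it. The sketch below is a toy model of the rule, not any real server's logic; the three-attempt threshold and the in-memory counter are assumptions chosen for illustration.

```python
# Toy model of a login lockout rule (threshold and storage are
# illustrative assumptions, not a real server implementation).
MAX_ATTEMPTS = 3
failed = {}  # username -> consecutive failed attempts

def login(user, password, valid_password="s3cret"):
    if failed.get(user, 0) >= MAX_ATTEMPTS:
        return "LOCKED"  # refuse even a correct password once locked
    if password != valid_password:
        failed[user] = failed.get(user, 0) + 1
        return "INVALID"
    failed[user] = 0     # success resets the counter
    return "OK"

for attempt in ("a", "b", "c", "s3cret"):
    print(login("alice", attempt))
# After three failures, even the correct password is refused.
```

The interesting test cases sit at the boundary, exactly as boundary value analysis suggests: two failures followed by a correct password should succeed, three failures should lock the account, and the lockout must not leak across users.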