Thursday, May 20, 2010
Testing School---Types of Errors
User Interface Errors: Missing or wrong functions, doesn't do what the user expects, missing information, misleading or confusing information, wrong content in Help text, inappropriate error messages. Performance issues: poor responsiveness, can't redirect output, inappropriate use of the keyboard.
Error Handling: Inadequate protection against corrupted data, inadequate tests of user input, inadequate version control; ignored overflow and data-comparison errors; poor error recovery, such as aborting errors and recovery from hardware problems.
Boundary-related errors: Boundaries in loops, space, time, and memory; mishandling of cases outside the boundary.
Calculation errors: Bad logic, bad arithmetic, outdated constants, incorrect conversion from one data representation to another, wrong formula, incorrect approximation.
Initial and later states: Failure to set a data item to zero, to initialize a loop-control variable, to re-initialize a pointer, or to clear a string or flag; incorrect initialization.
Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result, Missing/wrong default, Data Type errors.
Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit or user abort.
Race Conditions: Assumption that one event or task has finished before another begins, resource races, a task starts before its prerequisites are met, messages cross or don't arrive in the order sent.
Load Conditions: Required resources are not available, No available large memory area, Low priority tasks not put off, Doesn't erase old files from mass storage, Doesn't return unused memory.
Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence, Misunderstood status or return code, Wrong operation or instruction codes.
Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or program files.
Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case, Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify fixes, Failure to provide summary report.
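Several of the categories above, boundary errors in particular, come down to small coding slips. As an illustrative sketch (not taken from any real project), here is a classic boundary error in a loop, next to its fix:

```python
def sum_first_n_buggy(values, n):
    # Boundary error: range(1, n) skips index 0 and stops at n-1,
    # so the lower boundary of the intended range is mishandled.
    total = 0
    for i in range(1, n):
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    # Correct handling of both loop boundaries: indices 0 .. n-1.
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = [10, 20, 30, 40]
print(sum_first_n_buggy(data, 3))  # 50 (wrong: misses data[0])
print(sum_first_n_fixed(data, 3))  # 60 (correct: 10 + 20 + 30)
```

A boundary-value test case that exercises n at its lowest and highest legal values catches this kind of bug immediately.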
Wednesday, April 28, 2010
Testing School---Keyword Driven Testing
Keyword Driven Testing:
The keyword-driven framework consists of the basic components given below:
1. Control File
2. Test Case File
3. Startup Script
4. Driver Script
5. Utility Script
1. Control File
a) Contains details of all the test scenarios to be automated
b) The user can select a specific scenario to execute by turning a flag on or off in the Control File
c) The Control File is in the form of an Excel worksheet and contains columns for Scenario ID, Execute (Y/N), Object Repository Path, and Test Case File Path
2. Test Case File
a) Contains the detailed steps to be carried out for the execution of a test case
b) It is also in the form of an Excel sheet and contains columns for Keyword, Object Name, and Parameter
3. Startup Script
a) The Startup Script performs initialization and reads the Control File
b) It then calls the Driver Script to execute all the scenarios marked for execution in the Control File
4. Driver Script
a) It reads the Test Case Files, checks the keywords, and calls the appropriate utility script functions based on the specific keyword
b) Error Handling is taken care of in the driver script.
5. Utility Scripts
a) Utility scripts perform generic tasks that can be used across applications. They should not be application dependent.
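The driver/utility split described above can be sketched in miniature. This is an illustrative Python sketch, not a real QTP framework: the keywords, object names and test-case rows are hypothetical, and the Test Case File is shown as in-memory rows instead of an Excel sheet:

```python
# Utility scripts: generic, application-independent actions keyed by keyword.
def do_click(obj, param):
    print(f"clicking {obj}")

def do_settext(obj, param):
    print(f"typing '{param}' into {obj}")

def do_verify(obj, param):
    print(f"verifying {obj} shows '{param}'")

KEYWORDS = {"Click": do_click, "SetText": do_settext, "Verify": do_verify}

# Test Case File: rows of (Keyword, Object Name, Parameter),
# normally stored in an Excel sheet.
test_case = [
    {"keyword": "SetText", "object": "txtUserName", "param": "admin"},
    {"keyword": "SetText", "object": "txtPassword", "param": "secret"},
    {"keyword": "Click",   "object": "btnLogin",    "param": ""},
    {"keyword": "Verify",  "object": "lblWelcome",  "param": "Welcome"},
]

def driver(rows):
    """Driver script: look up each keyword and call the matching utility.
    Error handling lives here, as described above."""
    for row in rows:
        action = KEYWORDS.get(row["keyword"])
        if action is None:
            print(f"ERROR: unknown keyword {row['keyword']!r}")
            continue
        action(row["object"], row["param"])

driver(test_case)
```

Note how a change to the test flow only touches the `test_case` rows; the driver and utilities stay unchanged, which is exactly the maintenance advantage claimed below.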
Advantages of the framework:
- The main advantage of this framework is the low cost of maintenance. If a test case changes, only the Test Case File needs to be updated; the Driver Script and Startup Script remain the same.
- There is no need to update the scripts in case of changes to the application.
Monday, April 26, 2010
Testing School---Troubleshooting: Before Reporting Any Bug?
Troubleshooting of:
· What’s not working?
· Why it’s not working?
· How can you make it work?
· What are the possible reasons for the failure?
The answer to the first question, "What's not working?", is sufficient to report the bug steps in the bug tracking system. So why answer the remaining three questions? Think beyond your responsibilities. Act smart: don't just follow routine steps without thinking beyond them. You should be able to suggest all possible solutions to resolve the bug, along with the efficiency and the drawbacks of each solution. This will increase your standing in your team and will also reduce the chance of your bugs getting rejected, not because of that respect but because of your troubleshooting skill.
Before reporting any bug, make sure it isn't your own mistake while testing: you may have missed setting an important flag, or you might not have configured your test setup properly.
Troubleshoot the reasons for the failure in the application, and report the bug only after proper troubleshooting. I have compiled a troubleshooting list. Check it out: what can the different reasons for failure be?
Reasons for failure:
1) If you are using a configuration file for testing your application, make sure the file is up to date as per the application requirements: many times a global configuration file is used to pick or set application flags. Failure to maintain this file as per your software requirements will lead to malfunctioning of the application under test, and you can't report that as a bug.
2) Check that your database is proper: a missing table is a common reason that an application will not work properly.
I have a classic example of this: one of my projects queried many monthly user database tables to show user reports. First the table's existence was checked in a master table (which maintained only the monthly table names), and then data was queried from the individual monthly tables. Many testers selected a big date range to see the user reports, but this often crashed the application because those tables were not present in the test machine's database, giving a SQL query error. They reported it as a bug, which subsequently got marked as invalid by the developers.
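A table-existence check like the one in this story can be run as a pre-flight step before reporting a bug. A sketch using an in-memory SQLite database; the table names are hypothetical stand-ins for the monthly tables described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master_tables (table_name TEXT)")
conn.execute(
    "INSERT INTO master_tables VALUES ('users_2010_03'), ('users_2010_04')")
conn.execute("CREATE TABLE users_2010_03 (id INTEGER)")
# users_2010_04 is listed in the master table but missing: the exact
# situation that crashed the report query in the story above.

def missing_tables(conn):
    """Return tables the master table promises but the database lacks."""
    promised = {row[0] for row in
                conn.execute("SELECT table_name FROM master_tables")}
    actual = {row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")}
    return sorted(promised - actual)

print(missing_tables(conn))  # ['users_2010_04']
```

Running such a check on the test environment first tells you whether the crash is a product defect or just an incomplete test database.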
3) If you are working on an automation testing project, debug your script twice before concluding that the application failure is a bug.
4) Check that you are not using invalid access credentials for authentication.
5) Check that the software versions are compatible.
6) Check whether there is any hardware issue that is not related to your application.
7) Make sure your application's hardware and software prerequisites are correct.
8) Check that all software components are installed properly on your test machine, and that the registry entries are valid.
9) For any failure, look into the 'system event viewer' for details. You can trace many failure reasons from the system event log file.
10) Before starting to test, make sure you have deployed all the latest version files to your test environment.
Thursday, April 22, 2010
Testing School---BVT (Build Verification Testing)
What is BVT?
A Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases are core functionality test cases that ensure the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to the developer for a fix.
BVT is also called smoke testing or build acceptance testing (BAT).
New Build is checked mainly for two things:
·Build validation
·Build acceptance
Some BVT basics:
·It is a subset of tests that verify the main functionalities.
·BVTs are typically run on daily builds, and if the BVT fails, the build is rejected and a new build is released after the fixes are done.
·The advantage of BVT is that it saves the effort of the test team in setting up and testing a build when major functionality is broken.
·Design BVTs carefully enough to cover basic functionality.
·Typically a BVT should not run for more than 30 minutes.
·BVT is a type of regression testing, done on each and every new build.
BVT primarily checks project integrity, i.e. whether all the modules are integrated properly or not. Module integration testing is very important when different teams develop project modules. I have heard of many cases of application failure due to improper module integration; in the worst cases, a complete project gets scrapped due to failure in module integration.
What is the main task in a build release? Obviously the file 'check in', i.e. including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health: whether all the new and modified files are included in the release, whether all file formats are correct, and whether every file's version, language and associated flags are right.
These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.
Which test cases should be included in BVT?
This is a very tricky decision to make before automating the BVT task. Keep in mind that the success of the BVT depends on which test cases you include in it.
Here are some simple tips to include test cases in your BVT automation suite:
· Include only critical test cases in BVT.
· All test cases included in BVT should be stable.
· All the test cases should have known expected result.
· Make sure all included critical functionality test cases are sufficient for application test coverage.
Also, do not include modules in the BVT that are not yet stable. For under-development features you can't predict the expected behavior, as these modules are unstable and may have known failures even before testing. There is no point using such modules or test cases in the BVT.
You can make the task of selecting critical functionality test cases simple by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing major project features and scenarios.
Example: Test cases to be included in BVT for Text editor application (Some sample tests only):
1) Test case for creating text file.
2) Test cases for writing something into text editor
3) Test cases for copy, cut, paste functionality of text editor
4) Test cases for opening, saving, deleting text file.
These are some sample test cases which can be marked as 'critical'; for every minor or major change in the application, these basic critical test cases should be executed. This task can easily be accomplished by BVT.
BVT automation suites need to be maintained and modified from time to time, e.g. to include new test cases when new stable project modules become available.
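The idea above, run only stable critical cases and report whether the build is testable, can be sketched as a tiny BVT runner. The two text-editor checks are hypothetical stand-ins for real critical test cases:

```python
import os
import tempfile

# Hypothetical critical test cases for the text editor example above.
def test_create_file():
    path = os.path.join(tempfile.mkdtemp(), "note.txt")
    open(path, "w").close()
    assert os.path.exists(path)

def test_write_and_read():
    path = os.path.join(tempfile.mkdtemp(), "note.txt")
    with open(path, "w") as f:
        f.write("hello")
    assert open(path).read() == "hello"

CRITICAL_CASES = [test_create_file, test_write_and_read]

def run_bvt(cases):
    """Run the critical suite; return True only if the build is testable."""
    failures = []
    for case in cases:
        try:
            case()
        except Exception as exc:
            failures.append((case.__name__, exc))
    for name, exc in failures:
        print(f"BVT FAIL: {name}: {exc}")
    return not failures

print("build testable:", run_bvt(CRITICAL_CASES))
```

In a real pipeline the boolean result would decide whether the build is handed to the test team or assigned back to the developers, and the failure log would be mailed out as described in the process below.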
What happens when the BVT suite runs:
Say the build verification automation test suite is executed after every new build.
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result of the BVT.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the cause of failure is a defect in the build, all the relevant information, with failure logs, is sent to the respective developers.
5) The developer, on his initial diagnosis, replies to the team about the cause of failure: is it really a bug, and if so, what is his bug-fixing plan?
6) Once the bug is fixed, the BVT test suite is executed again; if the build passes the BVT, it is passed to the test team for further detailed functional, performance and other tests.
This process gets repeated for every new build.
Why does a BVT or build fail?
The BVT breaks sometimes. This doesn't mean that there is always a bug in the build. There are other reasons for a build to fail, such as a test case coding error, an automation suite error, an infrastructure error, or hardware failures.
You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.
Tips for BVT success:
1) Spend considerable time writing BVT test case scripts.
2) Log as much detailed info as possible to diagnose the BVT pass or fail result. This will help the developer team debug and quickly find the cause of failure.
3) Select stable test cases to include in the BVT. For new features, if a new critical test case passes consistently on different configurations, promote it into your BVT suite. This will reduce the probability of frequent build failures due to new, unstable modules and test cases.
4) Automate the BVT process as much as possible: from the build release process to the BVT result, automate everything.
5) Have some penalties for breaking the build; some chocolates or a team coffee party from the developer who broke the build will do.
Conclusion:
BVT is nothing but a set of regression test cases that are executed for each new build. It is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. BVT can be run by a developer or a tester; the BVT result is communicated throughout the team, and immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in the BVT, and these test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. It saves significant time, cost and resources, and spares the test team the frustration of an incomplete build.
Developers might be doing the unit and integration testing, and not necessarily the BVT. The BVT is most of the time done by a test engineer. Once the build team deploys the test build on the test environments, it's the job of the test engineer to perform the BVT (sniff, sanity, smoke, etc.).
If you are able to test the application and are executing the test cases, it follows that the BVT has passed; otherwise you wouldn't have been able to test the application at all. It doesn't matter whether the testing is done manually or is automated using a tool. If the build has a new feature, how can you automate it?
Following is the process that you might follow:
The developer initiates a mail to the build team (also marked to the test team, with a description of what is to be tested in the new build) to make the build >> The build team makes the build, deploys it on the test machines and replies to all, asking the test team to continue testing; if the build fails, the build team says so in the mail >> If the BVT fails, the tester replies to the mail stating that the BVT failed, along with whatever logs are available; otherwise testing continues.
Monday, April 19, 2010
Testing School---Some Tips about Testing
Importance of Software Testing
On the Internet we can see a lot of articles explaining or listing the losses caused by poor, low-quality software products.
For example, how will you feel if a bug in bank software shows your bank balance as 0 instead of some thousands?
And if you are a student, what will your state be if your marksheet shows your score as 0 instead of a good score?
In these cases we would feel better seeing a notification (e.g. "Not able to show your balance due to an unexpected error" or "Couldn't print your marksheet because of an unexpected issue") instead of seeing wrong data.
Testing plays an important role in avoiding these situations.
So we can say that testing is necessary and important, even though it cannot guarantee a 100% error-free software application.
And also,
- The cost of fixing a bug is higher when it is found at a later stage than when it is found early.
- Quality can be ensured only by testing. In a competitive market, only a quality product can survive for a long time.
Testing is necessary even though it is not possible to do 100% testing of an application.
One more important reason for testing is that the user or production environment will be completely different from the development environment.
For example, a webpage developer may be using Firefox as the browser while developing a webpage, but users may be using different browsers such as Internet Explorer, Safari, Chrome or Opera.
A web page that appears good in Firefox may not appear good in other browsers (particularly IE). So ultimately the user will not be happy, even if the developer puts a lot of effort into the webpage. As we know that user satisfaction is most important for the growth of any business, testing becomes all the more important.
So we can treat testers as the representatives of the users.
Basics of Quality Assurance (QA) in Software Development
Quality assurance is the most important factor in any business or industry.
The same applies to software development.
Spending some additional money to get a high-quality product will definitely give more profit.
However, it is not true that expensive products are high-quality products. Even an inexpensive product can be high quality if it meets the customer's needs and expectations.
The quality assurance cycle consists of four steps: Plan, Do, Check, and Act. These steps are commonly abbreviated as PDCA.
The four quality assurance steps within the PDCA model are
Plan: Establish objectives and processes required to deliver the desired results.
Do: Implement the process developed.
Check: Monitor and evaluate the implemented process by testing the results against the predetermined objectives.
Act: Apply the actions necessary for improvement if the results require changes.
For getting appropriate quality output in software development we need to follow SQA (Software Quality Assurance) process in each phase (Planning, Requirement Analysis, Design, Development, Integration & Test, Implementation and Maintenance) of the software development lifecycle.
We should follow the solutions below to avoid many software development problems.
Solid requirements - clear, complete, attainable, detailed and testable requirements that are agreed upon by all players (customer, developers and testers).
Realistic schedules - allocate enough time for planning, design, testing, bug fixing, re-testing and documentation.
Adequate testing - start testing early and re-test after fixes or changes.
Avoid unnecessary changes to the initial requirements once coding has started.
Require walk-throughs and inspections.
Writing Good Test Cases and Finding Bugs effectively
To develop a bug-free software application, writing good test cases is essential.
Here, we will see how to write good test cases.
Before that, we should understand what a good test case is.
There is no solid definition of a "good test case".
I consider a test case "good" only when a tester is happy to follow the steps of a test case written by another tester.
Test cases are useful only when people actually use them.
If a test case is poorly written, with excessive unwanted steps, most testers won't read it fully. They will read a few lines and execute it based on their own understanding, which will mostly be wrong.
On the other hand, if it has too few details, it is difficult to execute.
As of now, I am thinking of the below points for writing effective test cases.
Before you start writing test cases, become familiar with the Application Under Test (AUT). You can become familiar with the application by doing some ad hoc or exploratory testing.
We should read the requirements clearly and completely. Any questions about the requirements should be clarified with the appropriate person (e.g. the customer or the business team). It is also good practice to gather some basic domain knowledge before reading requirements and writing test cases, and we can have discussions or meetings with the developers and the business team.
A very important point is that we should use only simple language and style to write the test cases, so that anyone can easily understand them without any ambiguity.
Give meaningful and easily understandable Test case ID/number.
For example, if you are writing test cases for testing the Login module, you can assign Test Case IDs as below.
1a - for testing positive scenario such as entering valid username and valid password.
1b - for testing negative scenario such as entering invalid username and invalid password.
By giving test case numbers as above, instead of plain sequential numbers, we can easily add a new case such as the one below without needing to adjust or modify the numbers of any subsequent test cases.
1c- for testing negative scenario such as entering valid username and invalid password.
And also, if we have any similar module we can give separate sequence number for specifying the module.
For example, assume that we are having separate login modules for User and Admin with little changes.
In this case we can give number as below,
1.1-First case in User module.
1.2-Second case in User module.
2.1-First case in Admin module
2.2-Second case in Admin module.
If the Test Description/Steps/Expected Results of 2.1 are mostly the same as 1.1, then we should refer to 1.1 in 2.1 instead of writing the same details again.
By doing this, we avoid redundant details and keep the test case document clear.
The Test Description should be short and should uniquely represent the current test scenario without any ambiguity.
In any situation, don't use an "if condition" in the test steps. Always address only one scenario per test case; this helps keep the Expected Result unambiguous.
Give some sample test data that will be useful for executing the test cases.
If the test case requires any preconditions or prerequisites, don't forget to mention them.
The best approach is to arrange and order the test cases so that the need for specifying preconditions is minimal.
For example, suppose we need to write test cases for testing user creation, user modification and user deletion.
User modification and user deletion require an already-created user as a precondition.
If we arrange the test cases in the order below, we can avoid the need to specify any preconditions or prerequisites.
1-Test case for creating user.
2-Test case for verifying duplicate/existing user when adding another user with same username.
3-Test case for modifying user.
4-Test case for deleting user.
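The ordering above can be sketched as a script in which each test case leaves behind exactly the state the next one needs, so no precondition section is required. The in-memory dictionary is a hypothetical stand-in for the application under test:

```python
users = {}  # in-memory stand-in for the application's user store

# 1 - Test case for creating a user.
users["alice"] = {"role": "member"}
assert "alice" in users

# 2 - Test case for the duplicate-user check: the user created in step 1
#     already exists, so no separate precondition is needed.
assert "alice" in users, "duplicate check: username already taken"

# 3 - Test case for modifying the user created above.
users["alice"]["role"] = "admin"
assert users["alice"]["role"] == "admin"

# 4 - Test case for deleting the user, which also cleans up the state.
del users["alice"]
assert "alice" not in users
```

Reversing this order would force every test case to carry a "create user first" precondition, which is exactly what the ordering avoids.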
Keep a traceability matrix to make sure that we have written test cases covering all requirements.
After completing all the positive scenarios, think about all possible negative scenarios, to have test cases that will effectively find most of the bugs.
For doing this we can refer to the alternate-flow section of the use case document, and we can think about different data, boundary conditions, different navigation paths and multi-user environments.
In the test case document, we can give links to screenshots explaining the steps and/or expected results with pictures. However, it is not good practice to place the screenshots within the test case document itself unless it is essential.
Many tools are available to capture the screen, along with user actions, as video. We can use them to keep a video clearly explaining the steps and expected results when a test case requires complex steps, and give a link to this video from the test case document.
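The traceability matrix mentioned earlier can start as a simple requirement-to-test-case mapping. A sketch with hypothetical requirement and test case IDs:

```python
# Requirement ID -> list of test case IDs that cover it.
traceability = {
    "REQ-01 login":       ["1a", "1b", "1c"],
    "REQ-02 create user": ["2.1"],
    "REQ-03 delete user": [],  # not yet covered
}

# Any requirement with an empty list has no test case yet.
uncovered = [req for req, cases in traceability.items() if not cases]
print("requirements without test cases:", uncovered)
```

Even this minimal form makes coverage gaps visible at a glance, which is the whole point of keeping the matrix.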
Tips and Tricks for doing AdHoc Testing
It is not always possible to follow a formal testing process with a written test plan and test cases.
In some cases we may need to go with ad hoc testing because of time or resource constraints.
Ad hoc testing is a part of exploratory testing.
It is done without planning and documentation.
Ad hoc testing helps to find defects earlier, and we know that the earlier a defect is found, the cheaper it is to fix.
Here I am listing some tips for doing ad hoc testing effectively.
In case of UI (User Interface) testing, test all navigation including Back button navigation.
Go through all the pages of the application to find any broken links, and also make sure that each and every page has proper links to reach the other pages, either directly or indirectly.
Check whether all the images have an alt attribute.
View the application screen or webpage at different screen resolutions on your monitor.
Test the webpage in many different web browsers such as Internet Explorer, Firefox, Chrome, Safari, etc.
Test the tab order and default focus on all the pages.
Try to enter and save test data containing special characters such as single quotes, double quotes and commas.
You can also try entering text with HTML tags such as < and > in a textbox.
Try to load an authenticated webpage directly, by entering the URL in the browser without logging in.
Try all the possibilities of boundary values, such as entering a lot of data in a textbox and entering negative values in numeric fields.
Remember to navigate the application from two different machines or browsers simultaneously; especially concentrate on testing concurrent database save/update operations.
If possible or necessary, test the application on different operating systems (OS).
If your webpage uses Flash files, try to see the behavior of the webpage when it is loaded on a machine that does not have Flash Player.
Instead of testing everything from your local machine, try to test some screens by hosting your site on a remote machine. This will help identify unexpected issues that may occur due to network latency.
Test session timeout, cookie expiry and script execution timeout.
Try refreshing your confirmation screen many times to verify whether multiple refreshes save or insert the data multiple times.
Test with different date and time formats if your webpage or application has date and time entry fields, and think about time zones too.
Make sure that number, currency and name formats are displayed correctly and uniformly on all pages.
When testing an edit/modify/update feature, modify the values of all fields and make sure that everything gets updated correctly.
When testing a delete feature, make sure that all the related data also gets deleted. For example, when deleting a question, its answers should also be deleted.
Also make sure that necessary constraints are enforced correctly. For example, deletion of a question should not be allowed if the question is already used in some other module.
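A few of the checks above, images missing an alt attribute and collecting links to verify, can be scripted even during ad hoc testing. A sketch using Python's standard html.parser on a small sample page:

```python
from html.parser import HTMLParser

class PageAudit(HTMLParser):
    """Collect links to verify later and flag <img> tags without alt text."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.imgs_missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        if tag == "img" and not attrs.get("alt"):
            self.imgs_missing_alt.append(attrs.get("src", "?"))

sample = ('<a href="/home">Home</a>'
          '<img src="logo.png">'
          '<img src="ok.png" alt="logo">')
audit = PageAudit()
audit.feed(sample)
print("links to check:", audit.links)
print("images missing alt:", audit.imgs_missing_alt)
```

In practice you would feed the parser the HTML of each real page and then request each collected link to find the broken ones.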
Best practices in Software Testing
There are a lot of materials available on the Internet explaining best practices in software testing.
Here I am writing only the very essential things for medium-sized projects, based on my experience and viewpoint.
We should start our testing activities at the beginning of software development itself.
Understanding the scope and purpose of the project helps to judge the degree and level of testing required.
Testers should go through the requirements in detail, without missing any points given by the client, before writing test cases.
The test cases should be updated immediately once the client gives a new requirement or changes the requirements.
The test case document should cover all the requirements, even if some requirements are non-testable; these should be marked as non-testable. Keeping a traceability matrix document is helpful for achieving this.
The test case document should clearly show the hierarchy and arrangement of the test cases. It should have a clear approach to arranging test cases when many test cases exist with similar steps. It is not advisable to copy and paste similar test cases many times; instead, we can specify only the additional or different steps.
The description of each test case should be written clearly, after understanding the context or module it describes. Steps should be written only after executing them manually. Expected results should not have any ambiguity. If required, preconditions or prerequisites should be mentioned.
Planning and creating a test plan document is essential even for small, short-term projects. The test plan document need not contain all the details, but it should contain at least the very basic components such as scope, schedule, risks, environments and testers.
Planning of the development, test and staging environments should be done clearly. It is very important to move code and maintain code versions in each environment without any ambiguity or confusion. Testers should know which version of the code and data is available in each environment.
Test execution should be done carefully, based on the test cases. It is very important to use appropriate test data; it is better to create the different sets of test data during test case creation itself. The test data should cover valid formats, invalid formats and boundary values.
The test result (pass/fail) should be clearly recorded for each test case. It is good practice to mention the actual behavior when a test case fails.
The test results should be communicated to the other parties (developers, business/client) daily, even if not all the test cases have been executed. In this case, we should add a note indicating that test execution is still in progress.
The test execution summary document or mail should clearly mention the date of execution, environment, test name and test result.
If most of the test cases are failing continuously, there is no point in continuing the execution; it should be resumed once the major issues are fixed.
It is nice to highlight the testing status (pass, fail, yet to start) in an appropriate color. However, just highlighting the test case with a color, without specifying the status, is not good practice: when a single-color printout of the test report is taken, the status cannot be seen from the color.
It is good practice to do some ad hoc testing in addition to the test case execution.
Clear and proper communication and co-ordination within the testing team, and also with other teams (developers, client/business), is essential.
The bug report should be prepared very clearly, with all the essential details, especially the steps and test data for reproducing the bug. The bug report should help the developers to reproduce and fix the bug.
Re-testing and a small regression test are essential whenever a reported bug is fixed.
It is not good to do all the testing manually, as manual testing takes more time and effort, is difficult to manage, and is not consistent or repeatable. So it is better to automate the test cases using test tools such as QTP (QuickTest Professional). We can even use simple shell scripts and VBScript to automate some parts of the testing.
Friday, March 26, 2010
Testing School---How to do System Testing
Testing the software system or application as a whole is referred to as system testing. System testing is done on the complete application software to evaluate the software's overall compliance with the business, functional and end-user requirements. System testing is black box testing, so knowledge of the internal design, structure or code is not required for this type of testing.
In system testing, a software test professional aims to detect defects both within the interfaces and within the software as a whole. During integration testing, in contrast, the test professional aims to detect the defects between the individual units that are integrated together.
During system testing, the focus is on the software's design and behavior, and even the believed expectations of the customer. So we can also refer to the system testing phase as the investigatory testing phase of the software development life cycle.
At what stage of the SDLC does system testing come into the picture:
After the integration of all components of the software being developed, the whole software system is rigorously tested to ensure that it meets the specified business, functional and non-functional requirements. System testing builds on the unit testing and integration testing levels. Generally, a separate and dedicated team is responsible for system testing, and it is performed on a staging server.
Why system testing is required:
It is the first level of software testing where the software / application is tested as a whole.
It is done to verify and validate the technical, business, functional and non-functional requirements of the software. It also includes the verification & validation of software application architecture.
System testing is done on a staging environment that closely resembles the production environment where the final software will be deployed.
Entry Criteria for System Testing:
Unit Testing must be completed
Integration Testing must be completed
Complete software system should be developed
A software testing environment that closely resembles the production environment must be available (a staging environment).
System Testing in seven steps:
1. Creation of the system test plan
2. Creation of system test cases
3. Selection / creation of test data for system testing
4. Automation of test case execution (if required)
5. Execution of test cases
6. Bug fixing and regression testing
7. Repeat of the software test cycle (if required, on multiple environments)
Contents of a system test plan: The contents of a software system test plan may vary from organization to organization or project to project. It depends on how we have created the software test strategy, project plan and master test plan of the project. However, the basic contents of a software system test plan should be:
- Scope
- Goals & Objectives
- Area of focus (Critical areas)
- Deliverables
- System testing strategy
- Schedule
- Entry and exit criteria
- Suspension & resumption criteria for software testing
- Test Environment
- Assumptions
- Staffing and Training Plan
- Roles and Responsibilities
- Glossary
How to write system test cases: System test cases are written in a similar way to functional test cases. However, while creating system test cases, the following two points need to be kept in mind:
- System test cases must cover the use cases and scenarios
- They must validate all types of requirements - technical, UI, functional, non-functional, performance, etc.
As per Wikipedia, there are 24 types of testing that need to be considered during system testing. These are:
GUI software testing, Usability testing, Performance testing, Compatibility testing, Error handling testing, Load testing, Volume testing, Stress testing, User help testing, Security testing, Scalability testing, Capacity testing, Sanity testing, Smoke testing, Exploratory testing, Ad hoc testing, Regression testing, Reliability testing, Recovery testing, Installation testing, Idempotency testing, Maintenance testing, Recovery and failover testing, Accessibility testing
The format of system test cases contains:
Test Case ID - a unique number
Test Suite Name
Tester - name of the tester who executes or writes the test case
Requirement - requirement ID or a brief description of the functionality / requirement
How to Test - Steps to follow for execution of the test case
Test Data - Input Data
Expected Result
Actual Result
Pass / Fail
Test Iteration
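The format above maps naturally onto a small record type; a minimal Python sketch (the `TestCase` class and its `status` helper are illustrative, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of a system test case document (fields as listed above)."""
    case_id: str        # Test Case ID - a unique number
    suite: str          # Test Suite Name
    tester: str         # who writes / executes the case
    requirement: str    # requirement ID or a short description
    steps: list         # How to Test - execution steps
    test_data: str      # input data
    expected: str       # Expected Result
    actual: str = ""    # Actual Result, filled in after execution
    iteration: int = 1  # Test Iteration

    def status(self) -> str:
        """Pass / Fail column, derived by comparing actual to expected."""
        if not self.actual:
            return "Not Run"
        return "Pass" if self.actual == self.expected else "Fail"

tc = TestCase("TC-001", "Login", "alice", "REQ-12",
              ["open login page", "enter credentials", "click login"],
              "user1 / 123456", expected="user is logged in")
print(tc.status())                 # prints "Not Run" until a result is recorded
tc.actual = "user is logged in"
print(tc.status())                 # prints "Pass"
```

Keeping the Pass/Fail column derived, rather than typed in by hand, avoids the common mistake of a row marked "Pass" whose actual and expected results disagree.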
Thursday, March 25, 2010
Testing School---Features to be tested in an osCommerce website
Although osCommerce is still in its development stage, the available Milestone releases are considered to be stable with the following features:
General Functionality
Compatible with all PHP 4 versions
All features enabled by default for a complete out-of-the-box solution
Object oriented backend (3.0)
Completely multilingual with English, German, and Spanish provided by default
Setup / Installation
Automatic web-browser based installation and upgrade procedure
Design / Layout
Template structure implementation to:
allow layout changes to be adaptive, easy, and quick to make (3.0)
allow easy integration into an existing site (3.0)
Support for dynamic images
Administration / Backend Functionality
Supports unlimited products and categories
Products-to-categories structure
Categories-to-categories structure
Add/Edit/Remove categories, products, manufacturers, customers, and reviews
Support for physical (shippable) and virtual (downloadable) products
Administration area secured with a username and password defined during installation
Contact customers directly via email or newsletters
Easily backup and restore the database
Print invoices and packaging lists from the order screen
Statistics for products and customers
Multilingual support
Multicurrency support
Automatically update currency exchange rates
Select what to display, and in what order, in the product listing page
Support for static and dynamic banners with full statistics
Customer / Frontend Functionality
All orders stored in the database for fast and efficient retrieval
Customers can view their order history and order statuses
Customers can maintain their accounts
Addressbook for multiple shipping and billing addresses
Temporary shopping cart for guests and permanent shopping cart for customers
Fast and friendly quick search and advanced search features
Product reviews for an interactive shopping experience
Foreseen checkout procedure
Secure transactions with SSL
Number of products in each category can be shown or hidden
Global and per-category bestseller lists
Display what other customers have ordered with the current product shown
Breadcrumb trail for easy site navigation
Product Functionality
Dynamic product attributes relationship
HTML based product descriptions
Automated display of specials
Control if out of stock products can still be shown and are available for purchase
Customers can subscribe to products to receive related emails/newsletters
Payment Functionality
Accept numerous offline payment processing methods (cheque, money order, offline credit card processing, ..)
Accept numerous online payment processing services (PayPal, 2CheckOut, Authorize.net, iPayment, ..)
Disable certain payment services on a per-zone basis
Shipping Functionality
Weight, price, and destination based shipping modules
Real-time quotes available (UPS, USPS, FedEx, ..)
Free shipping based on amount and destination
Disable certain shipping services on a per-zone basis
Tax Functionality
Flexible tax implementation on a state and country basis
Set different tax rates for different products
Charge tax on shipping on a per shipping service basis
Friday, March 5, 2010
Testing School---The complexity of testing
The following example shows why testing is so complicated.
You have a screen that contains one field only, named "username". The properties of the field are:
The field can contain only 6 alphanumeric characters, and you have a keyboard with only alphanumeric keys.
The field is case sensitive.
Q: How many tests can you make in order to say that the screen is 100% bug free? Take a minute or two to answer this to yourself before you continue reading.
Well, the answer is 56,800,235,586 (about 56 billion tests). Now, let's try to explain the answer:
The field contains:
6 places
The keyboard contains:
26 lower case chars
26 upper case chars
10 digits
We have 62 options for each place (the sum of keyboard options: 26 lower case chars + 26 upper case chars + 10 digits). The field has 6 places, each with 62 options, so the number of tests is:
62*62*62*62*62*62 = 62^6 = 56,800,235,584
Yes, this is the number of tests we need to perform in order to claim the field is 100% bug free.
But we need to add 2 more tests:
one test that does not insert any value into the field;
a second test that inserts 7 chars into the field.
So the total number of tests is: 56,800,235,586.
If we run 500 tests a day, it will take us about 311,234 years (assuming we don't jump out of the window first).
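The arithmetic above is easy to reproduce; a short Python check:

```python
# Reproduce the arithmetic above: 62 options per position, 6 positions,
# plus the empty-field test and the 7-character test.
options_per_place = 26 + 26 + 10      # lower case + upper case + digits
places = 6
exhaustive = options_per_place ** places
total = exhaustive + 2                # empty input + 7-char input

print(exhaustive)                     # 56800235584
print(total)                          # 56800235586
print(total // 500 // 365)            # 311234 years at 500 tests per day
```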
Testing School---All About Bugs and its reporting with Example
A bug (also called a 'fault' or 'defect') is unexpected behavior in the software. A bug can be something the software should do but doesn't, or doesn't do in the right way. A bug can also be the software doing something it shouldn't do. Usually, users don't like bugs. Bugs, especially critical ones, can cause users to stop working with the software.
Why do bugs occur?
There are several reasons that can cause a bug. Let us mention some of them:
Unfinished requirements
Requirements that are not detailed enough
Requirements with multiple meanings
Logic errors in the design documents
Code errors
Insufficient testing
Misunderstanding of user needs
Lack of documentation
Bug priority and severity
We prioritize bugs because we can't fix everything at once; we need to decide what to fix first, which also determines what should be retested first. One parameter for prioritizing a bug is its severity.
We define severity as the impact on the customer/end user. There are 4 common levels of severity:
Critical – software crash, hang, loss of data, corrupted data
High – causes serious problems to the user
Medium – causes problem but the user can work around it
Low – trivial issues like spelling mistakes
Of course, you can add more severity levels as long as you define each one well.
As testers we can also add suggestions. A suggestion is not a bug; it is a request or a change that we think will add value to the system. If project management decides to add it, it becomes a new requirement that is tested like any other requirement.
Let's take a word-processing application, for example, and review bug severity. A critical bug would be the system crashing every time we send a document to the printer. A high severity bug would be the "Save" and "Save As" functions not working; this broken feature has a high impact on the end user. A medium severity bug would be "Save" not working while "Save As" works well: the user can't "Save", but he can work around it using "Save As". A low severity bug would be "File/Sent To" written instead of "File/Send To".
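The four severity levels can be modeled as an enumeration; a sketch (the `Severity` enum and the mapping of the word-processor examples onto it are illustrative):

```python
from enum import Enum

class Severity(Enum):
    """The four common severity levels described above."""
    CRITICAL = 1   # crash, hang, loss or corruption of data
    HIGH = 2       # serious problem, no workaround
    MEDIUM = 3     # problem exists but the user can work around it
    LOW = 4        # trivial issues such as spelling mistakes

# The word-processor examples from the text, mapped to levels:
examples = {
    "crash when printing a document": Severity.CRITICAL,
    "both Save and Save As broken": Severity.HIGH,
    "Save broken but Save As works": Severity.MEDIUM,
    '"File/Sent To" instead of "File/Send To"': Severity.LOW,
}
for bug, sev in examples.items():
    print(f"{sev.name:8} {bug}")
```

Numeric values make the levels sortable, so a bug list can be ordered by severity directly.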
How to report a bug?
Why is it so important to report a bug?
Reporting a bug helps the programmer to reproduce the bug and to solve it.
It helps us to retest and see if the bug was fixed and if the fix is correct because we know what the bug was and how to try to reproduce it.
This is our product. We know how to find bugs. This is how we can show our knowledge and value to the project management.
You need to report a bug in a certain way. What can happen if we report a bug improperly?
The programmer will not be able to reproduce it and fix it.
We will not be able to retest it, and our tester coworkers will not be able to retest it either, because they will not understand it and we will not remember what we meant three months ago when we reported it.
The programmers might think we are not professional testers because they can't reproduce the bug.
The ping-pong game – the programmer sends the bug back to the tester because he doesn't understand it, and the tester sends it back to the programmer, until the bug is fixed or thrown away.
Steps to perform before you report a bug in the bug tracking system:
Try to reproduce the bug at least one more time.
Search for the bug in the bug tracking software. If it exists, check whether you can add valuable information; if it doesn't exist, report it as a new bug.
Steps to write a bug:
Write a short and meaningful title which describes the bug.
Write a description of the bug, what should be the correct result, the error message and any information that can help the programmer to isolate the bug.
Write how to reproduce the bug, breaking the steps down logically:
Use the most straightforward path that triggers the bug.
Describe the steps.
Check that the bug is reproducible.
Very bad example:
I can’t login to the bank account. Try to go to the account and you will not be able to do so.
Good example:
I am unable to log in to the web site.
Open IE 6.5 on Windows XP.
Enter the bank account URL (www.bank.co.il) in the location bar and hit enter.
Insert "user1" in login name field.
Insert "123456" in password field.
Click “login”.
Result: nothing happens; the system does nothing.
It should log you in, because this username and password exist in the database.
That was a very high-level description of how to report a bug. Usually we add more information, such as:
Short but clear and explanatory description – the reader should understand the essence of the bug from it.
Environment setup - here we add the relevant configuration dimensions, such as site, setup, all software used and its versions, all hardware used, etc.
Full description of the bug - try not to repeat the short description. Here we explain how the bug affects the end user, if that is not trivial, and what the user cannot do because of the bug. We also note whether the bug affects traffic, blocks a feature, affects management, stops testing, etc. In addition, we state whether the bug can be worked around and, if so, explain in detail how.
Full scenario for reproducing the bug - the steps made by the tester; attach the test case from the STD. Also attach all relevant links, files, images, core files and logs.
Remember that many people need to read and understand your bug so you need to give any information that can help them to understand the bug, to reproduce it and to fix it.
Because many people read it, write it as clearly and simply as you can, and don't forget to use the spell checker (I usually write all the bug info in a Word document, fix all typos and then copy it into the QC/TD/bug tracker system).
Testing School---Boundary Values Testing with example
“Boundary Values Testing” is a method that tests the boundaries, whether they are input, output or performance boundaries. Our tests focus on the boundary values rather than the entire range of data. We use it when we have a field that can contain a range of values as an input, an output or a requirement.
How to use “Boundary Values Testing”?
If you have a range: (a to b), you will test the following:
Test Case   Value   Expected Result
1           a-1     Invalid
2           a       Valid
3           a+1     Valid
4           b-1     Valid
5           b       Valid
6           b+1     Invalid
According to the ISTQB in "Boundary Values Testing" we only test the following:
Test Case   Value   Expected Result
1           a-1     Invalid
2           a       Valid
3           b       Valid
4           b+1     Invalid
Why does ISTQB have fewer test cases? Because if 'a' and 'a-1' work well, we can assume 'a+1' works well too. The same claim applies to the upper bound: if 'b' and 'b+1' work well, we can assume 'b-1' works well.
Whichever variant of the "Boundary Values" method you use is fine. Instead of testing the entire range, you can test 6 or 4 cases and still have confidence that the software works well.
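The two variants above can be generated programmatically; a minimal Python sketch (the function names are my own, not from the text):

```python
def boundary_values(a, b, istqb=False):
    """Return the boundary test values for an inclusive range (a to b).

    Full method: a-1, a, a+1, b-1, b, b+1 (six cases).
    ISTQB two-value variant: a-1, a, b, b+1 (four cases).
    """
    if istqb:
        return [a - 1, a, b, b + 1]
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def expected(value, a, b):
    """Expected-result column: is the value inside the range?"""
    return "Valid" if a <= value <= b else "Invalid"

# The inventory field from Practice 1 below (range 10 to 100):
for v in boundary_values(10, 100):
    print(v, expected(v, 10, 100))
```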
Here are some more rules from my experience that you can take into consideration:
Always test 0 if it is inside the range, and sometimes even if it is out of range, because 0 has special effects on software (like dividing by 0).
Always test the empty string if it is inside the range, and sometimes even if it is out of range.
Sometimes you can test a value that lies inside the range and not on the boundary, just in case… (it lets you sleep better at night…).
Now, let us practice the "Boundary Values Testing" method.
Practice 1
You are testing inventory software that contains a field of the quantity of items. The field can contain any value between 10 to 100 units.
What is the max number of test cases you will need in order to test the field?
What is the minimum number of test cases you will need in order to test the field, using boundary testing?
Try to solve it and then continue to read the answer.
Practice 1 - answer
The field contains a range of 10 to 100 (10-100).
The maximum number of test cases needed to test the field is 93 (9, 10, 11, 12 … 98, 99, 100, 101), or 94 if we include the value 0.
The minimum number of test cases needed, using boundary testing, is 6 (9, 10, 11, 99, 100, 101), or 7 if we include 0.
Note that we would also need tests for alphabetic chars and special chars like % and *.
Practice 2
You have a password field that can contain up to 8 characters and must have at least 3 characters. What is the minimum number of test cases you will need in order to test the field? (Pay attention: the requirement specifies the field's length, not what kind of chars it can contain! In the real world we can't ignore that, but to simplify the example we will.)
Try to solve it and then continue to read the answer.
Practice 2 - answer
6 test cases: lengths 2, 3, 4, 7, 8, 9; or 7 test cases: lengths 2, 3, 4, 6, 7, 8, 9. We can also add a test case for the empty string.
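The password-length rule above can be checked with a small sketch; `valid_password_length` is an assumed name:

```python
def valid_password_length(pw: str) -> bool:
    """The requirement from Practice 2: at least 3 and at most 8 characters."""
    return 3 <= len(pw) <= 8

# Boundary lengths from the answer: 2, 3, 4, 7, 8, 9
for n in (2, 3, 4, 7, 8, 9):
    print(n, valid_password_length("x" * n))
```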
Practice 3
You have a field that can contain up to 5 digits, and can also be empty (0 digits). The value of the field can be in the range -5148 to +3544. What is the minimum number of test cases you will need in order to test the field using "Boundary" testing?
Try to solve it and then continue to read the answer.
Practice 3 - answer
For the length we have 5 test cases: lengths 0, 1, 4, 5 and 6.
For the range we have 7 test cases: -5149, -5148, -5147, 0, +3543, +3544, +3545.
Total of 12 tests cases.
Can we reduce the number of test cases to fewer than 12? (Try to solve it, then continue reading.)
Well, we can reduce it to 10 test cases by combining the value testing of the field with the length testing.
Case #   Length   Value     Expected Result
1        0        None      Valid
2        1        0         Valid
3        4        3543      Valid
4        4        3544      Valid
5        4        3545      Invalid
6        5        -5149     Invalid
7        5        -5148     Valid
8        5        -5147     Valid
9        6        123456    Invalid
10       6        -45322    Invalid
This is an example of why you must use your head all the time: sometimes you can combine methodologies with each other, or with your common sense, and reduce the amount of testing or create smart tests that will reveal beautiful and important bugs.
This example works only if you know a little about how the programmer implemented the code (gray box testing). If the programmer validates the length and the value together, then the reduction to 10 test cases is sound.
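The combined table above can be expressed as a single validation routine; a sketch assuming the programmer validates length and value together (treating the sign as not counted toward the digit length is my assumption):

```python
def valid_entry(text: str) -> bool:
    """Field rule from Practice 3: 0 to 5 digits, value in -5148..+3544.

    An empty field counts as valid (0 digits are allowed). The sign is
    not counted as a digit here -- an assumption for this sketch.
    """
    if text == "":
        return True
    digits = text.lstrip("+-")
    if not digits.isdigit() or len(digits) > 5:
        return False
    return -5148 <= int(text) <= 3544

# The ten combined test cases from the table above ("" stands for None):
cases = ["", "0", "3543", "3544", "3545", "-5149", "-5148", "-5147",
         "123456", "-45322"]
for c in cases:
    print(repr(c), valid_entry(c))
```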
Testing School---Equivalence Class Partitioning with example
We define "Equivalence Class Partitioning" as a method that can help you derive test cases. You identify classes of input or output conditions. The rule is that each member in the class causes the same kind of behavior of the system. In other words, the "Equivalence Class Partitioning" method creates sets of inputs or outputs that are handled in the same way by the application.
Another definition taken from Wikipedia:
"A technique in black box testing. It is designed to minimize the number of test cases by dividing tests in such a way that the system is expected to act the same way for all tests of each equivalence partition. Test inputs are selected from each class. Every possible input belongs to one and only one equivalence partition."
Why learn "Equivalence Class Partitioning"?
This method drastically reduces the number of test cases required, because we don't have the time, money or manpower to test everything. In addition, it can help you find many errors with the smallest number of test cases.
How to use "Equivalence Class Partitioning"?
There are 2 major steps we need to do in order to use equivalence class partitioning:
Identify the equivalence classes of input or output. Take each input or output condition described in the specification and derive at least 2 classes for it:
One class that satisfies the condition – the valid class.
A second class that doesn't satisfy the condition – the invalid class.
Design test cases based on the equivalence classes.
Example 1
In a computer store, the computer item can have a quantity between -500 and +500. What are the equivalence classes?
Answer: Valid class: -500 <= QTY <= +500
Invalid class: QTY > +500
Invalid class: QTY < -500
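The three classes above can be written as a classification function; a minimal sketch (the function name and class labels are my own):

```python
def qty_class(qty: int) -> str:
    """Classify a quantity into one of the three equivalence classes."""
    if qty > 500:
        return "invalid: QTY > +500"
    if qty < -500:
        return "invalid: QTY < -500"
    return "valid: -500 <= QTY <= +500"

# One representative value per class is enough:
for q in (0, 501, -501):
    print(q, qty_class(q))
```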
Example 2
In a computer store, the computer item type can be P2, P3, P4, and P5 (each type influences the price). What are the equivalence classes?
Answer: Valid class: type is P2
Valid class: type is P3
Valid class: type is P4
Valid class: type is P5
Invalid class: type isn’t P2, P3, P4 or P5
Practice
A bank account can be 500 to 1000, or 0 to 499, or 2000 (the field type is integer). What are the equivalence classes?
Try to solve it before reading the answer.
Practice 1 - answer
Valid class: 0 <= account <= 499
Valid class: 500 <= account <= 1000
Valid class: 2000 <= account <= 2000
Invalid class: account < 0
Invalid class: 1000 < account < 2000
Invalid class: account > 2000
Equivalence Class Vs Boundary Testing
Let us discuss the difference between equivalence class partitioning and boundary testing. For the discussion we will use the practice question:
A bank account can be an integer in the following ranges: 500 to 1000, or 0 to 499, or 2000. What are the equivalence classes?
Answer:
valid class: 0 <= account <= 499
valid class: 500 <= account <= 1000
valid class: 2000 <= account <= 2000
invalid class: account < 0
invalid class: 1000 < account < 2000
invalid class: account > 2000
In equivalence class partitioning, you take one value from each class and test whether the value causes the system to behave as the class definition predicts. In this example, you need at least 6 test cases – one for each valid class and one for each invalid class.
How many test cases would there be using boundary testing?
The following table shows how many test cases the "Boundary Testing" method requires:
Test Case #   Value   Result
1             -1      Invalid
2             0       Valid
3             1       Valid
4             498     Valid
5             499     Valid
6             500     Valid
7             501     Valid
8             999     Valid
9             1000    Valid
10            1001    Invalid
11            1999    Invalid
12            2000    Valid
13            2001    Invalid
In boundary testing, you test each value at the boundary; you know the exact value and don't need to choose it from any set. In this example you have 13 test cases.
Now, let us examine how to combine these 2 methods.
The following table shows all the boundary testing values and their equivalence classes:
#    Boundary Value   Equivalence Class         Result
1    -1               account < 0               Invalid
2    0                0 <= account <= 499       Valid
3    1                0 <= account <= 499       Valid
4    498              0 <= account <= 499       Valid
5    499              0 <= account <= 499       Valid
6    500              500 <= account <= 1000    Valid
7    501              500 <= account <= 1000    Valid
8    999              500 <= account <= 1000    Valid
9    1000             500 <= account <= 1000    Valid
10   1001             1000 < account < 2000     Invalid
11   1999             1000 < account < 2000     Invalid
12   2000             2000 <= account <= 2000   Valid
13   2001             account > 2000            Invalid
Now we can reduce some of the test cases that belong to the same equivalence class. We can delete lines 3 and 4, which belong to the class "0 <= account <= 499". We can also delete lines 7 and 8, which belong to "500 <= account <= 1000". The new table will be:
# Boundary Value Equivalence Class Result
1 -1 account < 0 Invalid
2 0 0 <= account <= 499 Valid
5 499 0 <= account <= 499 Valid
6 500 500 <= account <= 1000 Valid
9 1000 500 <= account <= 1000 Valid
10 1001 1000 < account < 2000 Invalid
11 1999 1000 < account < 2000 Invalid
12 2000 2000 <= account <= 2000 Valid
13 2001 account > 2000 Invalid
You could reduce even more test cases, although in my opinion it is important to keep this table, because it keeps a strong connection to boundary testing. You can see in the table that I didn't remove the test cases that touch the boundary of each range.
Let's reduce more test cases, just for fun and for practice (removing test cases 5, 9 and 10):
# Boundary Value Equivalence Class Result
1 -1 account < 0 Invalid
2 0 0 <= account <= 499 Valid
6 500 500 <= account <= 1000 Valid
11 1999 1000 < account < 2000 Invalid
12 2000 2000 <= account <= 2000 Valid
13 2001 account > 2000 Invalid
Now, in this table, for each equivalence class you choose one value that also belongs to boundary testing.
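The reduction to one boundary value per equivalence class can be automated; a Python sketch (the class labels and the "first value wins" rule are illustrative; which value represents an invalid class is a free choice, so this sketch may keep different representatives than the table above):

```python
def account_class(v: int) -> str:
    """Equivalence classes of the bank-account example."""
    if v < 0:
        return "invalid: account < 0"
    if v <= 499:
        return "valid: 0 <= account <= 499"
    if v <= 1000:
        return "valid: 500 <= account <= 1000"
    if v < 2000:
        return "invalid: 1000 < account < 2000"
    if v == 2000:
        return "valid: account == 2000"
    return "invalid: account > 2000"

def reduce_by_class(boundary_values):
    """Keep the first boundary value seen for each equivalence class."""
    chosen = {}
    for v in boundary_values:
        chosen.setdefault(account_class(v), v)
    return chosen

all_boundaries = [-1, 0, 1, 498, 499, 500, 501, 999, 1000, 1001, 1999, 2000, 2001]
for cls, v in reduce_by_class(all_boundaries).items():
    print(v, cls)
```

The 13 boundary cases collapse to 6, one per equivalence class, which matches the size of the final table.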
A smart man once told me that when writing a test case using equivalence class partitioning, I should not write specific values. Instead, he told me to write the classes and their expected results. That way, each time a tester runs the test case he chooses new candidates from each class, so every run contains new values.
Testing School---All About Test Matrix....
Matrices provide an easy structure for testing common issues. A common issue is an issue that repeats itself from project to project. Testing a common issue should be relevant to the project itself.
Examples for common issues are:
Fields (integer, dates, time, etc)
File names
Printing
Saving a file
Deleting a file
Sending a file
Login process
UI issues
Other
Why is it important to learn "Test Matrix"?
There are several reasons why to learn and to do test matrices.
It can reduce working time - once you have a matrix for a specific issue, it takes less time to test it or to figure out how to test it.
It is logical and a testing challenge - it is a challenge to build your own matrix and to find common issues that repeat from project to project.
In future projects it will save you time and give you more time to handle more complex issues.
It's fun (for those who have a testing mania like the writer).
How to Do a Test Matrix?
The algorithm for creating a test matrix is:
Find an issue that repeats itself from project to project
Think of tests that you routinely perform on this issue
Sort all the tests and put them in a matrix
Example
Let’s build a matrix for an integer field. First, we will think of and write down all the tests that we can perform on an integer field:
0
Valid value
Lower boundary – 1
Lower boundary
Lower boundary + 1
Upper boundary – 1
Upper boundary
Upper boundary + 1
Nothing
Negative value
Special chars: < > ? , . / ; : ‘ “ [ ] { } \ | + = _ - ( ) * & ^ % $ # @ ! ~ `
Uppercase chars
Lowercase chars
Spaces
Leading spaces before the value
Trailing spaces after the value
Length lower boundary – 1
Length lower boundary
Length lower boundary + 1
Length upper boundary – 1
Length upper boundary
Length upper boundary + 1
Mix of digits, chars and spaces
…
Now we will insert them into a generic matrix:
Integer field matrix
Field Name
Cases
0
Valid value
Lower boundary – 1
Lower boundary
Lower boundary + 1
Upper boundary – 1
Upper boundary
Upper boundary + 1
Nothing
Negative value
Special chars: < > ? , . / ; : ‘ “ [ ] { } \ | + = _ - ( ) * & ^ % $ # @ ! ~ `
Uppercase chars
Lowercase chars
Spaces
Leading spaces before the value
Trailing spaces after the value
Length lower boundary – 1
Length lower boundary
Length lower boundary + 1
Length upper boundary – 1
Length upper boundary
Length upper boundary + 1
Mix of digits, chars and spaces
…
Now let's use it for a project that contains 2 integer fields: "Age" and "Price". For each field we mark the cases we want to test.
Integer field matrix
Field Name Age Price
Cases
0 X X
Valid value X
Lower boundary – 1 X
Lower boundary X
Lower boundary + 1
Upper boundary – 1
Upper boundary X
Upper boundary + 1 X
Nothing
Negative value X
Special chars: < > ? , . / ; : ‘ “ [ ] { } \ | + = _ - ( ) * & ^ % $ # @ ! ~ ` X X
Uppercase chars
Lowercase chars
Spaces X
Leading spaces before the value
Trailing spaces after the value
Length lower boundary – 1 X
Length lower boundary X
Length lower boundary + 1
Length upper boundary – 1 X X
Length upper boundary X
Length upper boundary + 1
Mix of digits, chars and spaces X
…
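The marked matrix above can be kept as simple data rather than a document; a Python sketch (the particular marks per field are illustrative, mirroring only part of the table):

```python
# A test matrix as a dictionary: case name -> set of fields it applies to.
# Marking a case for a field is just adding the field name to its set.
matrix = {
    "0": {"Age", "Price"},
    "Valid value": {"Price"},
    "Lower boundary - 1": {"Age"},
    "Lower boundary": {"Age"},
    "Upper boundary": {"Price"},
    "Upper boundary + 1": {"Price"},
    "Negative value": {"Age"},
    "Special chars": {"Age", "Price"},
}

def cases_for(field_name: str):
    """All matrix cases marked for a given field."""
    return [case for case, fields in matrix.items() if field_name in fields]

print(cases_for("Age"))
```

Changing which cases are tested in each cycle is then a one-line edit to the data, which is exactly the flexibility described below.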
This method gives you the power to change, in each test cycle, the cases that need to be tested for those fields, without investing much effort in changing the STD document.
Practice 1
Create a matrix for saving a file. Try to solve it before you continue reading.
Practice 1 - answer
Save a new file
Save a file with an existing file name
Save in another format
Save a file to a full disk
Save a file to a write protected disk
Save a file to a remote disk
Save a large file and during the saving process print the file
…
Practice 2
Create a matrix for a date field. Try to solve it before you continue reading.
Practice 2 - answer
Insert chars
Insert numbers
Insert day/month 0
Insert day 32
Insert month 13
Insert year 90 (does that mean 1990 or 2090?)
Insert other format of dates:
24/12/1978 and 12/24/1978
Insert 16/9/2006 and 16.9.2006
…
Practice 3
Create a matrix for logging in to a system: username and password. Try to solve it before you continue reading.
Practice 3 - answer
Correct username and wrong password
Wrong username and correct password
Wrong username and wrong password
Correct username and correct password
Correct username and password like ‘select 1’
Uppercase and lowercase
…
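The username/password combinations above are easy to enumerate; a sketch with assumed credentials and a stand-in login function (not a real system under test):

```python
# Illustrative valid credentials for the sketch:
VALID_USER, VALID_PASS = "user1", "123456"

def login_ok(username: str, password: str) -> bool:
    """A stand-in for the system under test: both parts must match."""
    return username == VALID_USER and password == VALID_PASS

# The four correct/wrong combinations from the matrix above:
for user in (VALID_USER, "wrong_user"):
    for pw in (VALID_PASS, "wrong_pass"):
        print(user, pw, "->", "logged in" if login_ok(user, pw) else "rejected")
```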
Practice 4
Create a matrix for a char field with length x. Try to solve it before you continue reading.
Practice 4 - answer
Similar to the integer example
Practice 5
Create a matrix for deleting a file. Try to solve it before you continue reading.
Practice 5 - answer
Delete a file
Delete a very large file
Delete an empty file
Delete an empty folder
Delete a folder with many files
Delete an open file
Delete a file while sending other files to the printer
…
Practice 6
Create a matrix for testing an email field. Try to solve it before you continue reading.
Practice 6 - answer
Insert a valid mail: a.a@a.com, a@123.co.il
Insert an invalid mail format
Insert chars
Insert numbers
Insert a very long email
…
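A sketch of a deliberately simple format check covering the matrix cases above (real email validation is far more involved; the regular expression here is an illustrative approximation, not a standard):

```python
import re

# Something@something.something -- just enough for the matrix cases:
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(text: str) -> bool:
    return bool(EMAIL_RE.match(text))

# Matrix cases: valid formats, an invalid format, chars only, numbers only:
for candidate in ("a.a@a.com", "a@123.co.il", "no-at-sign", "12345", "a@b"):
    print(candidate, looks_like_email(candidate))
```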
Practice 7
Create a matrix for testing a new screen that will be inserted into the application. Try to solve it before you continue reading.
Practice 7 - answer
Test for typos in the screen title, buttons, fields and tables.
Test that keyboard shortcuts work when buttons are enabled and are locked when buttons are disabled.
Check sorting at least 3 times on each column.
If you have search criteria, combine a search over several fields using a method named "all pairs".
Create a test where the search brings you one row, 2 rows, more than 2 rows and 0 rows.
Create a test for clearing the search (brings back all rows).
Create a search where, in a string/text field, you try to insert a word containing the sign ' - if the programmer didn't handle it and you are working with XML files or databases, you will get an error.
If you can search by "starts with" or "contains", test spaces and '.
Test that each button works as expected.
Test the paging feature, if your screen supports it, in regular view and in search results - sometimes there is a bug where paging in search results brings back all the rows.
Test the "yes", "no" and "cancel" options in message boxes.
Try to activate/deactivate radio button and check box fields.
Test that mandatory fields are really mandatory and that the message about them is correct.
Test the close window button.
Thursday, February 25, 2010
Testing School--Testing Via Equivalence Partitioning
Equivalence partitioning is the process of defining the optimum number of tests by:
· reviewing documents such as the Functional Design Specification and Detailed Design Specification, and identifying each input condition within a function,
· selecting input data that is representative of all other data that would likely invoke the same process for that particular condition.
Defining Tests
A number of items must be considered when determining the tests using the equivalence partitioning method, including:
· All valid input data for a given condition are likely to go through the same process.
· Invalid data can go through various processes and need to be evaluated more carefully. For example,
· a blank entry may be treated differently than an incorrect entry,
· a value that is less than a range of values may be treated differently than a value that is greater,
· if there is more than one error condition within a particular function, one error may override the other, which means the subordinate error does not get tested unless the other value is valid.
Defining Test Cases
Create test cases that incorporate each of the tests. For valid input, include as many tests as possible in one test case. For invalid input, include only one test in a test case in order to isolate the error. Only the invalid input test condition needs to be evaluated in such tests, because the valid condition has already been tested.
EXAMPLE OF EQUIVALENCE PARTITIONING
Conditions to be Tested
The following input conditions will be tested:
· For the first three digits of all social insurance (security) numbers, the minimum number is 111 and the maximum number is 222.
· For the fourth and fifth digits of all social insurance (security) numbers, the minimum number is 11 and the maximum number is 99.
Defining Tests
Identify the input conditions and uniquely identify each test, keeping in mind the items to consider when defining tests for valid and invalid data.
The tests for these conditions are:
· The first three digits of the social insurance (security) number are:
1. = or > 111 and = or < 222, (valid input),
2. < 111, (invalid input, below the range),
3. > 222, (invalid input, above the range),
4. blank, (invalid input, below the range, but may be treated differently).
· The fourth and fifth digits of the social insurance (security) number are:
5. = or > 11 and = or < 99, (valid input),
6. < 11, (invalid input, below the range),
7. > 99, (invalid input, above the range),
8. blank, (invalid input, below the range, but may be treated differently).
Using equivalence partitioning, only one value that represents each of the eight equivalence classes needs to be tested.
Defining Test Cases
After identifying the tests, create test cases to test each equivalence class, (i.e., tests 1 through 8).
Create one test case for the valid input conditions, (i.e., tests 1 and 5), because the two conditions will not affect each other.
Identify separate test cases for each invalid input, (i.e., tests 2 through 4 and tests 6 through 8). Both conditions specified, (i.e., condition 1 - first three digits, condition 2 - fourth and fifth digits), apply to the social insurance (security) number. Since equivalence partitioning is a type of black-box testing, the tester does not look at the code and, therefore, the manner in which the programmer has coded the error handling for the social insurance (security) number is not known. Separate tests are used for each invalid input, to avoid masking the result in the event one error takes priority over another. For example, if only one error message is displayed at one time, and the error message for the first three digits takes priority, then testing invalid inputs for the first three digits and the fourth and fifth digits together, does not result in an error message for the fourth and fifth digits. In tests B through G, only the results for the invalid input need to be evaluated, because the valid input was tested in test case A.
Suggested test cases:
· Test Case A - Tests 1 and 5, (both are valid, therefore there is no problem with errors),
· Test Case B - Tests 2 and 5, (only the first one is invalid, therefore the correct error should be produced),
· Test Case C - Tests 3 and 5, (only the first one is invalid, therefore the correct error should be produced),
· Test Case D - Tests 4 and 5, (only the first one is invalid, therefore the correct error should be produced),
· Test Case E - Tests 1 and 6, (only the second one is invalid, therefore the correct error should be produced),
· Test Case F - Tests 1 and 7, (only the second one is invalid, therefore the correct error should be produced),
· Test Case G - Tests 1 and 8, (only the second one is invalid, therefore the correct error should be produced).
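The seven test cases above can be sketched in code. This is a minimal sketch, assuming a hypothetical validator `valid_sin_prefix` and one representative value per equivalence class; none of these names or values come from the original example:

```python
def valid_sin_prefix(first_three, fourth_fifth):
    """Hypothetical validator: True only when both parts are in range."""
    if first_three is None or fourth_fifth is None:  # blank entry
        return False
    return 111 <= first_three <= 222 and 11 <= fourth_fifth <= 99

# One representative value per equivalence class (tests 1-8).
reps = {1: 150, 2: 110, 3: 223, 4: None,   # first three digits
        5: 50, 6: 10, 7: 100, 8: None}     # fourth and fifth digits

# Test case A: both parts valid.
assert valid_sin_prefix(reps[1], reps[5])
# Test cases B-D: first part invalid, second part valid.
for t in (2, 3, 4):
    assert not valid_sin_prefix(reps[t], reps[5])
# Test cases E-G: first part valid, second part invalid.
for t in (6, 7, 8):
    assert not valid_sin_prefix(reps[1], reps[t])
```

Note that each invalid representative is paired with a valid value for the other condition, exactly as test cases B through G prescribe, so an error in one part cannot mask an error in the other.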
Other Types of Equivalence Classes
The process of equivalence partitioning also applies to testing of values other than numbers. Consider the following types of equivalence classes:
· a valid group versus an invalid group, (e.g., names of employees versus names of individuals who are not employees),
· a valid response to a prompt versus an invalid response, (e.g., Y versus N and all non-Y responses),
· a valid response within a time frame versus an invalid response outside of the acceptable time frame, (e.g., a date within a specified range versus a date less than the range and a date greater than the range).
Monday, February 22, 2010
Testing School---Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions.
It is a test case design technique that is performed once requirements have been reviewed for ambiguity, followed by a review for content.
Requirements are reviewed for content to ensure that they are correct and complete. The Cause-Effect Graphing technique derives the minimum number of test cases needed to cover 100% of the functional requirements, improving the quality of test coverage.
There are four steps:
1.Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2.A cause-effect graph is developed.
3.The graph is converted to a decision table.
4.Decision table rules are converted to test cases.
The Cause-Effect Graphing technique was invented by Bill Elmendorf of IBM in 1973. Instead of the test case designer trying to manually determine the right set of test cases, he/she models the problem using a cause-effect graph, and the software that supports the technique, BenderRBT, calculates the right set of test cases to cover 100% of the functionality. The cause-effect graphing technique uses the same algorithms that are used in hardware logic circuit testing. Test case design in hardware ensures virtually defect-free hardware.
Cause-Effect Graphing also has the ability to detect defects that cancel each other out, and the ability to detect defects hidden by other things going right. These are advanced topics that won’t be discussed in this article.
The starting point for the Cause-Effect Graph is the requirements document. The requirements describe “what” the system is intended to do. The requirements can describe real time systems, events, data driven systems, state transition diagrams, object oriented systems, graphical user interface standards, etc. Any type of logic can be modeled using a Cause-Effect diagram. Each cause (or input) in the requirements is expressed in the cause-effect graph as a condition, which is either true or false. Each effect (or output) is expressed as a condition, which is either true or false.
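The four steps can be illustrated with a toy example. This is a minimal sketch, assuming two invented causes (a valid user and a valid password) and two invented effects; it enumerates every combination of cause values into a decision table (steps 2-3), and each row of the table becomes one test case (step 4):

```python
from itertools import product

# Step 1: list causes (inputs) and effects (actions), each with an identifier.
# These causes and effects are hypothetical, purely for illustration.
causes = ["valid_user", "valid_password"]
effects = {
    "grant_access": lambda c: c["valid_user"] and c["valid_password"],
    "show_error":   lambda c: not (c["valid_user"] and c["valid_password"]),
}

# Steps 2-3: enumerate every combination of cause values into a decision table.
def decision_table():
    rows = []
    for values in product([True, False], repeat=len(causes)):
        row = dict(zip(causes, values))
        row["actions"] = [name for name, rule in effects.items() if rule(row)]
        rows.append(row)
    return rows

# Step 4: each row of the decision table becomes one test case.
for rule in decision_table():
    print(rule)
```

Tools such as BenderRBT apply far more sophisticated algorithms to prune the combinations; brute-force enumeration as above only works while the number of causes stays small.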
Testing School--Boundary value analysis and Equivalence partitioning
Boundary value analysis and equivalence partitioning are both test case design strategies in black box testing.
Equivalence Partitioning:
In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable cases while still covering the maximum number of requirements.
In short it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.
E.g.: If you are testing an input box accepting numbers from 1 to 1000, then there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data.
Using the equivalence partitioning method, the test cases above can be divided into three sets of input data, called classes. Each test case is a representative of its respective class.
So in the above example we can divide our test cases into three equivalence classes of valid and invalid inputs.
Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data should be sufficient.
2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.
3) Input data with any value greater than 1000, to represent the third invalid input class.
So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.
We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of each equivalence class can be exercised.
Equivalence partitioning uses fewest test cases to cover maximum requirements.
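The three classes for the 1-to-1000 input box can be sketched in a few lines, assuming a hypothetical `accepts` function standing in for the input box's validation logic:

```python
def accepts(value):
    """Hypothetical input-box validator: accepts integers from 1 to 1000."""
    return 1 <= value <= 1000

# Three equivalence classes, one representative value per class.
equivalence_classes = {
    "valid (1 to 1000)":    (500, True),
    "invalid (below 1)":    (0, False),
    "invalid (above 1000)": (1001, False),
}

for name, (representative, expected) in equivalence_classes.items():
    assert accepts(representative) == expected, name
```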
Boundary value analysis:
It’s widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The ‘Boundary value analysis’ testing technique is used to identify errors at the boundaries rather than those that exist in the center of the input domain.
Boundary value analysis is the natural follow-on to equivalence partitioning for designing test cases, where test cases are selected at the edges of the equivalence classes.
Test cases for input box accepting numbers between 1 and 1000 using Boundary value analysis:
1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.
3) Test data with values just above the extreme edges of input domain i.e. values 2 and 1001.
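The six boundary test values listed above can be derived mechanically from the range limits. A small sketch (the helper function is an invented name):

```python
def boundary_values(low, high):
    """Return boundary test data for a [low, high] range: the limits
    themselves, plus the values just inside and just outside each limit."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# For the 1-to-1000 input box this yields the six values from the text.
print(boundary_values(1, 1000))  # [0, 1, 2, 999, 1000, 1001]
```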
Boundary value analysis is often considered part of stress and negative testing.
Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and previous judgments.
E.g. if you divided 1 to 1000 input values in valid data equivalence class, then you can select test case values like: 1, 11, 100, 950 etc. Same case for other test cases having invalid data classes.
This is a basic, simple example for understanding the Boundary value analysis and Equivalence partitioning concepts.
Friday, February 19, 2010
TESTING SCHOOL--Why I LOVE TESTING
I love it when I'm the first person to touch new software and root around for errors.
I like to teach and mentor people that will someday be better and more successful than I am.
I like to hang out with people that do what I do and argue amicably about technique and methodologies.
I like learning about other environments and companies and how people in our field operate in those environments.
I like reading and learning from people who either think or present things differently than I do. Overall, I think I actually enjoy having to struggle just a bit to keep up, whether it's with my own workload or the ideas/work of someone else.
No two pieces of work are the same. It's intellectually challenging and mentally stimulating. It keeps the grey matter ticking over and that makes me happy.
It's given me the opportunity to meet and work with many different people/cultures - that's enriching in itself.
The SW testing profession is developing and evolving. Nothing stands still - it's a perfect time to be involved to help shape and influence.
Constant learning opportunities - and that's addictive!
As a child, when you're given something to play with, you're told 'be careful, don't break it'.
As adults, we buy things and tell ourselves 'be careful, don't break it'.
As testers, WE GET PAID TO FIND THE BREAKS.
The thrill of finding a good bug.
Asking the questions that make everybody else sit back and mutter quietly under their breath '$hit, didn't even think of that'. The kind of questions that get project items pushed back because not everything has been taken into consideration.
Sounds kind of corny, but in our own way we help make the world a better place. We help make software more stable, easier to use, less prone to crashing, etc. Hopefully helping to make other people's lives less stressful.
It is great to lead younger test professionals and direct and guide them into how to test better, what to test and so on... In the process they have ideas, thoughts, arguments similar or contrary to mine which stimulate me to learn and unlearn.
The challenge of doing more with less, which is what testing is all about most of the time. The challenge of motivating people, goading them, convincing them... to achieve, despite heavy odds stacked against them.
I love testing because of the thrill it gives me when the client praises the testing team for their knowledge, for finding the bugs and making the user's life better and easier... And when this happens the joy and warmth I derive from the hand-shake with the testers who won the battle pleases me no end.
As I make my way back home and sometimes while the city sleeps, the looking-forward to the next day at work and the plans that begin to form in my mind is a sure sign that I am happy and content with what I do - managing a testing team that will do wonders and achieve!
Wednesday, February 10, 2010
Testing School---- Testing e-Commerce Website
The main things to take care of in an e-commerce website are listed below.
This list covers the criteria against which the product will initially be evaluated.
E-Commerce Functional Requirements
o Web Site Look & Feel
o Category Management
o Product Management
o Inventory management
o Coupons/Discounts/Gift Certificates
o Shopping Cart
o Customer Management
o Order Processing
o General
o Support
o Marketing
o Administration
Non-Functional requirements
o System Management facilities (Manageability)
o Interface
o Interoperability
o Reliability
o Security
o Documentation
o Package
o Testability
Shopping Cart
Add/ update / delete products
Modify quantities
Modify product options (size, color, etc.)
Calculate totals
Calculate total weight
Select country
Calculate shipping cost
Accept promotion and gift codes
Calculate Bundled products promotions
Sales tax / VAT calculations
Require a minimum order amount
Set a maximum order amount
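A few of the shopping-cart items above (total calculation and the minimum/maximum order amounts) can be sketched as follows; every name and limit here is an illustrative assumption, not taken from any particular store platform:

```python
def cart_total(items):
    """Sum unit price times quantity over the cart's line items."""
    return sum(price * qty for price, qty in items)

def order_allowed(total, minimum=10.0, maximum=5000.0):
    """Enforce a configurable minimum and maximum order amount."""
    return minimum <= total <= maximum

items = [(19.99, 2), (5.00, 1)]   # (unit price, quantity)
assert order_allowed(cart_total(items))            # 44.98 is in range
assert not order_allowed(cart_total([(1.00, 1)]))  # below the minimum
```

In a real store these checks must run server-side against the persisted cart, not against values submitted by the client, and a tester should probe exactly the boundary amounts of the minimum and maximum.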
Order Processing
Accept credit cards, & different payment methods
Accept Cash on delivery
Offline payment support to process credit cards manually
Multiple currency support
One Page checkout
Receive text message alerts when orders are placed by your customers
Line-item-level order management, to display how much of each line item in an order has been shipped or back-ordered.
Batch Import Tracking Numbers
Change the status of your orders in batches
Auto-calculation of taxes
Customers choose shipping option
Email receipts sent to customer and administrator
Order saved in admin area for viewing
Each order is saved with a unique order number
Each order is saved in a certain order status based on the payment method
Batch order printing
Automatic store email receipts
Automatic shipping email receipts
View and process your orders online
"Add to cart to see sale price" feature