Thursday, May 20, 2010

Testing School---Types of Errors

Types of errors with examples:
User Interface Errors: Missing/wrong functions, doesn't do what the user expects, missing information, misleading or confusing information, wrong content in Help text, inappropriate error messages. Performance issues - poor responsiveness, can't redirect output, inappropriate use of the keyboard.

Error Handling: Inadequate protection against corrupted data, inadequate tests of user input, inadequate version control; ignored overflow and data-comparison errors; error-recovery problems such as aborting on errors and recovery from hardware problems.

Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases outside boundary.

Calculation errors: Bad logic, bad arithmetic, outdated constants, incorrect conversion from one data representation to another, wrong formula, incorrect approximation.
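To make the boundary and calculation categories concrete, here is a small illustrative sketch (Python; the functions and data are invented for illustration): an off-by-one loop boundary bug and a naive floating-point comparison.

```python
# Hypothetical examples of two error types: a boundary-related
# (off-by-one) bug and a calculation (approximation) bug.

def sum_first_n(values, n):
    """Intended to sum the first n items, but range(1, n)
    skips index 0: a classic loop-boundary error."""
    total = 0
    for i in range(1, n):   # should be range(0, n) or range(n)
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    return sum(values[:n])

data = [10, 20, 30, 40]
print(sum_first_n(data, 3))        # 50: misses data[0]
print(sum_first_n_fixed(data, 3))  # 60: correct

# Calculation error: exact equality on floats fails due to rounding.
print(0.1 + 0.2 == 0.3)               # False: incorrect approximation
print(abs((0.1 + 0.2) - 0.3) < 1e-9)  # True: tolerant comparison
```

Test cases aimed at the boundary itself (here, `n` equal to 0, 1, and the list length) would catch the first bug immediately.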

Initial and Later States: Failure to set a data item to zero, to initialize a loop-control variable, to re-initialize a pointer, or to clear a string or flag; incorrect initialization.

Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result, Missing/wrong default, Data Type errors.

Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit or user abort.

Race Conditions: Assuming that one event or task has finished before another begins, resource races, a task starting before its prerequisites are met, messages crossing or not arriving in the order sent.
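The first race listed above is easy to reproduce in a few lines. The sketch below (Python threading; all names are illustrative) shows an unsynchronized read-modify-write on a shared counter, and the lock that removes the race.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """Read-modify-write without synchronization: two threads can
    read the same value and lose one of the updates."""
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    """The lock serializes the read-modify-write, so no update is lost."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000):
    """Run two competing threads against the shared counter."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock, the total is deterministic; without it, lost updates
# can intermittently make the total come up short of 200000.
print(run(safe_increment))   # 200000
```

This intermittency is exactly what makes race conditions hard to test: the unsafe version may pass hundreds of runs before it fails once.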

Load Conditions: Required resources are not available, no large memory area available, low-priority tasks not postponed, doesn't erase old files from mass storage, doesn't return unused memory.

Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence, Misunderstood status or return code, Wrong operation or instruction codes.

Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or program files.

Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case, Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify fixes, Failure to provide summary report.

Wednesday, April 28, 2010

Testing School---Keyword Driven Testing

Keyword Driven Testing:

The keyword-driven framework consists of the basic components given below:
1. Control File
2. Test Case File
3. Startup Script
4. Driver Script
5. Utility Script

1. Control File

a) Contains details of all the test scenarios to be automated

b) The user can select a specific scenario to execute by turning a flag on or off in the Control File

c) The Control File is in the form of an Excel worksheet and contains columns for Scenario ID, Execute (Y/N), Object Repository Path, and Test Case File Path



2. Test Case File

a) Contains the detailed steps to be carried out for the execution of a test case

b) It is also in the form of an Excel sheet and contains columns for Keyword, Object Name, and Parameter



3. Startup Script

a) The Startup script performs initialization and reads the control file

b) It then calls the driver script to execute all the scenarios marked for execution in the control file



4. Driver Script

a) It reads the test case files, checks the keywords, and calls the appropriate utility script functions based on each keyword

b) Error handling is taken care of in the driver script.


5. Utility Scripts

a) Perform generic tasks that can be reused across applications; they should not be application-dependent.

Advantages of the Framework:

  • The main advantage of this framework is the low cost of maintenance. If any test case changes, only the Test Case File needs to be updated; the Driver Script and Startup Script remain the same.
  • No need to update the scripts when the application changes.
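The five components above can be sketched in a few dozen lines. The following is a minimal, hypothetical illustration: Python stands in for QTP/VBScript, and in-memory tables stand in for the Excel worksheets, so only the structure mirrors the framework described above.

```python
# Utility "scripts": generic, application-independent keyword functions.
def launch(app, _):         return f"launched {app}"
def enter_text(field, val): return f"{field}={val}"
def click(button, _):       return f"clicked {button}"

KEYWORDS = {"Launch": launch, "EnterText": enter_text, "Click": click}

# Control "file": scenario ID, Execute (Y/N) flag, test case "file".
control = [
    {"scenario": "TC_Login", "execute": "Y", "steps": [
        # Test case "file" columns: Keyword, Object Name, Parameter
        ("Launch",    "MyApp",    None),
        ("EnterText", "username", "admin"),
        ("EnterText", "password", "secret"),
        ("Click",     "login",    None),
    ]},
    {"scenario": "TC_Skipped", "execute": "N", "steps": []},
]

def driver(steps):
    """Driver script: reads steps, dispatches on keyword, handles errors."""
    log = []
    for keyword, obj, param in steps:
        try:
            log.append(KEYWORDS[keyword](obj, param))
        except KeyError:
            log.append(f"ERROR: unknown keyword {keyword!r}")
    return log

def startup(control_table):
    """Startup script: run every scenario flagged 'Y' in the control table."""
    return {row["scenario"]: driver(row["steps"])
            for row in control_table if row["execute"] == "Y"}

results = startup(control)
print(results["TC_Login"][0])   # launched MyApp
```

Note how a new test case only adds rows to the tables: the driver and startup logic never change, which is the maintenance advantage claimed above.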

Monday, April 26, 2010

Testing School---Troubleshooting Before Reporting Any Bug



Troubleshooting Before Reporting Any Bug



Troubleshooting of:

· What’s not working?

· Why it’s not working?

· How can you make it work?

· What are the possible reasons for the failure?



Answering the first question, "what's not working?", is sufficient for you to report the bug steps in the bug-tracking system. So why answer the remaining three questions? Think beyond your immediate responsibilities; don't mechanically follow routine steps without thinking outside of them. You should be able to suggest all possible solutions to resolve the bug, along with the efficiency and the drawbacks of each. This will increase your standing in your team and will also reduce the chance of your bugs getting rejected, not because of that standing but because of your troubleshooting skill.

Before reporting any bug, make sure it isn't your own mistake while testing: you may have missed an important flag to set, or you may not have configured your test setup properly.

Troubleshoot the reasons for the failure in the application, and report the bug only after proper troubleshooting. I have compiled a troubleshooting list. Check it out: what can the different reasons for failure be?





Reasons of failure:


1) If you are using a configuration file for testing your application, make sure the file is up to date with the application requirements: often a global configuration file is used to pick or set application flags. Failure to maintain this file as your software requires will lead to malfunctioning of the application under test, and you can't report that as a bug.

2) Check that your database is in order: a missing table is a common reason for an application not working properly.
I have a classic example of this: one of my projects queried many monthly user database tables to show user reports. Each table's existence was first checked in a master table (which maintained only the monthly table names), and then data was queried from the individual monthly tables. Many testers selected a big date range to see the user reports, but this often crashed the application because those tables were not present in the test server's database, giving an SQL query error. They reported it as a bug, which was subsequently marked invalid by the developers.

3) If you are working on an automation testing project, debug your script twice before concluding that the application failure is a bug.

4) Check that you are not using invalid access credentials for authentication.

5) Check if software versions are compatible.

6) Check if there is any other hardware issue that is not related to your application.

7) Make sure your application hardware and software prerequisites are correct.

8) Check if all software components are installed properly on your test machine. Check whether registry entries are valid.

9) For any failure look into ‘system event viewer’ for details. You can trace out many failure reasons from system event log file.

10) Before you start testing, make sure you have deployed the latest version of all files to your test environment.


Thursday, April 22, 2010

Testing School---BVT (Build Verification Testing)

BVT (Build Verification Testing)
What is BVT?
A Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases cover core functionality and ensure the application is stable enough to be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to the developer for a fix.
BVT is also called smoke testing or build acceptance testing (BAT).

New Build is checked mainly for two things:
·Build validation
·Build acceptance
Some BVT basics:
· It is a subset of tests that verify the main functionalities.
· BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a new build is released after the fixes are done.
· The advantage of BVT is that it saves the test team the effort of setting up and testing a build whose major functionality is broken.
· Design BVTs carefully enough to cover basic functionality.
· Typically a BVT should not run for more than 30 minutes.
· BVT is a type of regression testing, done on each and every new build.
BVT primarily checks project integrity and whether all the modules are integrated properly. Module integration testing is very important when different teams develop the project modules. I have heard of many cases of application failure due to improper module integration, and in the worst cases a complete project gets scrapped because of it.
What is the main task in a build release? Obviously file 'check-in', i.e. including all the new and modified project files associated with the build. BVT was primarily introduced to check initial build health: whether all the new and modified files are included in the release, all file formats are correct, and each file's version, language, and associated flags are right.
These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.
Which test cases should be included in BVT?
This is a tricky decision to make before automating the BVT task. Keep in mind that the success of BVT depends on which test cases you include.
Here are some simple tips to include test cases in your BVT automation suite:
· Include only critical test cases in BVT.
· All test cases included in BVT should be stable.
· All the test cases should have a known expected result.
· Make sure all included critical functionality test cases are sufficient for application test coverage.
Also, do not include modules that are not yet stable in the BVT. For under-development features you can't predict the expected behavior, as these modules are unstable, and you may already know of failures in these incomplete modules before testing. There is no point including such modules or test cases in BVT.
You can make this critical-functionality test case selection simpler by communicating with everyone involved in the project's development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing the major project features and scenarios.
Example: Test cases to be included in BVT for Text editor application (Some sample tests only):
1) Test case for creating text file.
2) Test cases for writing something into text editor
3) Test cases for copy, cut, paste functionality of text editor
4) Test cases for opening, saving, deleting text file.
These are some sample test cases that can be marked as 'critical'; for every minor or major change in the application, these basic critical test cases should be executed. This task can be easily accomplished by BVT.
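As a hedged sketch of how such critical cases could be automated, here is a toy BVT suite using Python's unittest. The `Editor` class is an invented stand-in for the application under test; a real BVT would drive the actual text editor.

```python
import os
import tempfile
import unittest

class Editor:
    """Hypothetical stand-in for the text editor under test."""
    def __init__(self):
        self.buffer = ""
    def type(self, text):
        self.buffer += text
    def save(self, path):
        with open(path, "w") as f:
            f.write(self.buffer)
    def open(self, path):
        with open(path) as f:
            self.buffer = f.read()

class TextEditorBVT(unittest.TestCase):
    """Critical-path check only: create, write, save, reopen."""
    def test_create_write_save_open(self):
        ed = Editor()
        ed.type("hello")
        path = os.path.join(tempfile.mkdtemp(), "note.txt")
        ed.save(path)
        ed2 = Editor()
        ed2.open(path)
        self.assertEqual(ed2.buffer, "hello")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TextEditorBVT)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# The build passes BVT only if every critical test passed.
build_accepted = result.wasSuccessful()
print(build_accepted)   # True
```

The `build_accepted` flag is the gate: a build-release script would release the build to the test team only when it is true.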
The BVT automation suite needs to be maintained and modified from time to time, e.g. to include new test cases when stable project modules become available.
What happens when the BVT suite runs:
Say the build verification automation test suite is executed after a new build.
1) The result of the BVT execution is sent to all the email IDs associated with the project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the failure cause is a defect in the build, all the relevant information with failure logs is sent to the respective developers.
5) The developer, after an initial diagnosis, replies to the team about the failure cause: whether it is really a bug, and if so, what the bug-fixing scenario will be.
6) Once the bug is fixed, the BVT test suite is executed again, and if the build passes, it is handed to the test team for further detailed functionality, performance, and other tests.
This process is repeated for every new build.
Why BVT or build fails?
BVT breaks sometimes. This doesn't mean there is always a bug in the build. There are other reasons for a build to fail, such as a test case coding error, an automation suite error, an infrastructure error, hardware failures, etc.
You need to troubleshoot the cause for the BVT break and need to take proper action after diagnosis.
Tips for BVT success:
1) Spend considerable time writing BVT test case scripts.
2) Log as much detailed info as possible to diagnose the BVT pass or fail result. This will help developer team to debug and quickly know the failure cause.
3) Select stable test cases to include in BVT. For new features if new critical test case passes consistently on different configuration then promote this test case in your BVT suite. This will reduce the probability of frequent build failure due to new unstable modules and test cases.
4) Automate BVT process as much as possible. Right from build release process to BVT result – automate everything.
5) Have some penalty for breaking the build: some chocolates or a team coffee party from the developer who broke it will do.
Conclusion:
BVT is nothing but a set of regression test cases that are executed for each new build. This is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. BVT can be run by a developer or a tester; the BVT result is communicated throughout the team, and immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in BVT, and these test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. It saves significant time, cost, and resources, and above all spares the test team the frustration of an incomplete build.
Developers may do the unit and integration testing, but not necessarily the BVT. BVT is most often done by a test engineer. Once the build team deploys the build on the test environments, it is the test engineer's job to perform the BVT (sniff, sanity, smoke, etc.).
If you are able to test the application and execute the test cases, it follows that the BVT has passed; otherwise you would not have been able to test the application at all. It doesn't matter whether the testing is done manually or automated using a tool. If the build has a new feature, how can you automate it?
Following is the process that you might follow:
The developer initiates a mail to the build team (also marked to the test team, with a description of what is to be tested in the new build) to make the build >> The build team makes the build, deploys it on the test machines, and replies to all asking the test team to continue testing; if the build fails, the build team says so in the mail >> If the BVT fails, the tester replies to the mail stating that the BVT failed, along with whatever logs are available; otherwise testing continues.

Monday, April 19, 2010

Testing School---Some Tips about Testing

Some Important Tips about Testing
Importance of Software Testing

On the Internet, we can find many articles explaining and listing the losses caused by poor, low-quality software products.

For example, how would you feel if a bug in a bank's software showed your bank balance as 0 instead of some thousands?
And if you are a student, what would your state be if your marksheet showed your score as 0 instead of a good score?

Here, we would feel better seeing a notification (e.g. "Unable to show your balance due to an unexpected error" or "Couldn't print your marksheet because of an unexpected issue")
instead of seeing wrong data.

Testing plays an important role to avoid these situations.

So we can say that testing is necessary and important, even though it cannot guarantee 100% error-free software.

And also,

- The cost of fixing a bug is higher when it is found at a later stage than when it is found early.

- Quality can be ensured only by testing. In a competitive market, only a quality product can survive for long.

Testing is necessary even though 100% testing of an application is not possible.

One more important reason for testing is that the user/production environment may be completely different from the development environment.

For example, a webpage developer may use Firefox as the browser for webpage development. But users may use different browsers such as Internet Explorer, Safari, Chrome, or Opera.

A web page that appears good in Firefox may not appear good in other browsers (particularly IE). So ultimately the user will not be happy, even though the developer put in great effort to develop the webpage. As user satisfaction is most important for the growth of any business, testing becomes all the more important.
So we can treat testers as representatives of the users.


Basics of Quality Assurance (QA) in Software Development

Quality assurance is a most important factor in any business or industry.
The same applies to software development.
Spending some additional money to get a high-quality product will definitely yield more profit.

But it is not true that expensive products are high-quality products. Even an inexpensive product can be high quality if it meets the customer's needs and expectations.

The quality assurance cycle consists of four steps: Plan, Do, Check, and Act. These steps are commonly abbreviated as PDCA.

The four quality assurance steps within the PDCA model are

Plan: Establish the objectives and processes required to deliver the desired results.
Do: Implement the process developed.
Check: Monitor and evaluate the implemented process by testing the results against the predetermined objectives.
Act: Apply the actions necessary for improvement if the results require changes.


To get appropriate quality output in software development, we need to follow the SQA (Software Quality Assurance) process in each phase (planning, requirement analysis, design, development, integration & test, implementation, and maintenance) of the software development lifecycle.

We should follow the solutions below to avoid many software development problems.
Solid requirements - clear, complete, attainable, detailed, and testable requirements that are agreed upon by all players (customer, developers, and testers).
Realistic schedules - allocate enough time for planning, design, testing, bug fixing, re-testing, and documentation.
Adequate testing - start testing early, and re-test after fixes/changes.
Avoid unnecessary changes to the initial requirements once coding has started.
Require walk-throughs and inspections.

Writing Good Test Cases and Finding Bugs effectively
To develop a bug-free software application, writing good test cases is essential.

Here, we will see how to write good test cases.

Before that, we should understand what a good test case is.

There is no solid definition of a "good test case".

I consider a test case "good" only when a tester feels happy to follow the steps of a test case written by another tester.

Test cases are useful only when people actually use them.
If a test case is poorly written, with excessive unwanted steps, most testers won't read it fully. They will just read a few lines and execute it based on their own understanding, which will often be wrong.

On the other hand, if it has too little detail, it is difficult to execute.

As of now, I suggest the following points for writing effective test cases.

Before you start writing test cases, become familiar with the AUT (Application Under Test). You can do this by performing some ad hoc/exploratory testing.
We should read the requirements clearly and completely. Any questions about the requirements should be clarified with the appropriate person (e.g. the customer or business team). It is also good practice to gather some basic domain knowledge before reading the requirements and writing test cases, and to have discussions/meetings with the developers/business team.
Very importantly, we should use simple language and style to write the test cases so that anyone can easily understand them without ambiguity.
Give each test case a meaningful and easily understandable ID/number.
For example, if you are writing test cases for a Login module, you can assign Test Case IDs as below.

1a - for testing positive scenario such as entering valid username and valid password.
1b - for testing negative scenario such as entering invalid username and invalid password.

By numbering test cases this way instead of sequentially, we can easily add a new case such as the one below without needing to adjust or renumber any subsequent test cases.

1c- for testing negative scenario such as entering valid username and invalid password.
Also, if we have similar modules, we can give each module its own sequence number.

For example, assume that we are having separate login modules for User and Admin with little changes.
In this case we can give number as below,
1.1-First case in User module.
1.2-Second case in User module.
2.1-First case in Admin module
2.2-Second case in Admin module.

If the Test Description/Steps/Expected Results of 2.1 are mostly the same as 1.1, we should refer to 1.1 in 2.1 instead of writing the same details again.

By doing this, we can avoid redundant details and keep the test case document clear.
The Test Description should be short, and it should uniquely represent the current test scenario without ambiguity.
Never use an "if" condition in the test steps. Always address only one scenario per test case; this helps keep the Expected Result unambiguous.
Give some sample test data that will be useful for executing the test cases.
If the test case requires any preconditions/prerequisites, don't forget to mention them.
Best of all, arrange/order the test cases so that the need for specifying preconditions is minimal.

For example, suppose we need to write test cases for user creation, user modification, and user deletion.

For user modification and user deletion, an already-created user is a precondition.

If we arrange the test cases in the order below, we can avoid the need to specify any preconditions/prerequisites.
1-Test case for creating user.
2-Test case for verifying duplicate/existing user when adding another user with same username.
3-Test case for modifying user.
4-Test case for deleting user.
Keep a traceability matrix to make sure we have written test cases covering all requirements.
After completing all positive scenarios, think about all possible negative scenarios to produce test cases that will effectively find most of the bugs.

For this we can refer to the alternate-flow section of the use case document, and we can think about different data, boundary conditions, different navigation paths, and multi-user environments.
In the test case document, we can link to screenshots illustrating the steps and/or expected results. However, it is not good practice to place the screenshots within the test case document itself unless it is essential.
Many tools are available to capture screenshots and user actions as video. We can use them to record a video explaining the steps and expected results clearly when a test case requires complex steps, and link to this video from the test case document.

Tips and Tricks for doing Ad Hoc Testing
It is not always possible to follow a formal testing process such as writing a test plan and test cases.

In some cases we may need to go with ad hoc testing because of time or resource constraints.

Ad hoc testing is part of exploratory testing.

It is done without planning and documentation.

Ad hoc testing helps to find defects earlier, and we know that the earlier a defect is found, the cheaper it is to fix.

Here I am listing some tips for doing ad hoc testing effectively.

In case of UI (User Interface) testing, test all navigation including Back button navigation.

Go through all the pages of the application to find any broken links, and also make sure that each page has proper links to reach the other pages either directly or indirectly.
Check whether all the images have an alt attribute.
View the application screen or webpage at different screen resolutions on your monitor.
Test the webpage in many different web browsers, such as Internet Explorer, Firefox, Chrome, Safari, etc.
Test the tab order and default focus in all the pages
Try to enter/save test data containing special characters such as single quotes, double quotes, and commas.
You can also try entering text with HTML tags such as < and > in a textbox.
Try to load an authenticated webpage directly by entering its URL in the browser without logging in.
Try all the boundary-value possibilities, such as entering a lot of data in a textbox and entering negative values in numeric fields.
Remember to navigate the application from two different machines/browsers simultaneously, especially to test concurrent database save/update operations.
If possible/necessary, test the application on different operating systems.
If your webpage uses Flash files, try to see its behavior when it is loaded on a machine that does not have Flash Player.
Instead of testing everything from your local machine, try testing some screens with your site hosted on a remote machine. It will help identify unexpected issues that may occur due to network latency.
Test session timeout, cookie expiry, and script execution timeout.
Try refreshing your confirmation screen many times to verify whether multiple refreshes save/insert the data multiple times.
Test with different date and time formats if your webpage/application has date and time entry fields. And think about time zones, too.
Make sure that number/currency/name formats are displayed correctly and uniformly on all pages.
When testing an edit/modify/update feature, modify the values of all fields and make sure that everything gets updated correctly.
When testing a delete feature, make sure that all the related data also gets deleted. For example, when deleting a question, its answers should also be deleted.

And make sure that the necessary constraints are enforced correctly. For example, deletion of questions should not be allowed if the questions are already used in other modules.
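Several of the tips above come down to feeding the application hostile input. A small, illustrative generator (Python; the specific values are just examples, not an exhaustive set) keeps such data in one place so ad hoc sessions stay systematic:

```python
# Example hostile test data for ad hoc sessions: special characters,
# HTML/script tags, and boundary values, as suggested in the tips above.

special_chars = ["'", '"', ",", "';--", "O'Brien"]        # quote/comma cases
html_inputs   = ["<b>bold</b>", "<script>alert(1)</script>"]
boundaries    = [0, -1, 2**31 - 1, "x" * 10_000]          # numeric & length edges

def adhoc_inputs():
    """Yield one labelled hostile input at a time, for manual or scripted entry."""
    for group, values in [("special", special_chars),
                          ("html", html_inputs),
                          ("boundary", boundaries)]:
        for v in values:
            yield group, v

cases = list(adhoc_inputs())
print(len(cases))   # 11
```

Each field in the application can then be exercised against every item in `cases`, which is far more repeatable than inventing the values on the fly.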
Best practices in Software Testing
There are lots of materials available on the internet explaining best practices in software testing.

Here I am writing only the most essential things for medium-sized projects, based on my experience and viewpoint.
We should start our testing activities at the very beginning of software development.
Understanding the scope/purpose of the project helps to judge the degree/level of testing required.
Testers should go through the requirements in detail, without missing any points given by the client, before writing test cases.
The test cases should be updated immediately whenever the client gives new requirements or changes existing ones.
The test case document should cover all the requirements, even if some requirements are non-testable; these items should be marked as non-testable. Keeping a traceability matrix document will help achieve this.
The test case document should clearly show the hierarchy/arrangement of test cases, with a clear approach for arranging test cases when many exist with similar steps. It is not advisable to copy and paste similar test cases many times; instead, we can specify only the additional/different steps.
The description of each test case should be written clearly, after understanding its context/module. Steps should be written only after executing them manually. Expected results should not have any ambiguity. If required, prerequisites/preconditions should be mentioned.
Planning and creating a test plan document is essential even for small, short-term projects. The test plan document need not contain all the details, but it should contain at least the very basic components: scope, schedule, risks, environments, and testers.
Planning of development/test/staging environments should be done clearly. It is very important to move the code and maintain code versions in each environment without any ambiguity/confusion. Testers should know which version of the code/data is available in each environment.
Test execution should be done carefully, based on the test cases. It is very important to use appropriate test data; it is better to create the different sets of test data during test case creation itself. The test data should cover valid formats, invalid formats, and boundary values.
The test result (pass/fail) should be clearly updated for each test case. It is good practice to record the actual behavior when a test case fails.

The test results should be communicated to the other parties (developers, business/client) daily, even if not all the test cases have been executed. In that case, we should add a note indicating that test execution is still in progress.

The test execution summary document/mail should clearly mention the date of execution, environment, test name, and test result.
If most of the test cases are failing continuously, there is no point in continuing the execution; it should resume once the major issues are fixed.
It is nice to highlight the testing status (pass, fail, yet to start) in an appropriate color. But just highlighting the test case with a color, without specifying the status text, is not good practice: on a single-color printout of the test report, it is difficult to read the status from the color alone.
It is good practice to do some ad hoc testing in addition to the test case execution.
Clear and proper communication/coordination within the testing team, and also with other teams (developers, client/business), is essential.
The bug report should be prepared very clearly with all essential details, especially the steps/test data for reproducing the bug. The bug report should help the developers reproduce the bug and fix it.
Re-testing, plus a small regression test, is essential whenever a reported bug is fixed.
It is not good to do all the testing manually, as manual testing takes more time and effort, is difficult to manage, and is not consistent or repeatable. So it is better to automate the test cases using test tools such as QTP (QuickTest Professional). We can even use simple shell scripts and VBScript to automate some parts of the testing.

Friday, March 26, 2010

Testing School---How to do System Testing

How to do System Testing
Testing the software system or application as a whole is referred to as system testing. System testing is done on the complete application software to evaluate the software's overall compliance with the business/functional/end-user requirements. System testing is a form of black-box testing, so knowledge of the internal design, structure, or code is not required.

In system testing, a software test professional aims to detect defects or bugs both within the interfaces and within the software as a whole. During integration testing, by contrast, the aim is to detect the bugs/defects between the individual units that are integrated together.

During system testing, the focus is on the software's design and behavior, and even the believed expectations of the customer. So we can also refer to the system testing phase as the investigatory testing phase of the software development life cycle.

At what stage of the SDLC does System Testing come into the picture:

After the integration of all components of the software being developed, the whole software system is rigorously tested to ensure that it meets the specified business, functional & non-functional requirements. System Testing builds on the unit testing and integration testing levels. Generally, a separate and dedicated team is responsible for system testing, and system testing is performed on a staging server.

Why system testing is required:
It is the first level of software testing where the software / application is tested as a whole.
It is done to verify and validate the technical, business, functional and non-functional requirements of the software. It also includes the verification & validation of software application architecture.
System testing is done on a staging environment that closely resembles the production environment where the final software will be deployed.
Entry Criteria for System Testing:
Unit Testing must be completed
Integration Testing must be completed
Complete software system should be developed
A software testing environment that closely resembles the production environment must be available (a staging environment).
System Testing in seven steps:
1. Creation of the System Test Plan
2. Creation of system test cases
3. Selection / creation of test data for system testing
4. Automation of test case execution (if required)
5. Execution of test cases
6. Bug fixing and regression testing
7. Repeat the software test cycle (if required, on multiple environments)
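The execution steps of the cycle above (run the cases, record failures for bug reports, repeat on each environment) can be sketched as a small driver. This is an illustrative sketch, not a prescribed tool; the environment names and the check functions are made-up assumptions.

```python
# A sketch of the later steps of the system test cycle: execute every test
# case on every environment and collect failures, each of which would become
# a bug report. Environment names and checks are illustrative assumptions.

ENVIRONMENTS = ["staging", "pre-production"]  # hypothetical environments

# Each case maps a name to a check that returns True (pass) or False (fail).
TEST_CASES = {
    "TC-001 login page loads": lambda env: True,
    "TC-002 order total is correct": lambda env: True,
}

def run_cycle(environments=ENVIRONMENTS, cases=TEST_CASES):
    """Run every case on every environment; return failures per environment."""
    failures = {}
    for env in environments:
        failed = [name for name, check in cases.items() if not check(env)]
        failures[env] = failed  # each entry here would become a bug report
    return failures

if __name__ == "__main__":
    for env, failed in run_cycle().items():
        print(env, "all passed" if not failed else f"failed: {failed}")
```

After bug fixing, re-running the same driver gives the regression pass of step 6, and looping over more entries in `ENVIRONMENTS` is step 7.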
Contents of a system test plan: The contents of a software system test plan may vary from organization to organization or from project to project. It depends on how we have created the software test strategy, project plan and master test plan of the project. However, the basic contents of a software system test plan should be:

- Scope
- Goals & Objectives
- Area of focus (Critical areas)
- Deliverables
- System testing strategy
- Schedule
- Entry and exit criteria
- Suspension & resumption criteria for software testing
- Test Environment
- Assumptions
- Staffing and Training Plan
- Roles and Responsibilities
- Glossary

How to write system test cases: The system test cases are written in a similar way as functional test cases. However, while creating system test cases, the following two points need to be kept in mind:

- System test cases must cover the use cases and scenarios
- They must validate all types of requirements - technical, UI, functional, non-functional, performance etc.

As per Wikipedia, there are a total of 24 types of testing that need to be considered during system testing. These are:

GUI software testing, Usability testing, Performance testing, Compatibility testing, Error handling testing, Load testing, Volume testing, Stress testing, User help testing, Security testing, Scalability testing, Capacity testing, Sanity testing, Smoke testing, Exploratory testing, Ad hoc testing, Regression testing, Reliability testing, Recovery testing, Installation testing, Idempotency testing, Maintenance testing, Recovery and failover testing, Accessibility testing

The format of a system test case contains:
Test Case ID - a unique number
Test Suite Name
Tester - name of the tester who writes or executes the test case
Requirement - Requirement Id or brief description of the functionality / requirement
How to Test - Steps to follow for execution of the test case
Test Data - Input Data
Expected Result
Actual Result
Pass / Fail
Test Iteration
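The fields listed above can be captured as a simple record so a suite can be stored, filtered, and reported on programmatically. The sketch below is one possible representation in Python, not a prescribed format; the example values are invented.

```python
# One possible record structure for the system test case fields listed
# above. Field names and the sample values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SystemTestCase:
    test_case_id: str          # a unique number
    test_suite_name: str
    tester: str                # who writes or executes the case
    requirement: str           # requirement ID or brief description
    how_to_test: str           # steps to follow for execution
    test_data: str             # input data
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"    # Pass / Fail once executed
    test_iteration: int = 1

# Hypothetical example entry
tc = SystemTestCase(
    test_case_id="ST-042",
    test_suite_name="Checkout",
    tester="A. Tester",
    requirement="REQ-17: order total includes tax",
    how_to_test="Add a taxed product to the cart and proceed to checkout",
    test_data="Product price 10.00, tax rate 8%",
    expected_result="Order total shows 10.80",
)
```

Keeping the cases in a structured form like this makes it easy to count pass/fail per suite or per iteration when preparing the test report.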

Thursday, March 25, 2010

Testing School---Features to be tested in an osCommerce website

Features to be tested in an osCommerce website
Although osCommerce is still in its development stage, the available Milestone releases are considered to be stable with the following features:
General Functionality
Compatible with all PHP 4 versions
All features enabled by default for a complete out-of-the-box solution
Object oriented backend (3.0)
Completely multilingual with English, German, and Spanish provided by default
Setup / Installation
Automatic web-browser based installation and upgrade procedure
Design / Layout
Template structure implementation to:
allow layout changes to be adaptive, easy, and quick to make (3.0)
allow easy integration into an existing site (3.0)
Support for dynamic images
Administration / Backend Functionality
Supports unlimited products and categories
Products-to-categories structure
Categories-to-categories structure
Add/Edit/Remove categories, products, manufacturers, customers, and reviews
Support for physical (shippable) and virtual (downloadable) products
Administration area secured with a username and password defined during installation
Contact customers directly via email or newsletters
Easily backup and restore the database
Print invoices and packaging lists from the order screen
Statistics for products and customers
Multilingual support
Multicurrency support
Automatically update currency exchange rates
Select what to display, and in what order, in the product listing page
Support for static and dynamic banners with full statistics
Customer / Frontend Functionality
All orders stored in the database for fast and efficient retrieval
Customers can view their order history and order statuses
Customers can maintain their accounts
Address book for multiple shipping and billing addresses
Temporary shopping cart for guests and permanent shopping cart for customers
Fast and friendly quick search and advanced search features
Product reviews for an interactive shopping experience
Foreseen checkout procedure
Secure transactions with SSL
Number of products in each category can be shown or hidden
Global and per-category bestseller lists
Display what other customers have ordered along with the currently shown product
Breadcrumb trail for easy site navigation
Product Functionality
Dynamic product attributes relationship
HTML based product descriptions
Automated display of specials
Control whether out-of-stock products are still shown and available for purchase
Customers can subscribe to products to receive related emails/newsletters
Payment Functionality
Accept numerous offline payment processing methods (cheque, money orders, offline credit card processing, ..)
Accept numerous online payment processing methods (PayPal, 2CheckOut, Authorize.net, iPayment, ..)
Disable certain payment services on a zone basis
Shipping Functionality
Weight, price, and destination based shipping modules
Real-time quotes available (UPS, USPS, FedEx, ..)
Free shipping based on amount and destination
Disable certain shipping services on a zone basis
Tax Functionality
Flexible tax implementation on a state and country basis
Set different tax rates for different products
Charge tax on shipping on a per shipping service basis
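When testing the tax features above, it helps to have an independent expected-value calculation: a rate looked up per (country, state) zone, applied to taxable products, with an optional flag for charging tax on shipping. The sketch below is not osCommerce code; the rates and the lookup structure are made-up example values for building test data.

```python
# A sketch of an expected-value calculator for testing zone-based tax:
# rate per (country, state), per-product taxable flag, optional tax on
# shipping. TAX_RATES holds made-up example rates, not real ones.

TAX_RATES = {
    ("US", "CA"): 0.0725,   # hypothetical state rate
    ("US", "OR"): 0.0,      # hypothetical zero-tax state
    ("DE", None): 0.19,     # hypothetical country-wide rate
}

def order_tax(items, country, state=None, shipping=0.0, tax_shipping=False):
    """items: list of (price, taxable) pairs; returns the tax amount."""
    # Fall back from (country, state) to a country-wide rate, else 0.
    rate = TAX_RATES.get((country, state),
                         TAX_RATES.get((country, None), 0.0))
    taxable_total = sum(price for price, taxable in items if taxable)
    if tax_shipping:
        taxable_total += shipping
    return round(taxable_total * rate, 2)
```

A system test can then compare the store's displayed tax against this independent calculation for each zone, including the boundary cases of a zero-rate zone and tax charged on shipping.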