Software testing tutorial

This is a complete software testing tutorial that will help you learn software testing concepts such as system testing, automation testing, regression testing, the bug life cycle, test case design, performance testing, QTP scripts, web application testing, and more.

Here we will discuss the following software testing concepts:

  • Difference between Error, Defect and Bug
  • Bug Life Cycle in Testing
  • Top Down and Bottom Up Approach in Integration Testing
  • What is regression testing
  • What is Automation Testing?
  • What is Performance Testing?
  • What is the need of System Testing?
  • How to do Web Application Testing?
  • Testing Measurement and Metrics
  • Cost Of Defect Repair in Testing
  • What is Defect Age in software testing?
  • Test case design tips
  • QuickTest Testing Process
  • Automated Performance Testing and its Parameters
  • Defect Management and Testing Management by using Quality Center(QC)

Difference between Error, Defect and Bug

Now, let us understand the difference between an Error, a Defect and a Bug in software testing.

Error/Flaw
If a developer finds a mistake in the code, that mistake is called an Error (or Flaw).

Defect/Issue
If a tester detects a mistake in the software, that mistake is called a Defect (or Issue).

Bug
If a tester's defect is accepted by the developers for fixing, the defect is called a Bug. A problem that a customer faces in the software after release is also commonly called a Bug.

Bug Life Cycle in Testing

Now let us understand what the bug life cycle is.

In the software development process, a bug has a life cycle. Before it is closed, a bug goes through this life cycle, attaining different states along the way.

The different states of a bug can be summarized as follows:

  • New
  • Open
  • Assign
  • Test
  • Verified
  • Deferred
  • Reopened
  • Duplicate
  • Rejected and
  • Closed

Description of Various Bug Life Cycle Stages:

New:
When a bug is raised for the first time, its state will be “NEW”. It means that the bug is not yet approved.

Open:
When a tester raises a bug and the test lead approves that the bug is genuine, the lead changes the state to “OPEN”.

Assign:
Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

Test:
Once the developer fixes the bug, he assigns it back to the testing team for the next round of testing, i.e. retesting and regression testing. Before releasing the software with the bug fixed, he changes the state of the bug to “TEST”. This indicates that the bug has been fixed and released to the testing team.

Deferred:
A bug changed to the deferred state is expected to be fixed in one of the next releases. There can be many reasons for deferring a bug: its priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

Rejected:
If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

Duplicate:
If the bug is reported twice, or two bug reports describe the same underlying issue, then one bug's status is changed to “DUPLICATE”.

Verified:
Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

Reopened:
If the bug still exists even after the developer has fixed it, the tester changes the status to “REOPENED”. The bug travels through the life cycle once again.

Closed:
Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
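The stages above can be sketched as a small state machine. The states are taken from this section; the exact set of allowed transitions below is an illustrative assumption, since real defect trackers configure these rules differently:

```python
# A minimal sketch of the bug life cycle as a state machine.
# The states follow the stages described above; the transition
# rules are an illustrative assumption.

ALLOWED_TRANSITIONS = {
    "NEW": {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN": {"ASSIGN"},
    "ASSIGN": {"TEST", "DEFERRED"},
    "TEST": {"VERIFIED", "REOPENED"},
    "VERIFIED": {"CLOSED"},
    "REOPENED": {"ASSIGN"},     # the bug travels the cycle again
    "DEFERRED": {"ASSIGN"},     # picked up in a later release
    "REJECTED": set(),          # terminal states
    "DUPLICATE": set(),
    "CLOSED": set(),
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "NEW"      # every bug starts in the NEW state

    def move_to(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

bug = Bug("Login button unresponsive")
for state in ["OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"]:
    bug.move_to(state)
print(bug.state)  # CLOSED
```

Encoding the transitions this way makes it impossible to, say, close a bug that was never verified, which is exactly the discipline the life cycle is meant to enforce.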


Top Down and Bottom Up Approach in Integration Testing

While doing integration testing, we follow the Top-down and Bottom-up approaches.

Also check out my previous posts on Performance Testing, Tips For test case design and Bug Life Cycle.

Top-Down Testing Approach

Integrating the main module with stub modules is called the Top-down approach. In this case, we use temporary programs called stubs in place of sub-modules that are still under construction. The stubs receive the control passed down by the main module and return canned responses in place of the unfinished modules.

Top down integration testing

Bottom Up Testing Approach

Integrating sub-modules without the main module is called the Bottom-up approach. In this case, instead of the main module, we use a temporary program called a driver. The driver passes control to the sub-modules and connects them for testing.

Bottom up integration testing
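Both ideas can be sketched in a few lines. The function names and values below are hypothetical, chosen only to show where a stub and a driver fit:

```python
# Top-down: the main module (checkout) is ready, but the discount
# sub-module is still under construction, so a stub stands in for it.
def discount_stub(order_total):
    return 0.0  # canned response in place of the unfinished module

def checkout(order_total, discount_fn):
    """Main module under test; the discount sub-module is injected."""
    return order_total - discount_fn(order_total)

assert checkout(100.0, discount_stub) == 100.0

# Bottom-up: the tax sub-module exists, but the main module does not,
# so a temporary driver exercises the sub-module directly.
def tax(amount):
    return round(amount * 0.08, 2)

def tax_driver():
    """Temporary driver standing in for the missing main module."""
    return tax(50.0)

print(tax_driver())  # 4.0
```

The stub replaces a module *below* the one under test, while the driver replaces the module *above* it; that is the whole difference between the two approaches.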

What is regression testing

Now, we will discuss what is regression testing.

When retesting passes on a modified software build, we concentrate on regression testing to find side effects of the modifications in the software. The impact of modifications is called “Regression”.

Regression testing is required when:

  • There is a change in requirements and the code is modified accordingly
  • A new feature is added to the software
  • A defect is fixed
  • A performance issue is fixed

For regression testing, we follow different methodologies:

  • Repeat all tests
  • Repeat related tests
  • Release software to a customer to get regression information
  • Release software as a beta version.
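The “repeat related tests” methodology amounts to a test-selection step: re-run only the test cases mapped to the modules that changed. The test names and module mapping below are illustrative assumptions:

```python
# Sketch of "repeat related tests": after a code change, select only
# the test cases whose modules overlap the changed modules.
# The mapping is a hypothetical example.

TEST_TO_MODULES = {
    "test_login": {"auth"},
    "test_logout": {"auth"},
    "test_search": {"search"},
    "test_checkout": {"cart", "payment"},
}

def select_regression_tests(changed_modules):
    return sorted(
        name for name, modules in TEST_TO_MODULES.items()
        if modules & changed_modules      # any overlap => re-run
    )

print(select_regression_tests({"auth"}))  # ['test_login', 'test_logout']
```

“Repeat all tests” corresponds to skipping the selection step entirely, at the cost of much longer execution time.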

What is Automation Testing?

Now, let us understand what automation testing is, the advantages of automation testing, and the criteria for automation testing.

  • Any task performed with the help of a tool or software program is called Automation.
  • To automate an application, we use a tool plus a programming language.
  • Nowadays many tools are used for automation, e.g. QTP, Selenium, WinRunner, SilkTest, RFT, etc.

Advantages of automation testing

  • Test execution can be completed faster.
  • Testing will be consistent and efficient.
  • Reusability of automated scripts.
  • Ease of reporting.
  • Cost of Testing will be less.

Criteria of automation testing

  • A tool supports our project.
  • Multiple releases are expected.
  • Manual testing would take more time.
  • The application is stable.
  • Client approval, depending on the project.
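The core idea of automation, running the same check over many inputs with a script instead of by hand, can be sketched without any particular tool. The function under test and the data set below are illustrative assumptions:

```python
# A minimal sketch of data-driven automated test execution: one
# scripted check runs over a table of inputs and expected results.
# The validation rule and the test data are hypothetical.

def is_valid_username(name):
    return name.isalnum() and 3 <= len(name) <= 12

test_data = [
    ("alice", True),
    ("ab", False),          # too short
    ("bob!", False),        # illegal character
    ("x" * 13, False),      # too long
]

results = {}
for value, expected in test_data:
    results[value] = "PASS" if is_valid_username(value) == expected else "FAIL"

print(results)
```

This is also where the “reusability of automated scripts” advantage comes from: the same loop runs unchanged against every future build.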

What is Performance Testing?

Now, let us understand what performance testing is.

In performance testing, we concentrate on the speed of processing of the software by checking the factors below:

  • Executing the software under the customer-expected configuration and customer-expected load to measure speed of processing is called Load Testing. Here, load means the number of concurrent users using our software.
  • Executing the software under the customer-expected configuration and more than the customer-expected load to estimate reliability is called Stress Testing.
  • Executing the software under the customer-expected configuration with sudden increases in load to estimate reliability is called Spike Testing.
  • Executing the software under the customer-expected configuration with a continuous load for a long time to estimate durability is called Endurance/Durability/Longevity Testing.
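A load test along these lines can be sketched with a thread pool standing in for concurrent users. The simulated system under test and the numbers below are illustrative assumptions, not a real benchmark:

```python
# Rough load-test sketch: N concurrent "users" call the system under
# test while we record each response time. The target function is a
# stand-in; in practice it would be an HTTP request to the software.
import time
from concurrent.futures import ThreadPoolExecutor

def system_under_test():
    time.sleep(0.01)           # simulated processing time
    return "ok"

def one_user():
    start = time.perf_counter()
    system_under_test()
    return time.perf_counter() - start

concurrent_users = 20          # "load" = number of concurrent users
with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
    timings = list(pool.map(lambda _: one_user(), range(concurrent_users)))

avg = sum(timings) / len(timings)
print(f"avg response time: {avg:.4f}s over {len(timings)} users")
```

Raising `concurrent_users` beyond the expected load turns the same script into a stress test; ramping it up suddenly approximates a spike test; running it for hours approximates an endurance test.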

What is the need of System Testing?

In V-model-like approaches, an organization follows multiple stages of development along with multiple levels of testing.

System testing is mandatory after every program has been tested (unit testing) and every interconnection between related programs has been tested (integration testing), because of the issues below:

  • User interface completeness (all user-friendly facilities).
  • User interface consistency (order of screens).
  • Timing-related defects at the software level (performance, i.e. speed of processing).
  • Interaction with background tasks at the software level (compatibility, interaction, etc.).
  • Coping with data volume, load, and security failures.
  • Special test data conditions (e.g. leap-year checking in a date field).
  • Continuity of functionality at the software level (registration to login to mailing to logout).
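The leap-year example of a special test data condition can be codified as a set of boundary test cases. The rule below is the standard Gregorian leap-year rule:

```python
# Boundary test cases for a "special test data condition": a date
# field must handle leap years correctly, including the century rules.

def is_leap_year(year):
    # divisible by 4, except centuries, except centuries divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

cases = {2000: True, 1900: False, 2024: True, 2023: False}
for year, expected in cases.items():
    assert is_leap_year(year) == expected
print("all leap-year boundary cases pass")
```

1900 and 2000 are the classic traps here: both are divisible by 4, yet only 2000 is a leap year, which is exactly why such conditions need dedicated system-level test data.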

How to do Web Application Testing?

Now, let us discuss web application testing: how to do it, and the web testing process.

Web Application Testing

Web application testing goes beyond the basic functional and system testing of the computer science world to include tests for availability, performance/load, scalability, usability, compatibility, and links.

  • Optimize testing with a thorough risk analysis of the site to identify and prioritize key areas and testing tasks.
  • Consider the interaction between HTML pages, TCP/IP communications, internet connections, firewalls, and applications that run on the server side.
  • Testing ensures the reliability, accuracy, and performance of web-based applications, including web services.
  • Simulate a live application environment if required.

Characteristics of Web App errors:

  • Many types of web app tests uncover problems that are evidenced on the client side via a specific interface (e.g., what you observe may be an error symptom, not the error itself).
  • It is hard to determine whether errors are caused by problems with the server, the client, or the network itself.
  • Some errors are attributable to problems in the static operating environment and some to the dynamic operating environment.

Testing Web Apps for Errors

  • Web App content model is reviewed to uncover errors.
  • Interface model is reviewed to ensure all use cases are accommodated.
  • Design model for Web App is reviewed to uncover navigation errors.
  • User Interface is tested to uncover presentation errors and/or navigation mechanics problems.
  • Selected functional components are unit tested.
  • Navigation throughout the architecture is tested.
  • Web App is implemented in a variety of different environmental configurations and the compatibility of Web App with each is assessed.
  • Security tests are conducted.
  • Performance tests are conducted.

Web Testing Process

  • Content testing: tries to uncover content errors.
  • Interface testing: exercises interaction mechanisms and validates aesthetic aspects of user interface.
  • Navigation testing: makes use of use-cases in the design of the test cases that exercise each usage scenario against the navigation design.
  • Component testing: exercises the web app content and functional units.
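Link testing, one of the checks listed at the start of this section, can be partly automated with the standard library. A real checker would also issue HTTP requests for each link; the sketch below only parses a page and flags empty hrefs so that it stays self-contained:

```python
# Minimal sketch of automated link testing: extract hrefs from an
# HTML page and flag obviously broken (empty) ones.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical page content standing in for a fetched HTML document.
page = '<a href="https://example.com">ok</a> <a href="">broken</a>'
collector = LinkCollector()
collector.feed(page)

broken = [href for href in collector.links if not href.strip()]
print(f"{len(collector.links)} links found, {len(broken)} broken")
```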

Testing Measurement and Metrics

Now, we will discuss testing measurement and metrics.

Quality Assessment Measurements:

These measurements are used by the project manager during testing to assess the quality of the SUT (software under test).

  • Defect Arrival Rate
  • Sufficiency (review of whether the remaining testing time is sufficient for the remaining testing)
  • Defect Priority System
  • Importance to Customers
  • Organisation Trends

Test Management Measurements:

These measurements are used by the test lead to estimate the effort of testers during testing.

Test Status:

  • Number of test cases executed.
  • Number of test cases in execution.
  • Number of test cases yet to be executed.

Pending defects (Quality Gap):

  • Number of defects detected - number of defects fixed.

Tester Efficiency:

  • Number of test cases executed per day.
  • Number of defects detected per day.
  • Number of test cases prepared per day.

Tester Capability improvement measurements:

These measurements are used by test engineers to improve their testing skills.

Defect Removal Efficiency (DRE) = A / (A + B), where:
A = number of defects detected by testers during testing
B = number of defects faced by customers after release
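A worked example of the DRE formula, with illustrative defect counts:

```python
# Defect Removal Efficiency: DRE = A / (A + B).
# The counts below are hypothetical example values.

defects_found_in_testing = 90     # A
defects_found_by_customers = 10   # B

dre = defects_found_in_testing / (defects_found_in_testing + defects_found_by_customers)
print(f"DRE = {dre:.0%}")  # DRE = 90%
```

Note the parentheses: A / (A + B), not (A / A) + B. With 90 defects caught in testing and 10 escaping to customers, the team removed 90% of the defects before release.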

Test Effectiveness:

  • Number of test cases prepared per day in previous projects vs. number of test cases prepared per day in recent projects.
  • Number of test cases executed per day in previous projects vs. number of test cases executed per day in recent projects.

Cost Of Defect Repair in Testing

Now, let us discuss Cost Of Defect Repair in Testing.

Cost Of Defect Repair:
During software projects, you can hear widely different attitudes towards fixing defects, depending upon priorities and motivations:

“We will fix that when we have time. In the meantime, just keep developing! How can you possibly tell how much it will cost to fix later?” – An engineering manager, eager to continue software development.

“Let’s find all the defects before system test. The customer will wait for the product.” – An engineering manager, concerned that shipping a product with defects would prevent customers from buying it.

The choice to fix or not depends upon many factors: the type of product; the risks associated with shipping known or unknown defects; your development processes; and the cost of fixing the defect.

The easiest time to calculate this cost is during the system test time when people are 100% dedicated to finding defects. To begin, count the number of fixes made.

You know how many people were involved, the cost per person-day, and the duration of the system test. The total can be surprisingly high, which is why this figure is worth calculating.

Average cost to fix a defect = ((Number of people × Number of days) / Number of fixed defects) × Cost per person-day
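A worked example of the formula, with illustrative numbers:

```python
# Average cost to fix a defect during system test.
# All figures below are hypothetical example values.

people = 5
days = 20
cost_per_person_day = 400     # in your currency unit
fixed_defects = 50

avg_cost = (people * days / fixed_defects) * cost_per_person_day
print(avg_cost)  # 800.0
```

Here the team spent 100 person-days to fix 50 defects, i.e. 2 person-days per fix, so each fix cost 2 × 400 = 800.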

What is Defect Age in software testing?

Now, we will discuss defect age in software testing.

Defect Age can be measured in two ways in the world of Software testing:

  • Time
  • Phases

Defect Age (In terms of TIME):

  • Defect Age is the difference between the date a defect is detected and the current date (in case the defect is still open) or the date the defect was fixed (in case the defect is already fixed).
  • The ‘defects’ are confirmed and assigned (not just reported).
  • Dropped defects are not counted.
  • The difference in time can be calculated in hours or in days.
  • ‘Fixed defect’ means the defect is verified and closed, not merely ‘fixed’ by the developer.
  • Defect Age in Time = Defect Fix Date (OR Current Date) – Defect Detection Date
  • Normally, the average age of all defects is calculated.
  • Example: If a defect was detected on 01/01/2013 10:00:00 AM and closed on 01/04/2013 12:00:00 PM, the Defect Age is 74 hours.
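The example above can be reproduced with the standard `datetime` module:

```python
# Defect Age in Time = Defect Fix Date - Defect Detection Date,
# using the dates from the example above (MM/DD/YYYY).
from datetime import datetime

detected = datetime(2013, 1, 1, 10, 0, 0)   # 01/01/2013 10:00:00 AM
closed = datetime(2013, 1, 4, 12, 0, 0)     # 01/04/2013 12:00:00 PM

age_hours = (closed - detected).total_seconds() / 3600
print(age_hours)  # 74.0
```

Three full days (72 hours) plus the two extra hours between 10:00 AM and 12:00 PM gives the 74 hours quoted above.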

Defect Age (In terms of PHASES):

  • Defect Age is the difference in phases between the defect injection phase and the defect detection phase.
  • ‘Defect injection phase’ is the phase in the software life cycle where the defect was introduced.
  • ‘Defect detection phase’ is the phase in the software life cycle where the defect was identified.
  • Defect Age in Phase = Defect Detection Phase – Defect Injection Phase
  • Example: Consider that the software life cycle has the following phases:
    1. Requirements Gathering
    2. High-Level Design
    3. Low-Level Design
    4. Coding
    5. Unit Testing
    6. Integration Testing
    7. System Testing
    8. Acceptance Testing

If a defect is identified in System Testing (phase 7) and was introduced in Requirements Gathering (phase 1), the Defect Age is 7 − 1 = 6.

Test case design tips

Let us discuss some tips on Test case design.

You can also check my previous articles on Tutorial on regression testing, Bug Life Cycle and What is Automation Testing?

  • A test case must be designed to be reusable, i.e. it should be executable on future versions of the software as well.
  • A test case must produce consistent results for different testers executing it on the software under test.
  • A test case name must start with Verify/Validate/Check.
  • A test case must be specific to one testing object (field) or operation in the software under test.
  • A test case must be classified as positive or negative.
  • A test case must be divided into sub-test-cases to keep each of them short.
  • A test case should contain at most 10 to 15 test steps.
  • A test case must specify the defect id when the test case fails.
  • A test case must be reviewed by the test lead along with the BA/SA to ensure the coverage of functionality in that test case.
  • A test case must specify the steps the tester has to perform and the expected response of the software under test.
  • A test case must be approved by customer-site representatives or the PM.
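Several of these tips can be expressed as a structured record with a small well-formedness check. The field names below are illustrative assumptions, not a formal template:

```python
# Sketch of a test case record following the tips above: the name
# starts with Verify/Validate/Check, the step count is bounded, and
# fields exist for the defect id, review, and approval.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str                       # must start with Verify/Validate/Check
    steps: list                     # at most 10-15 steps
    test_type: str                  # "positive" or "negative"
    defect_id: str = ""             # filled in only when the case fails
    reviewed_by: str = ""
    approved_by: str = ""

    def is_well_formed(self):
        return (self.name.split()[0] in {"Verify", "Validate", "Check"}
                and 1 <= len(self.steps) <= 15
                and self.test_type in {"positive", "negative"})

tc = TestCase(
    name="Verify login with valid credentials",
    steps=["Open login page", "Enter valid user/password", "Click Login"],
    test_type="positive",
)
print(tc.is_well_formed())  # True
```

A check like this could run over a whole test suite to flag cases that violate the naming or length conventions before review.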

QuickTest Testing Process

The QuickTest testing process consists of seven main phases:

  1. Preparing to record:

Before you record a test, confirm that your application and Quick Test are set to match the needs of your test.

Make sure your application displays the elements on which you want to record, such as a toolbar or a special window pane, and that your application options are set as needed for the purpose of your test.

  2. Recording a session on your application:

As you navigate through your application or website, Quick Test graphically displays each step you perform as a row in the Keyword view. A step is any user action that causes or makes a change in your application, such as clicking a link or image or entering data in a form.

  3. Enhancing your Test:

Inserting checkpoints in your test lets you search for a specific value of a page, object, or text string, which helps you determine whether your application or site is functioning correctly.

Broadening the scope of your test, by replacing fixed values with parameters, lets you check how your application performs the same operations with multiple sets of data.

  4. Debugging your Test:

You debug a test to ensure that it operates smoothly and without interruption.

  5. Running your Test:

You run a test to check the behavior of your application or website. While running, Quick Test opens the application, or connects to the website, and performs each step in your test.

  6. Analyzing the results:

You examine the test results to pinpoint defects in your application.

  7. Reporting defects:

If you have Quality Center installed, you can report the defects you discover to a database. Quality Center is the Mercury test management solution.

Automated Performance Testing and its Parameters

Now, we will discuss Automated Performance Testing and its Parameters.

Automated performance testing is a discipline that leverages products, people, and processes to reduce the risks of application, upgrade, or patch deployments.

At its core, automated performance testing is about applying production workloads to pre-deployment systems while simultaneously measuring system performance and end-user experience.

Parameters to be considered in Performance Testing:

Below are the parameters considered during Performance testing:

  • Response Time (Resp Time)
  • Execution Time(Exec Time)
  • Throughput
  • Server Resource Utilization
  • Network Performance

Response Time:
The time taken from sending the request to receiving the first part of the response.

Execution Time:
The time from sending the request to receiving the complete response.

Throughput:
The amount of work done per second, i.e. data processed per second or the number of requests served per second.

Server Resource Utilization:
How the memory and processor resources are used.

Network Performance:

The delays in the network are monitored.
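The distinction between response time (time to the first part of the reply) and execution time (time to the complete reply) can be sketched with a simulated streaming response. The server function and delays below are illustrative assumptions:

```python
# Measuring response time vs. execution time for a simulated request.
# A real test would time an actual network call instead.
import time

def streaming_response():
    """Hypothetical server: yields the reply in two chunks."""
    time.sleep(0.01)
    yield b"first chunk"
    time.sleep(0.02)
    yield b"last chunk"

start = time.perf_counter()
chunks = streaming_response()
next(chunks)                                   # first part arrives
response_time = time.perf_counter() - start
for _ in chunks:                               # drain the full reply
    pass
execution_time = time.perf_counter() - start

print(response_time < execution_time)  # True: the full reply takes longer
```

Throughput would then be computed as the number of such requests completed per second under load.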

Factors Affecting Performance of the application:

The performance of any application is influenced by the factors below:

  • Multi-user access (concurrent users working at the same time)
  • Data volume (number of records in the DB)
  • Functionality/complexity of the activity

Defect Management and Testing Management by using Quality Center(QC)

Now, we will discuss defect management and test management using Quality Center (QC).

Quality Center is a web-based system for software quality testing across a wide range of IT and application environments. It is designed to optimize and automate key quality activities, including requirements, test, and defect management, functional testing, and business process testing.

It includes industry-leading products such as Mercury Test Director, QTP, Win Runner, and the new Mercury Business Process Testing.

Four Primary functions of Quality Center:

  • Capturing Business requirements.
  • Building test cases and test plans.
  • Creating test sets and test results in Test Lab.
  • Tracking and managing defects.
  1. Capturing Business requirements:
  • Group requirements by business function.
  • Link requirements to test cases and defects.
  • Import/Export facility from/to MS Word and/or MS Excel.
  • View defects associated with a requirement.
  2. Building test cases and test plans:
  • Define test parameters (add/delete/modify test steps to conform to business rules).
  • Validate test coverage for all requirements.
  • Record expected and actual results for each test step.
  • Provide a centralized repository that can store all automated tests.
  • Integrate with Win Runner/QTP for automated testing.
  3. Creating test sets and test results in Test Lab:
  • Group test scripts to achieve testing goals (module functionality).
  • Schedule tests.
  • Record Pass/fail results for each test step.
  4. Tracking and managing defects:
  • Add and Track defects.
  • Follow up with the defect management life cycle.
  • Links with Email systems for defect notification.
  • Save graphs in MS Word or MS Excel.
  • Generate reports as MS Word documents.

Defect Report Template (IEEE 829)

During test execution, if any test case fails, we report those bugs in the format below through the defect tracking team (in manual testing).

Defect Id:-Unique number or name.

Defect Description:-Summary of the detected defect.

Build Version Id:-Version number of the current build (the build in which the tester found the defect).

Feature/Module:-Name of the module (in which the tester detected the defect).

Test Case Id:-Id of the failed test case (the test case being executed when the defect was detected).

Reproducible:-Yes/No

  • Yes:-If the defect appears every time the test is repeated.
  • No:-If the defect appears only intermittently during test execution.

If Yes, attach the corresponding failed test case.

If No, attach the corresponding failed test case and screenshots.


Severity:-The seriousness of the defect with respect to functionality.

  • High (Show Stopper):-The tester is not able to continue testing without this defect being fixed.
  • Medium:-Testing can continue, but the fix is mandatory.
  • Low:-Testing can continue, and the defect may or may not be fixed.

Priority:-The importance of defect fixing with respect to customer interest. (High, Medium, Low)

Test Environment:-The hardware and software used while detecting this defect.

Status:-New/Reopen

  • New:-Reported for the first time.
  • Reopen:-Re-reported.

Reported By:-Name of the test engineer/test executor.

Reported On:-Date and time.

Send To:-Mail id of Defect Tracking Team(DTT)

Reviewed By:-Test Lead

Suggested Fix (Optional):-The tester's suggestion for fixing this defect.
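The template above can be represented as a simple record with a completeness check before it is sent to the defect tracking team. All field values below are illustrative assumptions:

```python
# Sketch of an IEEE 829-style defect report as a dictionary,
# with a minimal completeness check. Values are hypothetical.

defect_report = {
    "Defect Id": "DEF-1024",
    "Defect Description": "Save button returns an error on the profile page",
    "Build Version Id": "2.3.1",
    "Feature/Module": "User Profile",
    "Test Case Id": "TC-211",
    "Reproducible": "Yes",
    "Severity": "High",
    "Priority": "High",
    "Test Environment": "Windows 10, Chrome",
    "Status": "New",
    "Reported By": "tester01",
    "Reported On": "2024-01-15 14:30",
    "Send To": "dtt@example.com",
    "Reviewed By": "lead01",
    "Suggested Fix": "",          # optional field may stay empty
}

# Fields that must be filled before submission (an illustrative subset).
required = ["Defect Id", "Defect Description", "Severity", "Priority", "Status"]
print(all(defect_report.get(f) for f in required))  # True
```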

In this testing tutorial, we covered the following topics:

  • Difference between Error, Defect and Bug
  • Bug Life Cycle in Testing
  • Top Down and Bottom Up Approach in Integration Testing
  • What is regression testing
  • What is Automation Testing?
  • What is Performance Testing?
  • What is the need of System Testing?
  • How to do Web Application Testing?
  • Testing Measurement and Metrics
  • Cost Of Defect Repair in Testing
  • What is Defect Age in software testing?
  • Test case design tips
  • QuickTest Testing Process
  • Automated Performance Testing and its Parameters
  • Defect Management and Testing Management by using Quality Center(QC)