1. Unit Testing
Unit Testing is a software testing method in which individual components or functions of a program are tested in isolation. It is typically performed by developers to ensure that specific modules behave correctly as per requirements.
Unit tests are written using frameworks like JUnit, NUnit, or TestNG. The goal is to catch bugs early and ensure code correctness at a granular level.
- Example: Testing a function that calculates tax by feeding fixed inputs and verifying the output.
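A minimal pytest-style sketch of the tax example, assuming a hypothetical `calculate_tax` function that applies a flat 10% rate (shown in one file for brevity; in practice the function and its tests live in separate modules):

```python
import pytest

def calculate_tax(amount: float, rate: float = 0.10) -> float:
    """Hypothetical function under test: flat-rate tax on an amount."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

def test_calculate_tax_fixed_input():
    # Fixed input, known expected output
    assert calculate_tax(100.0) == 10.0

def test_calculate_tax_rejects_negative_amount():
    with pytest.raises(ValueError):
        calculate_tax(-5.0)
```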
2. Integration Testing
Integration Testing checks the interaction between different software modules after they have been unit tested. This testing ensures that modules work together as expected when combined.
It uncovers issues such as incorrect data exchange, logic mismatch, or integration errors between modules. Techniques include Top-Down, Bottom-Up, and Big Bang.
- Example: Testing the integration of the login module with the user database and session system.
3. System Testing
System Testing validates the complete and integrated software system against specified requirements. It simulates real-world use and is typically conducted by the QA team in an environment similar to production.
This is a high-level testing phase where functionality, performance, security, and other aspects are checked together.
- Example: Testing the entire workflow of an e-commerce site from login to checkout.
4. Smoke Testing
Smoke Testing is a preliminary test performed to check the basic functionality of an application. It acts like a "sanity check" to determine whether a new build is stable enough for deeper testing.
This testing is shallow and wide, focusing on critical features only.
- Example: Verifying that login, search, and cart features work in a new build.
5. Sanity Testing
Sanity Testing is a quick test focused on verifying specific functionalities after minor code changes or bug fixes. It ensures that recent changes have not affected the intended area.
It is usually narrow and deep in scope, unlike smoke testing.
- Example: Checking only the checkout module after fixing a payment issue.
6. Regression Testing
Regression Testing involves re-running previously passed test cases to confirm that recent changes haven't introduced new bugs or broken existing functionality.
This is critical in agile development where frequent changes are common.
- Example: Re-testing user registration after adding new form validations.
7. Functional Testing
Functional Testing ensures the application behaves according to the defined functional requirements. It validates that each function of the software performs its task correctly.
This testing is typically black-box based and focuses on user interactions and outputs.
- Example: Verifying if the search bar returns accurate results when a user types a query.
8. Non-Functional Testing
Non-Functional Testing evaluates attributes such as performance, usability, reliability, and scalability. It ensures the software's quality under various conditions beyond functional correctness.
This helps improve user experience and system efficiency.
- Example: Measuring how fast the system responds under a load of 500 users.
9. Usability Testing
Usability Testing measures how user-friendly, efficient, and pleasant the software is for real users. It focuses on improving user experience and identifying navigational or design issues.
Feedback is gathered from users through observation and interviews.
- Example: Watching users attempt to place an order and identifying where they get stuck.
10. Acceptance Testing
Acceptance Testing is performed by end users or clients to validate that the system meets their expectations and business needs. It usually occurs before the final release of the product.
It confirms that the delivered system is ready for production use.
- Example: A client using the invoicing module to ensure all financial reports generate correctly.
11. Alpha Testing
Alpha Testing is a type of acceptance testing performed by internal employees or testers before releasing the product to actual users. It helps catch bugs early and collect internal feedback.
This is done in a controlled development environment.
- Example: Developers test a mobile app in-house for usability and stability.
12. Beta Testing
Beta Testing involves releasing the software to a limited number of external users for real-world feedback. It helps identify unexpected bugs and collect usability insights before a full launch.
Feedback from beta testers often leads to final improvements.
- Example: A company releasing an early version of its new app to 200 selected users.
13. Exploratory Testing
Exploratory Testing is an informal testing approach where testers explore the application without predefined scripts. Testers use their experience, intuition, and creativity to find bugs.
This technique is useful for quickly discovering unexpected issues.
- Example: Navigating through an app randomly to identify UI glitches or crashes.
14. Ad-Hoc Testing
Ad-Hoc Testing is an unstructured and informal method of testing where the objective is to find defects by randomly using the application. It does not follow any test plan or documentation.
This is often done when there is limited time or documentation available.
- Example: Typing random characters into a form field to test for input handling issues.
15. Performance Testing
Performance Testing evaluates how well an application performs under expected workloads. It checks speed, responsiveness, and stability during peak usage.
This ensures the system can handle the expected traffic and data volume.
- Example: Simulating 1,000 users logging in simultaneously to check response time.
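A rough standard-library sketch of such a simulation; the login URL and credentials are placeholders, and dedicated tools like JMeter, Gatling, or Locust are the usual choice for real load generation:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib import request, parse

LOGIN_URL = "https://example.com/api/login"  # hypothetical endpoint

def login(user_id: int) -> float:
    """Send one login request and return its response time in seconds."""
    data = parse.urlencode({"user": f"user{user_id}", "password": "secret"}).encode()
    start = time.perf_counter()
    try:
        request.urlopen(request.Request(LOGIN_URL, data=data), timeout=10)
    except Exception:
        pass  # a real test would count and report failed requests separately
    return time.perf_counter() - start

if __name__ == "__main__":
    # 1,000 simulated users, at most 100 concurrent requests
    with ThreadPoolExecutor(max_workers=100) as pool:
        timings = list(pool.map(login, range(1000)))
    print(f"avg response time: {sum(timings) / len(timings):.3f}s")
```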
16. Load Testing
Load Testing is a type of performance testing where the system is subjected to a specific load to determine its behavior under normal and peak conditions.
The goal is to identify bottlenecks and ensure consistent performance.
- Example: Applying a load of 500 orders/hour to an e-commerce checkout system.
17. Stress Testing
Stress Testing evaluates the system’s ability to maintain functionality under extreme or beyond-limit conditions. It helps determine the system's breaking point.
This ensures the system fails gracefully under heavy load.
- Example: Flooding the server with 10,000 login requests in 2 minutes.
18. Volume Testing
Volume Testing, also known as flood testing, checks how the system handles a large volume of data. It ensures the application performs well when handling vast datasets.
This is especially important for database-heavy applications.
- Example: Importing 1 million product entries into the system and observing performance.
19. Security Testing
Security Testing ensures that the application is protected from external threats and data breaches. It validates authentication, authorization, encryption, and session management.
This protects sensitive user data and avoids compliance issues.
- Example: Testing for SQL injection vulnerabilities in login forms.
20. Compatibility Testing
Compatibility Testing checks whether the application works across different browsers, devices, operating systems, and networks. This ensures a consistent experience for all users.
It identifies rendering issues, layout bugs, or platform-specific errors.
- Example: Testing an app on Chrome, Safari, and Firefox across iOS and Android devices.
21. Scalability Testing
Scalability Testing evaluates the system's ability to handle growth—in users, transactions, or data volume—without performance degradation. It is essential for systems expected to grow over time.
This testing helps identify how much the system can scale before upgrades or redesigns are necessary.
- Example: Testing how performance changes as the user base increases from 1,000 to 10,000.
22. Recovery Testing
Recovery Testing verifies how well the system recovers after unexpected failures like crashes, network failures, or hardware malfunctions. It ensures the application can return to a stable state without data loss.
This testing is crucial for systems requiring high availability and reliability.
- Example: Forcefully shutting down the system during data processing and checking if recovery occurs automatically.
23. Maintainability Testing
Maintainability Testing assesses how easily a system can be modified to fix defects, add new features, or improve performance. It also evaluates code clarity and modularity.
Maintainable systems reduce long-term development costs and simplify future updates.
- Example: Checking how easily a developer can identify and fix a bug in the codebase.
24. Accessibility Testing
Accessibility Testing ensures the application is usable by people with disabilities, including vision, hearing, and motor impairments. It often involves screen readers, keyboard navigation, and color contrast checks.
This type of testing supports legal compliance and inclusivity.
- Example: Verifying that screen reader software can read out all menu items and labels.
25. Usability Testing
Usability Testing is conducted to evaluate how intuitive and user-friendly the application is. Testers observe real users interacting with the product to identify confusion or inefficiencies.
The aim is to refine the user interface and improve user satisfaction.
- Example: Observing users as they complete a sign-up process and identifying where they get stuck.
26. Compliance Testing
Compliance Testing ensures the application adheres to external regulations, standards, or legal requirements. It is often required in domains like healthcare, finance, and telecommunications.
This protects organizations from fines and legal consequences.
- Example: Verifying that a healthcare app complies with HIPAA regulations.
27. Localization Testing
Localization Testing checks whether the application functions and displays correctly in a specific language and cultural context. This includes formatting of dates, currencies, and translations.
It ensures a seamless experience for users in different regions.
- Example: Checking if the German version of a shopping site correctly uses the € symbol and local spelling.
28. Internationalization Testing
Internationalization Testing ensures that an application is built to support multiple languages and locales without code changes. It validates infrastructure readiness for translation and regional formats.
This is a prerequisite to successful localization.
- Example: Testing that a UI layout adjusts properly when switching from English to Arabic.
29. Installation Testing
Installation Testing checks the installation, upgrade, and uninstallation processes of a software application. It ensures that the setup process is smooth, reliable, and error-free.
This helps users get started without technical issues.
- Example: Verifying that a desktop application installs correctly on both Windows and macOS platforms.
30. Configuration Testing
Configuration Testing evaluates how the application behaves with different combinations of hardware, software, networks, or environments. It ensures the app works in a variety of setups.
This is essential for cross-platform or enterprise-grade applications.
- Example: Running tests on Windows 10 with Chrome, Firefox, and Edge browsers.
31. Mobile Testing
Mobile Testing focuses on validating software applications designed for mobile devices like smartphones and tablets. It checks for usability, responsiveness, performance, and compatibility across devices.
Mobile apps require special attention due to diverse OS versions, screen sizes, and hardware.
- Example: Testing a mobile banking app on Android 11 and iOS 16 devices.
32. Cross-Browser Testing
Cross-Browser Testing ensures that web applications display and function consistently across multiple browsers. Different browsers may interpret HTML, CSS, and JavaScript differently.
This testing helps deliver a uniform user experience.
- Example: Checking layout alignment on Chrome, Safari, Firefox, and Edge browsers.
33. Data-Driven Testing
Data-Driven Testing is an automated testing method where test scripts are executed with multiple sets of input data. It separates test logic from test data for efficiency and reusability.
This approach helps achieve broad test coverage with fewer scripts.
- Example: Verifying a login feature with 100 sets of usernames and passwords from an Excel sheet.
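A sketch of the idea using pytest parametrization, reading credential sets from a CSV file rather than Excel; the file name, its `username,password,expected` columns, and the `login` function are assumptions:

```python
import csv
import pytest

def load_cases(path="login_cases.csv"):
    """Read (username, password, expected) rows from a CSV data file."""
    with open(path, newline="") as f:
        return [(r["username"], r["password"], r["expected"] == "success")
                for r in csv.DictReader(f)]

def login(username: str, password: str) -> bool:
    """Hypothetical system under test."""
    return username == "alice" and password == "s3cret"

@pytest.mark.parametrize("username,password,expected", load_cases())
def test_login_data_driven(username, password, expected):
    # Same test logic, driven by many independent data sets
    assert login(username, password) is expected
```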
34. Keyword-Driven Testing
Keyword-Driven Testing uses high-level keywords to represent test steps in a spreadsheet or external file. These keywords are mapped to code functions in the automation framework.
This method allows non-programmers to create tests without writing code.
- Example: Using a keyword like “ClickButton” in an Excel row to trigger a button click in Selenium.
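A minimal sketch of the dispatch mechanism: rows of (keyword, argument) pairs, as they might be read from a spreadsheet, are mapped to Python functions. The keyword names and functions here are hypothetical stand-ins for real Selenium wrappers:

```python
# Keyword implementations (in a real framework these would wrap Selenium calls)
def open_page(url):
    print(f"opening {url}")

def click_button(button_id):
    print(f"clicking {button_id}")

def verify_text(expected):
    print(f"verifying page contains '{expected}'")

KEYWORDS = {
    "OpenPage": open_page,
    "ClickButton": click_button,
    "VerifyText": verify_text,
}

# Rows as they might be read from an Excel or CSV sheet
test_steps = [
    ("OpenPage", "https://example.com/login"),
    ("ClickButton", "login-btn"),
    ("VerifyText", "Welcome"),
]

for keyword, arg in test_steps:
    KEYWORDS[keyword](arg)  # dispatch each keyword to its implementation
```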
35. Manual Testing
Manual Testing involves testers executing test cases manually without using automation tools. It is essential for exploratory, usability, and ad-hoc testing where human intuition is key.
Though time-consuming, manual testing remains important for UI/UX evaluations and complex scenarios.
- Example: Manually checking how a user navigates through an online booking system.
36. Automated Testing
Automated Testing uses scripts and tools to execute test cases automatically. It accelerates regression, functional, and performance testing and reduces human error.
Popular tools include Selenium, Playwright, Cypress, and JUnit.
- Example: Running a nightly suite of Selenium tests on a CI/CD server.
37. Continuous Testing
Continuous Testing is the practice of running automated tests throughout the software development lifecycle, particularly within CI/CD pipelines. It ensures that defects are caught early and software is always releasable.
This supports agile and DevOps practices by enabling fast feedback loops.
- Example: Running tests automatically every time new code is pushed to GitHub.
38. Headless Testing
Headless Testing runs browser-based tests without opening a visible UI. It is useful for speeding up execution in CI environments and saving system resources.
Headless Chrome, or browsers driven headlessly by frameworks such as Playwright and Selenium, are commonly used.
- Example: Using Chrome in headless mode to run login tests on a Linux server.
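A sketch of a headless Chrome login check using Selenium for Python; the URL and element IDs are hypothetical, and the `--headless=new` flag assumes a recent Chrome with Selenium 4:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)

try:
    driver.get("https://example.com/login")            # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("alice")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "login-btn").click()
    assert "Dashboard" in driver.title                 # assumed post-login title
finally:
    driver.quit()
```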
39. API Testing
API Testing validates the functionality, reliability, performance, and security of Application Programming Interfaces (APIs). It involves sending requests to endpoints and verifying responses.
Tools like Postman, REST Assured, and SoapUI are commonly used.
- Example: Sending a POST request to the user creation API and verifying a 201 Created response.
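The same kind of check expressed with the `requests` library; the base URL, endpoint, and response fields are hypothetical:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API

def test_create_user_returns_201():
    payload = {"name": "Alice", "email": "alice@example.com"}
    response = requests.post(f"{BASE_URL}/users", json=payload, timeout=10)

    # Verify the status code and key fields in the response body
    assert response.status_code == 201
    body = response.json()
    assert body["email"] == payload["email"]
    assert "id" in body
```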
40. UI Testing
UI Testing, or User Interface Testing, checks the visual elements of an application such as buttons, menus, and layouts. It ensures they appear and function as intended across devices and browsers.
This testing focuses on visual consistency, alignment, and responsiveness.
- Example: Verifying that the login button remains visible and clickable on all screen sizes.
41. Code Coverage Testing
Code Coverage Testing measures how much of the source code is exercised by the test suite. It helps identify untested parts of the application and improve test completeness.
Metrics include line coverage, branch coverage, and function coverage.
- Example: Using a tool like JaCoCo to measure what percentage of methods in a Java class are tested.
42. Static Testing
Static Testing involves examining the software's code or documentation without executing the program. It aims to detect errors early in the development process using reviews, walkthroughs, and inspections.
This type of testing helps catch issues like syntax errors, coding standard violations, and poor design.
- Example: Reviewing source code to ensure that variables are declared before use.
43. Dynamic Testing
Dynamic Testing evaluates software by executing code to validate functionality and behavior. It detects issues like logic errors, integration faults, and unexpected output during runtime.
This is the most common form of testing and includes unit, system, and acceptance testing.
- Example: Executing test cases against a login page to verify user authentication flow.
44. Boundary Value Testing
Boundary Value Testing focuses on checking values at the edge of input ranges, where defects often occur. It helps uncover off-by-one errors and input handling issues.
This technique is especially useful in numeric or range-based input fields.
- Example: If an input accepts ages 18–60, tests would include 17, 18, 60, and 61.
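A sketch of the age example as a parametrized test, assuming a hypothetical `is_valid_age` validator for the 18–60 range:

```python
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical validator: accepts ages 18 through 60 inclusive."""
    return 18 <= age <= 60

@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) is expected
```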
45. Equivalence Partitioning
Equivalence Partitioning divides input data into valid and invalid partitions. Test cases are derived from each group to reduce the total number of tests while maintaining coverage.
This method avoids redundant test cases by grouping similar inputs.
- Example: For a form that accepts ages 18–60, test one representative from the valid partition (e.g., 30) and one from each invalid partition (e.g., 10 and 70).
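Using the same hypothetical validator as in the boundary value sketch, one representative per partition is enough:

```python
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical validator: accepts ages 18 through 60 inclusive."""
    return 18 <= age <= 60

@pytest.mark.parametrize("age,expected", [
    (10, False),  # invalid partition: below 18
    (30, True),   # valid partition: 18-60
    (70, False),  # invalid partition: above 60
])
def test_age_partitions(age, expected):
    # One representative per equivalence class instead of every possible age
    assert is_valid_age(age) is expected
```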
46. Decision Table Testing
Decision Table Testing is a technique used to test different input combinations and their corresponding outputs. It is especially useful in applications with complex business logic or rules.
Each column in the table represents a unique test scenario based on conditions and actions.
- Example: Testing a loan approval system where decisions depend on income and credit score values.
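A sketch of the loan example, where each column of the decision table becomes one parametrized case against a hypothetical `approve_loan` rule:

```python
import pytest

def approve_loan(income: int, credit_score: int) -> bool:
    """Hypothetical rule: approve only if income >= 50,000 and score >= 700."""
    return income >= 50_000 and credit_score >= 700

# Each tuple is one column of the decision table: (income, score, expected action)
DECISION_TABLE = [
    (60_000, 750, True),   # high income, good score  -> approve
    (60_000, 650, False),  # high income, poor score  -> reject
    (40_000, 750, False),  # low income,  good score  -> reject
    (40_000, 650, False),  # low income,  poor score  -> reject
]

@pytest.mark.parametrize("income,score,expected", DECISION_TABLE)
def test_loan_decision_table(income, score, expected):
    assert approve_loan(income, score) is expected
```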
47. State Transition Testing
State Transition Testing checks how the system transitions from one state to another based on inputs. It is ideal for systems where behavior changes depending on current conditions or events.
This approach uses state diagrams or tables to design test cases.
- Example: Testing ATM operations like Insert Card → Enter PIN → Withdraw Cash → Eject Card.
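A sketch of the ATM flow as a transition table, with one test walking the valid path and another rejecting an out-of-order event; the states and events are simplified assumptions:

```python
import pytest

# Allowed transitions: (current_state, event) -> next_state
TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "enter_pin"): "authenticated",
    ("authenticated", "withdraw_cash"): "dispensing",
    ("dispensing", "eject_card"): "idle",
}

def next_state(state: str, event: str) -> str:
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid event '{event}' in state '{state}'")
    return TRANSITIONS[(state, event)]

def test_happy_path_returns_to_idle():
    state = "idle"
    for event in ["insert_card", "enter_pin", "withdraw_cash", "eject_card"]:
        state = next_state(state, event)
    assert state == "idle"

def test_invalid_transition_is_rejected():
    with pytest.raises(ValueError):
        next_state("idle", "withdraw_cash")  # cannot withdraw before card and PIN
```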
48. Use Case Testing
Use Case Testing involves designing test cases based on user interactions or workflows. It helps validate real-life user scenarios from start to finish.
This technique ensures that the application supports user goals as intended.
- Example: A user creating an account, adding items to a cart, and completing a purchase.
49. Error Guessing
Error Guessing is a technique where testers use experience and intuition to identify problematic areas in the application. It does not rely on formal test design techniques.
This method is often used after formal testing to uncover hidden defects.
- Example: Trying to submit a form without entering required fields to see if it throws an error.
50. Pairwise Testing
Pairwise Testing is a combinatorial technique in which test cases are chosen so that every possible pair of input parameter values is covered by at least one test. It achieves strong coverage with far fewer test cases than testing every combination.
This method is ideal for reducing the number of tests in complex input scenarios.
- Example: Testing three fields (OS, browser, language) by ensuring every pair (e.g., Chrome + English) is covered at least once.
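A small sketch that verifies a hand-picked set of four cases covers every value pair for three two-valued parameters (the full combination set would need eight); real pairwise sets are usually generated with tools such as PICT or allpairspy:

```python
from itertools import combinations

OS = ["Windows", "macOS"]
BROWSER = ["Chrome", "Firefox"]
LANGUAGE = ["English", "German"]

# Four hand-picked cases instead of the full 2 * 2 * 2 = 8 combinations
PAIRWISE_CASES = [
    ("Windows", "Chrome", "English"),
    ("Windows", "Firefox", "German"),
    ("macOS", "Chrome", "German"),
    ("macOS", "Firefox", "English"),
]

def covered_pairs(cases):
    """Collect every (parameter index, value) pair seen across the cases."""
    pairs = set()
    for case in cases:
        for (i, a), (j, b) in combinations(enumerate(case), 2):
            pairs.add(((i, a), (j, b)))
    return pairs

def all_pairs(params):
    """Every pair of values that must appear together at least once."""
    pairs = set()
    for (i, p1), (j, p2) in combinations(enumerate(params), 2):
        for a in p1:
            for b in p2:
                pairs.add(((i, a), (j, b)))
    return pairs

assert all_pairs([OS, BROWSER, LANGUAGE]) <= covered_pairs(PAIRWISE_CASES)
print("every pair of parameter values is covered at least once")
```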
51. Orthogonal Array Testing
Orthogonal Array Testing is a statistical method used to generate test cases with maximum coverage using minimal combinations. It is useful for complex systems with multiple parameters.
This structured approach improves test efficiency and defect detection rate.
- Example: Testing multiple car models with various fuel types and transmissions using an orthogonal matrix.
52. Model-Based Testing
Model-Based Testing creates test cases from models that represent the desired behavior of the system. These models may include state machines, flowcharts, or activity diagrams.
It ensures that tests reflect both structure and dynamic interactions of the application.
- Example: Generating tests from a login workflow modeled using a state diagram.
53. Cause-Effect Graphing
Cause-Effect Graphing is a technique that maps causes (input conditions) to effects (output actions). The graph is then converted into a decision table for test case generation.
It is especially helpful for testing complex logic with multiple inputs and conditions.
- Example: Testing insurance premium calculations based on age, location, and vehicle type.
54. Mutation Testing
Mutation Testing checks the effectiveness of existing test cases by making small changes (mutations) to the code. If the tests fail on the mutated code, the mutant is "killed" and the suite is considered strong; if they still pass, gaps exist in the test suite.
It helps assess the quality and robustness of test suites.
- Example: Changing a `>` to `>=` in a condition and checking if tests detect the error.
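A hand-rolled illustration of the idea (tools such as mutmut for Python or PIT for Java automate it): the mutant changes `>` to `>=`, and a test on the boundary value detects it:

```python
def passes_exam(score: int) -> bool:
    return score > 50           # original condition

def passes_exam_mutant(score: int) -> bool:
    return score >= 50          # mutant: `>` changed to `>=`

def run_suite(fn) -> bool:
    """Return True if every assertion passes for the given implementation."""
    try:
        assert fn(80) is True
        assert fn(20) is False
        assert fn(50) is False  # boundary check: this assertion kills the mutant
        return True
    except AssertionError:
        return False

assert run_suite(passes_exam) is True          # original passes the suite
assert run_suite(passes_exam_mutant) is False  # mutant is detected (killed)
```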
55. Error Seeding
Error Seeding involves deliberately adding known bugs to the software to evaluate the efficiency of the testing process. The number of seeded bugs found is used to estimate how many real bugs may remain.
This method provides insights into test coverage and team effectiveness.
- Example: Adding three known defects and checking if testers identify them during manual testing.
56. Visual Testing
Visual Testing ensures that the graphical interface of an application appears correctly and consistently. It compares UI screenshots pixel-by-pixel to detect misalignment or rendering issues.
This technique is widely used in responsive and cross-browser design validation.
- Example: Using tools like Percy or Applitools to detect layout shifts between versions.
57. Snapshot Testing
Snapshot Testing captures the output of components (especially UI) and compares them against a stored snapshot. If the current output differs, the test fails, indicating changes in appearance or behavior.
It is commonly used in frontend frameworks like React.
- Example: Verifying that a React component's UI hasn't changed unexpectedly after a code change.
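A framework-free sketch of the mechanism (React projects typically use Jest's built-in snapshot support): the first run records the output, and later runs fail on any difference. The render function and snapshot path are hypothetical:

```python
from pathlib import Path

SNAPSHOT = Path("snapshots/greeting.snap")  # hypothetical stored snapshot

def render_greeting(name: str) -> str:
    """Hypothetical component: returns the markup it would render."""
    return f"<h1>Hello, {name}!</h1>"

def test_greeting_matches_snapshot():
    output = render_greeting("Alice")
    if not SNAPSHOT.exists():
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(output)       # first run records the snapshot
    # Later runs fail if the rendered output has changed unexpectedly
    assert output == SNAPSHOT.read_text()
```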
58. Assertion Testing
Assertion Testing validates outcomes by using assertions—statements that check whether a condition is true. Assertions are used throughout test scripts to confirm expected vs. actual behavior.
This is a core element of automated testing frameworks like JUnit, TestNG, or PyTest.
- Example: Asserting that the result of a search function returns 10 items.
59. Mocking
Mocking involves simulating components or external systems that are not yet available or are difficult to use during testing. It enables testers to isolate the unit under test.
Mocks are essential in unit and integration testing to mimic dependencies.
- Example: Mocking a payment gateway while testing the checkout process.
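A sketch using Python's `unittest.mock` to stand in for the payment gateway while testing a hypothetical `checkout` function:

```python
from unittest.mock import Mock

def checkout(cart_total: float, gateway) -> str:
    """Hypothetical checkout logic that depends on an external payment gateway."""
    result = gateway.charge(amount=cart_total)
    return "order_confirmed" if result["status"] == "success" else "payment_failed"

def test_checkout_with_mocked_gateway():
    gateway = Mock()
    gateway.charge.return_value = {"status": "success"}  # simulated gateway response

    assert checkout(49.99, gateway) == "order_confirmed"
    gateway.charge.assert_called_once_with(amount=49.99)  # verify the interaction
```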
60. Stubbing
Stubbing is a technique where simplified implementations of functions or modules are used in place of the real ones. It helps isolate test logic and control the behavior of external dependencies.
Stubs often return hardcoded values to simulate specific conditions.
- Example: Using a stubbed API response to test how the UI handles successful data fetch.
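A sketch of a stub: a simplified replacement that returns hardcoded data so the presentation logic can be tested in isolation; the class and fields are hypothetical:

```python
class StubProductApi:
    """Stub standing in for the real product API; always returns fixed data."""
    def fetch_products(self):
        return [{"id": 1, "name": "Laptop", "price": 999.0}]

def render_product_list(api) -> str:
    """Hypothetical UI logic that formats whatever the API returns."""
    products = api.fetch_products()
    if not products:
        return "No products available"
    return ", ".join(f"{p['name']} (${p['price']:.0f})" for p in products)

def test_ui_renders_successful_fetch():
    assert render_product_list(StubProductApi()) == "Laptop ($999)"
```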
61. Driver Script
A driver script is a central control file in automated testing that calls various test scripts and components. It manages the flow of execution and handles input, output, and logging.
This is especially useful in data-driven or modular frameworks.
- Example: A Selenium driver script that reads input from Excel and calls test modules accordingly.
62. Test Harness
A test harness is a collection of test data, scripts, and utilities used to automate the testing process. It provides a consistent testing environment and reports results for validation.
It often includes drivers and stubs to simulate missing components.
- Example: A custom framework that simulates API responses to test frontend functionality.
63. Test Data
Test data consists of input values used during test execution to verify software behavior. It can be static, generated, or fetched from external files, and is crucial for meaningful test cases.
Managing test data effectively improves test accuracy and repeatability.
- Example: Using multiple user credentials to test different roles in a login system.
64. Bug Report
A bug report documents an issue found during testing, including steps to reproduce, expected and actual results, severity, and screenshots. Clear bug reports accelerate the debugging process.
They are typically created in tools like Jira, Bugzilla, or Azure DevOps.
- Example: A bug report detailing that the "Submit" button doesn’t trigger any action in Safari browser.
65. Defect Life Cycle
The defect life cycle describes the stages a bug goes through—from identification to closure. Common stages include New, Assigned, In Progress, Fixed, Retested, and Closed.
Managing this cycle ensures traceability and accountability of defects.
- Example: A bug goes from “New” to “Fixed” after the developer resolves the issue, then “Closed” after retesting.
66. Test Plan
A test plan is a formal document outlining the strategy, scope, resources, schedule, and deliverables for the testing process. It serves as a blueprint for how testing will be conducted.
It ensures clarity among teams and aligns testing with project goals.
- Example: A test plan that includes testing timelines, tools, responsibilities, and risk mitigation plans.
67. Test Case
A test case defines a specific input, execution condition, and expected result used to verify a particular software feature. It includes steps to follow, data to input, and outcomes to validate.
Well-written test cases improve test coverage and reproducibility.
- Example: A test case for login: Enter valid username and password → Click Login → Verify homepage loads.
68. Test Scenario
A test scenario is a high-level description of what needs to be tested. It often represents a user story or use case and may encompass multiple test cases.
Scenarios help testers focus on the goal of the feature rather than technical steps.
- Example: “Verify user can place an order” may include multiple cases like login, cart, payment, and confirmation.
69. Test Strategy
A test strategy outlines the overall approach to testing, including test types, environments, tools, and automation plans. It is often a part of the master test plan and guides decision-making.
This ensures consistency across different teams and releases.
- Example: A strategy that defines the use of Selenium for regression and Postman for API testing.
70. Test Summary Report
The test summary report is a document generated at the end of a testing cycle. It summarizes the testing activities, results, metrics, issues, and recommendations.
This report helps stakeholders evaluate product readiness.
- Example: A report showing 200 test cases passed, 5 failed, and 3 critical bugs open in the current release.
71. Severity
Severity defines the impact of a defect on the application’s functionality or system operation. It is assigned by the tester and ranges from critical to low based on how badly the feature is broken.
Severity guides prioritization and defect triaging.
- Example: A crash in the payment gateway is a high-severity bug.
72. Priority
Priority indicates how urgently a defect should be fixed. It is typically assigned by the project or product manager based on release schedules, business needs, and impact.
Priority may differ from severity in some cases.
- Example: A minor typo on the homepage may have low severity but high priority before launch.
73. Test Log
A test log is a detailed record of test execution, capturing actions performed, environment details, results, and any anomalies encountered. Logs are vital for debugging and audit purposes.
Automation tools often generate logs automatically.
- Example: A Selenium log showing element not found error during UI verification.
74. Traceability Matrix
The traceability matrix maps requirements to their corresponding test cases to ensure all features are covered. It ensures that no requirement is missed during testing.
This matrix is also useful for impact analysis and audit trails.
- Example: Mapping Requirement #15 (User Profile Update) to Test Cases TC101–TC105.
75. Build Verification Testing (BVT)
BVT, also known as a build acceptance test or smoke test, is a subset of tests run on each new build to ensure that it is stable enough for further testing.
It provides quick feedback and avoids wasting time on broken builds.
- Example: Automatically checking login, dashboard, and logout on every CI pipeline build.
76. Environment Setup
Environment setup involves configuring hardware, software, network, and tools required to test an application. This includes operating systems, browsers, databases, and testing frameworks.
Proper setup reduces flaky test results and execution issues.
- Example: Installing JDK, TestNG, MySQL, and a test management tool before executing tests.
77. Test Environment
The test environment is a replica of the production or staging system where testing is performed. It includes the application, servers, test data, configurations, and third-party integrations.
Stable environments reduce false positives and increase test reliability.
- Example: Running tests on a UAT server that mirrors the production setup with masked data.
78. CI/CD
CI/CD (Continuous Integration/Continuous Deployment) is a DevOps practice that automates code integration, testing, and delivery. It speeds up the release process and reduces manual errors.
Testing is a key component of the CI/CD pipeline to ensure release readiness.
- Example: Using Jenkins to run test suites and deploy successful builds to staging automatically.
79. Code Review
Code review is the manual examination of code by developers to detect defects, improve quality, and share knowledge. It often occurs before testing and reduces the number of bugs caught later.
Tools like GitHub and Bitbucket simplify review processes with pull requests.
- Example: A senior developer reviews changes to the payment module and suggests optimizations before merging.
80. Shift Left Testing
Shift Left Testing refers to moving testing activities earlier in the software development lifecycle. It encourages early validation, continuous integration, and collaboration between developers and testers.
This approach helps reduce defects and shortens feedback loops.
- Example: Writing unit tests and reviewing requirements during the planning phase of development.
81. Shift Right Testing
Shift Right Testing involves testing in the production environment or post-deployment phase. It emphasizes real-world usage, monitoring, and user feedback to uncover issues missed earlier.
This approach is useful for performance validation and A/B testing.
- Example: Monitoring real-time user behavior to test a newly released feature.
82. Chaos Testing
Chaos Testing involves deliberately injecting failures into a system to test its resilience and recovery capabilities. It helps identify weaknesses in distributed systems and cloud infrastructure.
This technique is often used in DevOps and site reliability engineering (SRE).
- Example: Randomly terminating microservices to ensure the system continues to function gracefully.
83. A/B Testing
A/B Testing is a technique to compare two versions of a webpage or feature to determine which performs better based on user behavior. It’s widely used in UI/UX optimization.
It relies on statistical analysis of real user interactions.
- Example: Showing half of users a blue “Buy Now” button and the other half a red one to see which converts better.
84. Canary Testing
Canary Testing is the practice of releasing a new version of software to a small subset of users before rolling it out to the entire user base. It minimizes risk by allowing early issue detection.
It's commonly used in continuous delivery pipelines.
- Example: Deploying a new search algorithm to 5% of users for feedback before full release.
85. Smoke Testing
Smoke Testing, also known as “build verification testing,” is a quick set of tests to check basic functionality of an application. It ensures critical paths work and the build is stable enough for deeper testing.
This testing is performed after each new build.
- Example: Verifying login, dashboard access, and logout features immediately after deployment.
86. Sanity Testing
Sanity Testing is a brief run-through to verify that a specific feature or bug fix is working as expected. It is typically performed after receiving a new software build with minor changes.
It is narrower and more focused than smoke testing.
- Example: Checking if the “Forgot Password” feature works after a quick patch was applied.
87. Exploratory Testing
Exploratory Testing involves simultaneous learning, test design, and execution without predefined test cases. It relies on the tester’s intuition and knowledge to uncover unexpected issues.
This type of testing is flexible and often uncovers edge cases.
- Example: Navigating through an app randomly to observe unexpected behaviors.
88. Ad-hoc Testing
Ad-hoc Testing is informal testing without planning or documentation. It focuses on breaking the system using random scenarios, often guided by tester creativity and past experience.
It helps find bugs that structured testing might miss.
- Example: Trying to upload a corrupted image file and seeing how the system reacts.
89. Benchmark Testing
Benchmark Testing evaluates an application’s performance against industry standards or competitors. It establishes a baseline to measure future improvements or regressions.
Common metrics include throughput, response time, and CPU usage.
- Example: Comparing the response time of a banking portal with that of other leading banks.
90. Certification Testing
Certification Testing ensures that a software product complies with industry standards and receives formal approval. It is common in domains like telecommunications, healthcare, and finance.
This testing is usually done by third-party organizations.
- Example: A point-of-sale (POS) system undergoing PCI DSS certification testing.
91. Regression Testing
Regression Testing verifies that recent code changes have not adversely affected existing functionality. It is crucial for ensuring long-term stability of software as it evolves.
Automated regression suites are ideal for frequent updates.
- Example: Running a full test suite after a new payment feature is added.
92. Retesting
Retesting is done to verify that specific defects have been fixed. Unlike regression testing, which checks surrounding areas, retesting focuses only on the failed test cases from previous runs.
This ensures the reported issue is resolved correctly.
- Example: After fixing a login issue, retesting that login works with correct credentials.
93. Compatibility Testing
Compatibility Testing checks whether an application works as expected across different browsers, devices, networks, and operating systems. It ensures a consistent experience for all users.
This is critical for web and mobile apps.
- Example: Ensuring a responsive design looks the same on both Android and iOS browsers.
94. Latency Testing
Latency Testing measures the time delay between a user action and the system’s response. It helps evaluate application responsiveness, especially in real-time systems like video conferencing apps.
This is often part of performance and load testing.
- Example: Measuring how fast a chat message appears after hitting “send.”
95. Load Testing
Load Testing determines how a system behaves under expected user loads. It ensures the application can perform reliably when multiple users access it simultaneously.
This helps identify bottlenecks before production use.
- Example: Simulating 1,000 users placing orders at the same time on an e-commerce site.
96. Spike Testing
Spike Testing examines the system’s performance when sudden and extreme changes in load occur. It checks how well the system handles traffic surges or drop-offs.
This helps prepare for scenarios like flash sales or viral traffic spikes.
- Example: Simulating a spike from 100 to 10,000 users in under 5 minutes.
97. Soak Testing
Soak Testing, or endurance testing, checks how the system performs under sustained load over a long period. It identifies issues like memory leaks, database saturation, or performance degradation.
This test helps ensure long-term stability and reliability.
- Example: Running a video streaming service under continuous load for 24 hours.
98. Volume Testing
Volume Testing evaluates how the system handles large volumes of data, rather than user load. It tests the application's ability to process, store, and retrieve massive datasets efficiently.
It helps uncover issues like slow queries and data loss.
- Example: Importing 1 million records into a CRM system and measuring performance.
99. End-to-End Testing
End-to-End Testing validates complete workflows by simulating real user scenarios from start to finish. It ensures that all components of the system work together as expected.
This type of testing often involves multiple systems and integrations.
- Example: A user logs in, books a flight, makes payment, and receives a confirmation email.
100. User Acceptance Testing (UAT)
User Acceptance Testing is conducted by end users or stakeholders to validate that the system meets business requirements. It occurs at the end of the development cycle before go-live.
UAT provides the final approval for release.
- Example: A client tests the employee leave module to ensure it works according to their HR policies.