QA Testing Definitions

1. Unit Testing

Unit Testing is a software testing method in which individual components or functions of a program are tested in isolation. It is typically performed by developers to ensure that specific modules behave correctly as per requirements.

Unit tests are written using frameworks like JUnit, NUnit, or TestNG. The goal is to catch bugs early and ensure code correctness at a granular level.
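
A minimal sketch of a unit test using pytest (the `calculate_discount` function is hypothetical and defined inline so the example is self-contained):

```python
import pytest


def calculate_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_applies_discount():
    # The function is exercised in isolation with known inputs and outputs.
    assert calculate_discount(200.0, 25) == 150.0


def test_rejects_invalid_percent():
    # Error handling is also part of the unit's contract.
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)
```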

2. Integration Testing

Integration Testing checks the interaction between different software modules after they have been unit tested. This testing ensures that modules work together as expected when combined.

It uncovers issues such as incorrect data exchange, interface mismatches, or faulty interactions between modules. Techniques include Top-Down, Bottom-Up, and Big Bang.

3. System Testing

System Testing validates the complete and integrated software system against specified requirements. It simulates real-world use and is typically conducted by the QA team in an environment similar to production.

This is a high-level testing phase where functionality, performance, security, and other aspects are checked together.

4. Smoke Testing

Smoke Testing is a preliminary test performed to check the basic functionality of an application. It acts as a quick go/no-go check to determine whether a new build is stable enough for deeper testing.

This testing is shallow and wide, focusing on critical features only.

5. Sanity Testing

Sanity Testing is a quick test focused on verifying specific functionalities after minor code changes or bug fixes. It confirms that the recent changes work as intended and have not broken closely related functionality.

It is usually narrow and deep in scope, unlike smoke testing.

6. Regression Testing

Regression Testing involves re-running previously passed test cases to confirm that recent changes haven't introduced new bugs or broken existing functionality.

This is critical in agile development where frequent changes are common.

7. Functional Testing

Functional Testing ensures the application behaves according to the defined functional requirements. It validates that each function of the software performs its task correctly.

This testing is typically black-box based and focuses on user interactions and outputs.

8. Non-Functional Testing

Non-Functional Testing evaluates attributes such as performance, usability, reliability, and scalability. It ensures the software's quality under various conditions beyond functional correctness.

This helps improve user experience and system efficiency.

9. Usability Testing

Usability Testing measures how user-friendly, efficient, and pleasant the software is for real users. It focuses on improving user experience and identifying navigational or design issues.

Feedback is gathered from users through observation and interviews.

10. Acceptance Testing

Acceptance Testing is performed by end users or clients to validate that the system meets their expectations and business needs. It usually occurs before the final release of the product.

It confirms that the delivered system is ready for production use.

11. Alpha Testing

Alpha Testing is a type of acceptance testing performed by internal employees or testers before releasing the product to actual users. It helps catch bugs early and collect internal feedback.

This is done in a controlled development environment.

12. Beta Testing

Beta Testing involves releasing the software to a limited number of external users for real-world feedback. It helps identify unexpected bugs and collect usability insights before a full launch.

Feedback from beta testers often leads to final improvements.

13. Exploratory Testing

Exploratory Testing is an informal testing approach where testers explore the application without predefined scripts. Testers use their experience, intuition, and creativity to find bugs.

This technique is useful for quickly discovering unexpected issues.

14. Ad-Hoc Testing

Ad-Hoc Testing is an unstructured and informal method of testing where the objective is to find defects by randomly using the application. It does not follow any test plan or documentation.

This is often done when there is limited time or documentation available.

15. Performance Testing

Performance Testing evaluates how well an application performs under expected workloads. It checks speed, responsiveness, and stability during peak usage.

This ensures the system can handle the expected traffic and data volume.

16. Load Testing

Load Testing is a type of performance testing where the system is subjected to a specific load to determine its behavior under normal and peak conditions.

The goal is to identify bottlenecks and ensure consistent performance.
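
A minimal load-test sketch using Locust, a Python load-testing tool; the host and the endpoints are placeholders for illustration:

```python
# locustfile.py -- minimal Locust load-test sketch (illustrative only).
# Run with, e.g.: locust -f locustfile.py --host https://example.com
from locust import HttpUser, task, between


class ShopUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def browse_products(self):
        # Weighted 3x: most of the simulated traffic hits the catalogue.
        self.client.get("/products")

    @task(1)
    def view_cart(self):
        self.client.get("/cart")
```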

17. Stress Testing

Stress Testing evaluates the system’s ability to maintain functionality under extreme or beyond-limit conditions. It helps determine the system's breaking point.

This ensures the system fails gracefully under heavy load.

18. Volume Testing

Volume Testing, also known as flood testing, checks how the system handles a large volume of data. It ensures the application performs well when handling vast datasets.

This is especially important for database-heavy applications.

19. Security Testing

Security Testing ensures that the application is protected from external threats and data breaches. It validates authentication, authorization, encryption, and session management.

This protects sensitive user data and avoids compliance issues.

20. Compatibility Testing

Compatibility Testing checks whether the application works across different browsers, devices, operating systems, and networks. This ensures a consistent experience for all users.

It identifies rendering issues, layout bugs, or platform-specific errors.

21. Scalability Testing

Scalability Testing evaluates the system's ability to handle growth—in users, transactions, or data volume—without performance degradation. It is essential for systems expected to grow over time.

This testing helps identify how much the system can scale before upgrades or redesigns are necessary.

22. Recovery Testing

Recovery Testing verifies how well the system recovers after unexpected failures like crashes, network failures, or hardware malfunctions. It ensures the application can return to a stable state without data loss.

This testing is crucial for systems requiring high availability and reliability.

23. Maintainability Testing

Maintainability Testing assesses how easily a system can be modified to fix defects, add new features, or improve performance. It also evaluates code clarity and modularity.

Maintainable systems reduce long-term development costs and simplify future updates.

24. Accessibility Testing

Accessibility Testing ensures the application is usable by people with disabilities, including vision, hearing, and motor impairments. It often involves screen readers, keyboard navigation, and color contrast checks.

This type of testing supports legal compliance and inclusivity.

25. Usability Testing

Usability Testing is conducted to evaluate how intuitive and user-friendly the application is. Testers observe real users interacting with the product to identify confusion or inefficiencies.

The aim is to refine the user interface and improve user satisfaction.

26. Compliance Testing

Compliance Testing ensures the application adheres to external regulations, standards, or legal requirements. It is often required in domains like healthcare, finance, and telecommunications.

This protects organizations from fines and legal consequences.

27. Localization Testing

Localization Testing checks whether the application functions and displays correctly in a specific language and cultural context. This includes formatting of dates, currencies, and translations.

It ensures a seamless experience for users in different regions.

28. Internationalization Testing

Internationalization Testing ensures that an application is built to support multiple languages and locales without code changes. It validates infrastructure readiness for translation and regional formats.

This is a prerequisite to successful localization.

29. Installation Testing

Installation Testing checks the installation, upgrade, and uninstallation processes of a software application. It ensures that the setup process is smooth, reliable, and error-free.

This helps users get started without technical issues.

30. Configuration Testing

Configuration Testing evaluates how the application behaves with different combinations of hardware, software, networks, or environments. It ensures the app works in a variety of setups.

This is essential for cross-platform or enterprise-grade applications.

31. Mobile Testing

Mobile Testing focuses on validating software applications designed for mobile devices like smartphones and tablets. It checks for usability, responsiveness, performance, and compatibility across devices.

Mobile apps require special attention due to diverse OS versions, screen sizes, and hardware.

32. Cross-Browser Testing

Cross-Browser Testing ensures that web applications display and function consistently across multiple browsers. Different browsers may interpret HTML, CSS, and JavaScript differently.

This testing helps deliver a uniform user experience.

33. Data-Driven Testing

Data-Driven Testing is an automated testing method where test scripts are executed with multiple sets of input data. It separates test logic from test data for efficiency and reusability.

This approach helps achieve broad test coverage with fewer scripts.
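
A minimal pytest sketch of data-driven testing, where one test function runs against several input/expected-output pairs (the `is_valid_email` validator is hypothetical and defined inline):

```python
import re

import pytest


def is_valid_email(address: str) -> bool:
    """Hypothetical validator, included only to keep the example runnable."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None


# The test logic is written once; the data rows drive repeated execution.
@pytest.mark.parametrize(
    "address, expected",
    [
        ("user@example.com", True),
        ("user@sub.example.org", True),
        ("not-an-email", False),
        ("missing@tld", False),
    ],
)
def test_email_validation(address, expected):
    assert is_valid_email(address) is expected
```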

34. Keyword-Driven Testing

Keyword-Driven Testing uses high-level keywords to represent test steps in a spreadsheet or external file. These keywords are mapped to code functions in the automation framework.

This method allows non-programmers to create tests without writing code.
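
A minimal sketch of the keyword-driven idea in Python: keywords from an external table are mapped to functions, so tests can be composed by editing the table rather than writing code (the keywords and actions here are invented for illustration):

```python
# Keyword-driven sketch: each row pairs a keyword with its arguments,
# and a dispatcher maps keywords to implementation functions.

def open_page(url):
    print(f"opening {url}")                 # placeholder action

def type_text(field, value):
    print(f"typing '{value}' into {field}")

def click(element):
    print(f"clicking {element}")

KEYWORDS = {"open_page": open_page, "type_text": type_text, "click": click}

# In practice these rows would come from a spreadsheet or CSV file.
test_steps = [
    ("open_page", ["https://example.com/login"]),
    ("type_text", ["username", "alice"]),
    ("click", ["login_button"]),
]

def run(steps):
    for keyword, args in steps:
        KEYWORDS[keyword](*args)            # look up and execute the mapped function

run(test_steps)
```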

35. Manual Testing

Manual Testing involves testers executing test cases manually without using automation tools. It is essential for exploratory, usability, and ad-hoc testing where human intuition is key.

Though time-consuming, manual testing remains important for UI/UX evaluations and complex scenarios.

36. Automated Testing

Automated Testing uses scripts and tools to execute test cases automatically. It accelerates regression, functional, and performance testing and reduces human error.

Popular tools include Selenium, Playwright, Cypress, and JUnit.

37. Continuous Testing

Continuous Testing is the practice of running automated tests throughout the software development lifecycle, particularly within CI/CD pipelines. It ensures that defects are caught early and software is always releasable.

This supports agile and DevOps practices by enabling fast feedback loops.

38. Headless Testing

Headless Testing runs browser-based tests without opening a visible UI. It is useful for speeding up execution in CI environments and saving system resources.

Headless browsers such as headless Chrome, driven by tools like Playwright or Puppeteer, are commonly used.
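
A minimal headless-browser sketch using Playwright's Python API (assumes Playwright and its browsers are installed; the URL is a placeholder):

```python
# Headless check: no visible window is opened, which suits CI servers.
# Requires: pip install playwright && playwright install
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)   # no UI is rendered
    page = browser.new_page()
    page.goto("https://example.com")
    assert "Example" in page.title()             # simple smoke-level check
    browser.close()
```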

39. API Testing

API Testing validates the functionality, reliability, performance, and security of Application Programming Interfaces (APIs). It involves sending requests to endpoints and verifying responses.

Tools like Postman, REST Assured, and SoapUI are commonly used.
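
A minimal API test sketch in Python using the requests library; the endpoint, host, and expected fields are hypothetical:

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder host for illustration


def test_get_user_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)

    # Verify status code, response time, and payload structure.
    assert response.status_code == 200
    assert response.elapsed.total_seconds() < 2
    body = response.json()
    assert body["id"] == 42
    assert "email" in body
```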

40. UI Testing

UI Testing, or User Interface Testing, checks the visual elements of an application such as buttons, menus, and layouts. It ensures they appear and function as intended across devices and browsers.

This testing focuses on visual consistency, alignment, and responsiveness.

41. Code Coverage Testing

Code Coverage Testing measures how much of the source code is exercised by the test suite. It helps identify untested parts of the application and improve test completeness.

Metrics include line coverage, branch coverage, and function coverage.

42. Static Testing

Static Testing involves examining the software's code or documentation without executing the program. It aims to detect errors early in the development process using reviews, walkthroughs, and inspections.

This type of testing helps catch issues like syntax errors, coding standard violations, and poor design.

43. Dynamic Testing

Dynamic Testing evaluates software by executing code to validate functionality and behavior. It detects issues like logic errors, integration faults, and unexpected output during runtime.

This is the most common form of testing and includes unit, system, and acceptance testing.

44. Boundary Value Testing

Boundary Value Testing focuses on checking values at the edge of input ranges, where defects often occur. It helps uncover off-by-one errors and input handling issues.

This technique is especially useful in numeric or range-based input fields.
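
A minimal pytest sketch of boundary value testing for a field that must accept ages 18 through 65 (the validator and the range are hypothetical):

```python
import pytest


def is_eligible_age(age: int) -> bool:
    """Hypothetical rule: ages 18-65 inclusive are accepted."""
    return 18 <= age <= 65


# Boundary values sit at and just beyond each edge of the valid range,
# where off-by-one defects are most likely to hide.
@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),  # just below lower boundary
        (18, True),   # lower boundary
        (19, True),   # just above lower boundary
        (64, True),   # just below upper boundary
        (65, True),   # upper boundary
        (66, False),  # just above upper boundary
    ],
)
def test_age_boundaries(age, expected):
    assert is_eligible_age(age) is expected
```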

45. Equivalence Partitioning

Equivalence Partitioning divides input data into partitions (classes) that the system is expected to handle in the same way, such as valid and invalid ranges. Test cases are derived from each partition to reduce the total number of tests while maintaining coverage.

This method avoids redundant test cases by grouping similar inputs.
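
A minimal sketch of equivalence partitioning for the same kind of age rule used above: one representative value is taken from each partition instead of testing every possible input (the validator and values are illustrative):

```python
import pytest


def is_eligible_age(age: int) -> bool:
    """Hypothetical rule: ages 18-65 inclusive are accepted."""
    return 18 <= age <= 65


# Three partitions: below range (invalid), within range (valid),
# above range (invalid). One representative value per partition.
@pytest.mark.parametrize(
    "age, expected",
    [
        (10, False),  # representative of the "too young" partition
        (40, True),   # representative of the valid partition
        (80, False),  # representative of the "too old" partition
    ],
)
def test_age_partitions(age, expected):
    assert is_eligible_age(age) is expected
```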

46. Decision Table Testing

Decision Table Testing is a technique used to test different input combinations and their corresponding outputs. It is especially useful in applications with complex business logic or rules.

Each column in the table represents a unique test scenario based on conditions and actions.
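
A minimal sketch in which each parametrized row corresponds to one column of a decision table combining two conditions, membership and order value, with its expected action (the business rules are invented for illustration):

```python
import pytest


def discount_percent(is_member: bool, order_total: float) -> int:
    """Hypothetical rules: members get 10%, orders of 100 or more add 5%."""
    rate = 0
    if is_member:
        rate += 10
    if order_total >= 100:
        rate += 5
    return rate


# Each row is one column of the decision table: a unique combination
# of conditions paired with the expected action.
@pytest.mark.parametrize(
    "is_member, order_total, expected_percent",
    [
        (False, 50, 0),     # no conditions met
        (True, 50, 10),     # member only
        (False, 150, 5),    # large order only
        (True, 150, 15),    # both conditions met
    ],
)
def test_discount_decision_table(is_member, order_total, expected_percent):
    assert discount_percent(is_member, order_total) == expected_percent
```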

47. State Transition Testing

State Transition Testing checks how the system transitions from one state to another based on inputs. It is ideal for systems where behavior changes depending on current conditions or events.

This approach uses state diagrams or tables to design test cases.
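
A minimal sketch of state transition testing for a hypothetical order that moves between states in response to events:

```python
import pytest

# Hypothetical state machine: allowed transitions for an order.
TRANSITIONS = {
    ("created", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("created", "cancel"): "cancelled",
}


def apply_event(state: str, event: str) -> str:
    """Return the next state, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"cannot '{event}' from state '{state}'")


def test_valid_transition_path():
    # Walk a valid path through the state diagram.
    assert apply_event("created", "pay") == "paid"
    assert apply_event("paid", "ship") == "shipped"


def test_invalid_transition_is_rejected():
    # Shipping an unpaid order is not an allowed transition.
    with pytest.raises(ValueError):
        apply_event("created", "ship")
```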

48. Use Case Testing

Use Case Testing involves designing test cases based on user interactions or workflows. It helps validate real-life user scenarios from start to finish.

This technique ensures that the application supports user goals as intended.

49. Error Guessing

Error Guessing is a technique where testers use experience and intuition to identify problematic areas in the application. It does not rely on formal test design techniques.

This method is often used after formal testing to uncover hidden defects.

50. Pairwise Testing

Pairwise Testing is a combinatorial testing technique in which test cases are selected so that every possible pair of input parameter values is covered at least once. It achieves broad coverage with far fewer test cases than testing every combination.

This method is ideal for reducing the number of tests in complex input scenarios.

51. Orthogonal Array Testing

Orthogonal Array Testing is a statistical method used to generate test cases with maximum coverage using minimal combinations. It is useful for complex systems with multiple parameters.

This structured approach improves test efficiency and defect detection rate.

52. Model-Based Testing

Model-Based Testing creates test cases from models that represent the desired behavior of the system. These models may include state machines, flowcharts, or activity diagrams.

It ensures that tests reflect both structure and dynamic interactions of the application.

53. Cause-Effect Graphing

Cause-Effect Graphing is a technique that maps causes (input conditions) to effects (output actions). The graph is then converted into a decision table for test case generation.

It is especially helpful for testing complex logic with multiple inputs and conditions.

54. Mutation Testing

Mutation Testing checks the effectiveness of existing test cases by making small changes (mutations) to the code. If the tests fail against a mutated version, the mutant is "killed" and the tests are considered strong; if the tests still pass, the suite has gaps.

It helps assess the quality and robustness of test suites.
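
A minimal illustration of the idea: a "mutant" is produced by changing one operator, and a good test should fail against it (the function and mutation are invented for illustration; in practice tools such as mutmut or PIT generate and run mutants automatically):

```python
# Original code under test.
def is_positive(x: int) -> bool:
    return x > 0


# A mutant produced by changing '>' to '>=' (normally created by a tool).
def is_positive_mutant(x: int) -> bool:
    return x >= 0


def test_zero_is_not_positive():
    # This test "kills" the mutant: it passes against the original
    # but would fail if run against the mutated version.
    assert is_positive(0) is False
```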

55. Error Seeding

Error Seeding involves deliberately adding known bugs to the software to evaluate the efficiency of the testing process. The number of seeded bugs found is used to estimate how many real bugs may remain.

This method provides insights into test coverage and team effectiveness.
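
As a worked example of the estimate (the figures are invented): if 10 defects are seeded and testing finds 8 of them plus 40 real defects, the detection rate is 80%, so the total number of real defects is estimated at 40 / 0.8 = 50, leaving roughly 10 undiscovered:

```python
# Error-seeding estimate with illustrative numbers.
seeded_total = 10        # known defects deliberately inserted
seeded_found = 8         # seeded defects detected by testing
real_found = 40          # genuine defects detected by the same testing

detection_rate = seeded_found / seeded_total             # 0.8
estimated_real_total = real_found / detection_rate        # 50.0
estimated_remaining = estimated_real_total - real_found   # ~10

print(detection_rate, estimated_real_total, estimated_remaining)
```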

56. Visual Testing

Visual Testing ensures that the graphical interface of an application appears correctly and consistently. It compares UI screenshots pixel-by-pixel to detect misalignment or rendering issues.

This technique is widely used in responsive and cross-browser design validation.

57. Snapshot Testing

Snapshot Testing captures the output of components (especially UI) and compares them against a stored snapshot. If the current output differs, the test fails, indicating changes in appearance or behavior.

It is commonly used in frontend frameworks like React.
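
A minimal, framework-agnostic sketch of the snapshot idea in Python: the first run records the rendered output, and later runs fail if the output drifts (the render function and file layout are hypothetical; frontend tools such as Jest provide this mechanism out of the box):

```python
from pathlib import Path

SNAPSHOT = Path("snapshots/greeting.txt")


def render_greeting(name: str) -> str:
    """Hypothetical component whose output we want to pin down."""
    return f"<h1>Hello, {name}!</h1>"


def test_greeting_matches_snapshot():
    output = render_greeting("Ada")
    if not SNAPSHOT.exists():
        # First run: record the current output as the approved snapshot.
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(output)
    # Subsequent runs fail if the rendered output no longer matches.
    assert output == SNAPSHOT.read_text()
```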

58. Assertion Testing

Assertion Testing validates outcomes by using assertions—statements that check whether a condition is true. Assertions are used throughout test scripts to confirm expected vs. actual behavior.

This is a core element of automated testing frameworks like JUnit, TestNG, or PyTest.
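
A minimal sketch contrasting two common assertion styles in Python, plain pytest asserts and unittest's assertion methods (the function and values are illustrative):

```python
import unittest


def total_price(quantity: int, unit_price: float) -> float:
    """Hypothetical function under test."""
    return quantity * unit_price


# pytest style: a bare assert comparing expected vs. actual.
def test_total_price_pytest_style():
    assert total_price(3, 2.5) == 7.5


# unittest style: dedicated assertion methods with readable failure messages.
class TotalPriceTests(unittest.TestCase):
    def test_total_price(self):
        self.assertEqual(total_price(3, 2.5), 7.5)

    def test_zero_quantity(self):
        self.assertAlmostEqual(total_price(0, 9.99), 0.0)
```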

59. Mocking

Mocking involves simulating components or external systems that are not yet available or are difficult to use during testing. It enables testers to isolate the unit under test.

Mocks are essential in unit and integration testing to mimic dependencies.
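
A minimal sketch using Python's unittest.mock to stand in for an external payment gateway so the unit under test runs in isolation (the gateway and function names are hypothetical):

```python
from unittest.mock import Mock


def checkout(cart_total: float, gateway) -> str:
    """Hypothetical unit under test: charges via an external gateway."""
    result = gateway.charge(amount=cart_total)
    return "confirmed" if result["status"] == "ok" else "failed"


def test_checkout_confirms_on_successful_charge():
    # The real gateway (a network service) is replaced with a mock.
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}

    assert checkout(49.99, gateway) == "confirmed"
    # Mocks also let us verify how the dependency was called.
    gateway.charge.assert_called_once_with(amount=49.99)
```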

60. Stubbing

Stubbing is a technique where simplified implementations of functions or modules are used in place of the real ones. It helps isolate test logic and control the behavior of external dependencies.

Stubs often return hardcoded values to simulate specific conditions.
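
A minimal stub sketch: a hand-written stand-in returns a hardcoded value so the logic under test can be driven through a specific condition (the service and function names are hypothetical):

```python
class ExchangeRateServiceStub:
    """Stub for a real currency service; always returns a fixed rate."""

    def get_rate(self, from_currency: str, to_currency: str) -> float:
        return 1.10  # hardcoded value simulating a known condition


def convert(amount: float, rate_service) -> float:
    """Hypothetical unit under test."""
    return round(amount * rate_service.get_rate("EUR", "USD"), 2)


def test_convert_uses_service_rate():
    assert convert(100.0, ExchangeRateServiceStub()) == 110.0
```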

61. Driver Script

A driver script is a central control file in automated testing that calls various test scripts and components. It manages the flow of execution and handles input, output, and logging.

This is especially useful in data-driven or modular frameworks.

62. Test Harness

A test harness is a collection of test data, scripts, and utilities used to automate the testing process. It provides a consistent testing environment and reports results for validation.

It often includes drivers and stubs to simulate missing components.

63. Test Data

Test data consists of input values used during test execution to verify software behavior. It can be static, generated, or fetched from external files, and is crucial for meaningful test cases.

Managing test data effectively improves test accuracy and repeatability.

64. Bug Report

A bug report documents an issue found during testing, including steps to reproduce, expected and actual results, severity, and screenshots. Clear bug reports accelerate the debugging process.

They are typically created in tools like Jira, Bugzilla, or Azure DevOps.

65. Defect Life Cycle

The defect life cycle describes the stages a bug goes through—from identification to closure. Common stages include New, Assigned, In Progress, Fixed, Retested, and Closed.

Managing this cycle ensures traceability and accountability of defects.

66. Test Plan

A test plan is a formal document outlining the strategy, scope, resources, schedule, and deliverables for the testing process. It serves as a blueprint for how testing will be conducted.

It ensures clarity among teams and aligns testing with project goals.

67. Test Case

A test case defines a specific input, execution condition, and expected result used to verify a particular software feature. It includes steps to follow, data to input, and outcomes to validate.

Well-written test cases improve test coverage and reproducibility.

68. Test Scenario

A test scenario is a high-level description of what needs to be tested. It often represents a user story or use case and may encompass multiple test cases.

Scenarios help testers focus on the goal of the feature rather than technical steps.

69. Test Strategy

A test strategy outlines the overall approach to testing, including test types, environments, tools, and automation plans. It is often a part of the master test plan and guides decision-making.

This ensures consistency across different teams and releases.

70. Test Summary Report

The test summary report is a document generated at the end of a testing cycle. It summarizes the testing activities, results, metrics, issues, and recommendations.

This report helps stakeholders evaluate product readiness.

71. Severity

Severity defines the impact of a defect on the application’s functionality or system operation. It is assigned by the tester and ranges from critical to low based on how badly the feature is broken.

Severity guides prioritization and defect triaging.

72. Priority

Priority indicates how urgently a defect should be fixed. It is typically assigned by the project or product manager based on release schedules, business needs, and impact.

Priority may differ from severity in some cases.

73. Test Log

A test log is a detailed record of test execution, capturing actions performed, environment details, results, and any anomalies encountered. Logs are vital for debugging and audit purposes.

Automation tools often generate logs automatically.

74. Traceability Matrix

The traceability matrix maps requirements to their corresponding test cases, ensuring that every requirement is covered by at least one test and none is missed during testing.

This matrix is also useful for impact analysis and audit trails.

75. Build Verification Testing (BVT)

BVT, also known as a build acceptance test or smoke test, is a subset of tests run on each new build to ensure that it is stable enough for further testing.

It provides quick feedback and avoids wasting time on broken builds.

76. Environment Setup

Environment setup involves configuring hardware, software, network, and tools required to test an application. This includes operating systems, browsers, databases, and testing frameworks.

Proper setup reduces flaky test results and execution issues.

77. Test Environment

The test environment is a replica of the production or staging system where testing is performed. It includes the application, servers, test data, configurations, and third-party integrations.

Stable environments reduce false positives and increase test reliability.

78. CI/CD

CI/CD (Continuous Integration/Continuous Deployment) is a DevOps practice that automates code integration, testing, and delivery. It speeds up the release process and reduces manual errors.

Testing is a key component of the CI/CD pipeline to ensure release readiness.

79. Code Review

Code review is the manual examination of code by developers to detect defects, improve quality, and share knowledge. It often occurs before testing and reduces the number of bugs caught later.

Tools like GitHub and Bitbucket simplify review processes with pull requests.

80. Shift Left Testing

Shift Left Testing refers to moving testing activities earlier in the software development lifecycle. It encourages early validation, continuous integration, and collaboration between developers and testers.

This approach helps reduce defects and shortens feedback loops.

81. Shift Right Testing

Shift Right Testing involves testing in the production environment or post-deployment phase. It emphasizes real-world usage, monitoring, and user feedback to uncover issues missed earlier.

This approach is useful for performance validation and A/B testing.

82. Chaos Testing

Chaos Testing involves deliberately injecting failures into a system to test its resilience and recovery capabilities. It helps identify weaknesses in distributed systems and cloud infrastructure.

This technique is often used in DevOps and site reliability engineering (SRE).

83. A/B Testing

A/B Testing is a technique to compare two versions of a webpage or feature to determine which performs better based on user behavior. It’s widely used in UI/UX optimization.

It relies on statistical analysis of real user interactions.

84. Canary Testing

Canary Testing is the practice of releasing a new version of software to a small subset of users before rolling it out to the entire user base. It minimizes risk by allowing early issue detection.

It's commonly used in continuous delivery pipelines.

85. Smoke Testing

Smoke Testing, also known as “build verification testing,” is a quick set of tests to check basic functionality of an application. It ensures critical paths work and the build is stable enough for deeper testing.

This testing is performed after each new build.

86. Sanity Testing

Sanity Testing is a brief run-through to verify that a specific feature or bug fix is working as expected. It is typically performed after receiving a new software build with minor changes.

It is narrower and more focused than smoke testing.

87. Exploratory Testing

Exploratory Testing involves simultaneous learning, test design, and execution without predefined test cases. It relies on the tester’s intuition and knowledge to uncover unexpected issues.

This type of testing is flexible and often uncovers edge cases.

88. Ad-hoc Testing

Ad-hoc Testing is informal testing without planning or documentation. It focuses on breaking the system using random scenarios, often guided by tester creativity and past experience.

It helps find bugs that structured testing might miss.

89. Benchmark Testing

Benchmark Testing evaluates an application’s performance against industry standards or competitors. It establishes a baseline to measure future improvements or regressions.

Common metrics include throughput, response time, and CPU usage.

90. Certification Testing

Certification Testing ensures that a software product complies with industry standards and receives formal approval. It is common in domains like telecommunications, healthcare, and finance.

This testing is usually done by third-party organizations.

91. Regression Testing

Regression Testing verifies that recent code changes have not adversely affected existing functionality. It is crucial for ensuring long-term stability of software as it evolves.

Automated regression suites are ideal for frequent updates.

92. Retesting

Retesting is done to verify that specific defects have been fixed. Unlike regression testing, which checks surrounding areas, retesting focuses only on the failed test cases from previous runs.

This ensures the reported issue is resolved correctly.

93. Compatibility Testing

Compatibility Testing checks whether an application works as expected across different browsers, devices, networks, and operating systems. It ensures a consistent experience for all users.

This is critical for web and mobile apps.

94. Latency Testing

Latency Testing measures the time delay between a user action and the system’s response. It helps evaluate application responsiveness, especially in real-time systems like video conferencing apps.

This is often part of performance and load testing.

95. Load Testing

Load Testing determines how a system behaves under expected user loads. It ensures the application can perform reliably when multiple users access it simultaneously.

This helps identify bottlenecks before production use.

96. Spike Testing

Spike Testing examines the system’s performance when sudden and extreme changes in load occur. It checks how well the system handles traffic surges or drop-offs.

This helps prepare for scenarios like flash sales or viral traffic spikes.

97. Soak Testing

Soak Testing, or endurance testing, checks how the system performs under sustained load over a long period. It identifies issues like memory leaks, database saturation, or performance degradation.

This test helps ensure long-term stability and reliability.

98. Volume Testing

Volume Testing evaluates how the system handles large volumes of data, rather than user load. It tests the application's ability to process, store, and retrieve massive datasets efficiently.

It helps uncover issues like slow queries and data loss.

99. End-to-End Testing

End-to-End Testing validates complete workflows by simulating real user scenarios from start to finish. It ensures that all components of the system work together as expected.

This type of testing often involves multiple systems and integrations.

100. User Acceptance Testing (UAT)

User Acceptance Testing is conducted by end users or stakeholders to validate that the system meets business requirements. It occurs at the end of the development cycle before go-live.

UAT provides the final approval for release.