Test Case Design: Testers create detailed test cases based on the software requirements and specifications. Each test case outlines step-by-step instructions, the input data to use, and the expected result for a specific feature or functionality.
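As an illustration, a manual test case is often captured in a structured form such as the sketch below. The field names and the login scenario are assumptions for illustration, not part of any particular standard or tool.

```python
# A minimal sketch of how a manual test case might be captured as structured
# data. The field names and the login scenario are illustrative assumptions.
test_case = {
    "id": "TC-001",
    "title": "Login with valid credentials",
    "preconditions": ["A registered user account exists"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the 'Sign in' button",
    ],
    "expected_result": "The user is redirected to the dashboard",
}

# Testers walk through the steps in order and compare the observed behaviour
# against expected_result, recording pass/fail for the run.
print(f"{test_case['id']}: {test_case['title']} ({len(test_case['steps'])} steps)")
```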
Test Execution: Testers manually execute the test cases, interacting with the software as end-users would. This involves checking the user interface, inputting data, and verifying that the software behaves correctly under different conditions.
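The outcome of each manual run is typically logged per step so that failures can be traced back to a specific action. A minimal sketch of such a record follows; the fields and values are assumptions, not a prescribed format.

```python
# Sketch of recording the outcome of a manual test run; the structure and
# values are illustrative assumptions, not a prescribed format.
from datetime import date

test_run = {
    "test_case_id": "TC-001",
    "executed_by": "QA tester",
    "date": date.today().isoformat(),
    "step_results": ["pass", "pass", "fail"],   # one entry per test step
    "notes": "Step 3: 'Sign in' button stayed disabled after valid input",
}

# The overall run fails if any individual step failed.
test_run["status"] = "fail" if "fail" in test_run["step_results"] else "pass"
print(f"{test_run['test_case_id']} -> {test_run['status']}")
```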
Functional Testing: This type of testing focuses on validating that each function of the software performs according to the requirements. It includes unit testing, integration testing, and system testing.
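For context, a functional check at the unit level often looks like the sketch below, written with pytest as the assumed test runner. The apply_discount function and the 20% requirement are hypothetical examples, not taken from any specific codebase.

```python
# Sketch of a unit-level functional check using pytest (assumed tooling).
# apply_discount and its requirement are hypothetical, for illustration only.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_matches_requirement():
    # Requirement (assumed): a 20% discount on 50.00 should yield 40.00.
    assert apply_discount(50.00, 20) == 40.00


def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(50.00, 150)
```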
User Interface Testing: Testers evaluate the graphical user interface (GUI) to ensure it is user-friendly, visually appealing, and aligns with design specifications.
Usability Testing: This involves assessing how easy it is for end-users to interact with the software. Usability testing considers factors such as navigation, intuitiveness, and overall user experience.
Regression Testing: After changes or updates to the software, testers perform regression testing to ensure that existing functionalities have not been adversely affected.
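When regression checks are automated to supplement manual re-testing, they commonly pin down existing behaviour so a later change cannot alter it unnoticed. The sketch below assumes a hypothetical format_username helper purely for illustration.

```python
# Regression sketch: pin existing behaviour so later changes can't silently
# break it. format_username is a hypothetical helper for illustration.

def format_username(raw: str) -> str:
    """Normalise a username the way the current release does."""
    return raw.strip().lower()


def test_existing_username_behaviour_is_unchanged():
    # These expectations mirror behaviour verified in the previous release;
    # if a refactor changes any of them, the regression run flags it.
    assert format_username("  Alice ") == "alice"
    assert format_username("BOB") == "bob"
    assert format_username("carol") == "carol"
```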
Error or Bug Reporting: Testers document and report any discrepancies or defects they encounter during testing. This information is crucial for developers to understand and fix issues.
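A defect report is usually structured so developers can reproduce the problem quickly. The sketch below shows one possible shape; the fields and the scenario are assumptions rather than a mandated template.

```python
# Sketch of a structured defect report; fields and values are illustrative.
bug_report = {
    "id": "BUG-142",
    "summary": "Login button unresponsive after failed attempt",
    "severity": "major",
    "environment": "Chrome 120 on Windows 11",
    "steps_to_reproduce": [
        "Open the login page",
        "Submit an incorrect password",
        "Correct the password and click 'Sign in' again",
    ],
    "expected": "The user is logged in",
    "actual": "The button does nothing; no request is sent",
}

print(f"[{bug_report['severity'].upper()}] {bug_report['id']}: {bug_report['summary']}")
```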
Ad Hoc Testing: Testers may also perform spontaneous, unplanned testing to explore the software and identify potential issues that may not be covered by formal test cases.
Compatibility Testing: Testers verify that the software functions correctly across different environments, devices, browsers, and operating systems.
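Automated support for compatibility checks often parameterises the same test over several target environments. In the sketch below, the environment list and the page_renders_correctly helper are assumptions standing in for real browser or device automation.

```python
# Compatibility sketch: run the same check against several environments.
# The environment list and page_renders_correctly are placeholders; real
# runs would drive actual browsers or devices.
import pytest

ENVIRONMENTS = [
    ("Chrome", "Windows 11"),
    ("Firefox", "Ubuntu 24.04"),
    ("Safari", "macOS 15"),
]


def page_renders_correctly(browser: str, os_name: str) -> bool:
    # Placeholder for a real check (e.g. launching the browser and
    # comparing the rendered page against a baseline).
    return True


@pytest.mark.parametrize("browser,os_name", ENVIRONMENTS)
def test_homepage_compatibility(browser, os_name):
    assert page_renders_correctly(browser, os_name)
```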
Performance Testing: Although performance testing is often automated, certain aspects may involve manual intervention, such as analyzing the user experience under varying loads.
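Even when load generation is automated, testers often review raw timings by hand. A minimal timing sketch follows, assuming a hypothetical fetch_dashboard operation and illustrative percentile reporting.

```python
# Minimal response-time sketch; fetch_dashboard is a stand-in for the
# operation under test, and the reported metrics are illustrative only.
import time
import statistics


def fetch_dashboard() -> None:
    # Placeholder for the real operation (e.g. an HTTP request).
    time.sleep(0.05)


def measure(runs: int = 20) -> None:
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch_dashboard()
        timings.append(time.perf_counter() - start)
    print(f"median: {statistics.median(timings) * 1000:.1f} ms")
    print(f"p95:    {sorted(timings)[int(runs * 0.95) - 1] * 1000:.1f} ms")


if __name__ == "__main__":
    measure()
```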
User Acceptance Testing (UAT): In UAT, end-users or stakeholders validate that the software meets their business requirements before it is released.
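UAT outcomes are commonly tracked against business-level acceptance criteria before sign-off. The sketch below shows one way to tally the results; the criteria and their pass/fail states are illustrative assumptions.

```python
# Sketch of tallying UAT sign-off against business acceptance criteria.
# The criteria and results below are illustrative assumptions.
acceptance_criteria = {
    "Customer can place an order end to end": True,
    "Monthly sales report matches finance figures": True,
    "Order confirmation email arrives within 5 minutes": False,
}

passed = sum(acceptance_criteria.values())
total = len(acceptance_criteria)
print(f"UAT: {passed}/{total} criteria accepted")

if passed == total:
    print("Stakeholders can sign off for release.")
else:
    failed = [c for c, ok in acceptance_criteria.items() if not ok]
    print("Outstanding items:", "; ".join(failed))
```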