Quality Control involves the measurement of actual quality and determination of defects in a given program. Our Quality Control service is carried out in four main stages: service request, service agreement, service delivery, and service delivery completion, which is followed by a test report.
Quality Control Service Request
The service request is submitted to the Quality Control manager by the customer through a completed service request form. The Quality Control manager reviews the request and prepares an estimate for the service based on the scope and level of activities requested by the customer. The estimate is negotiated between the Quality Control manager and the customer and leads to the computation of the QC budget for the project. The different levels of service are shown below.
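As a rough illustration of how the negotiated estimate might translate into a budget figure, the sketch below prices a request by the service levels it includes. The level names, hour counts, and hourly rate are invented for the example and are not part of the documented process.

```python
# Hypothetical QC budget estimate: hours per service level and the
# hourly rate below are illustrative assumptions only.
LEVEL_HOURS = {
    "smoke": 8,
    "sanity": 24,
    "full_regression": 80,
    "performance": 40,
}

def qc_budget(requested_levels, hourly_rate):
    """Sum the estimated hours for each requested level and price them."""
    hours = sum(LEVEL_HOURS[level] for level in requested_levels)
    return hours * hourly_rate

print(qc_budget(["smoke", "full_regression"], 95))  # 88 hours * $95 = 8360
```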
- Smoke/Quick: Tests basic navigation and high-level functionality, checking for obvious visual bugs.
- Sanity/Partial Regression: Confirms that a recent program or code change has not adversely affected existing features, and executes Test Cases based on criticality.
- Full Regression: Confirms that a recent program or code change has not adversely affected existing features. Test Cases are executed for all areas regardless of criticality.
- Requirements Verification: Examines proposed requirements, features, and changes to ensure they are testable.
- Performance Monitoring: While testing, observes response times and durations and reports anything with an unusually long response.
- Installation: Tests your application installation instructions.
- Ad hoc/Exploratory: Performs an informal assessment of the application.
- Functional: Assures that each element of the application meets the functional requirements.
- System: After receiving the fully integrated application (All features are released to testing), performs End-to-End scenario testing of the completed application.
- Stress/Load: Focuses solely on the load and stress of a stable system to evaluate its robustness against a defined standard or target.
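The difference between Sanity/Partial Regression and Full Regression above amounts to filtering the test case pool by criticality. A minimal sketch of that selection, assuming an invented criticality scale and test case structure (neither is defined by the process itself):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    criticality: int  # assumed scale: 1 = critical, 3 = minor

def select_cases(cases, level):
    """Sanity runs only critical cases; full regression runs everything."""
    if level == "full_regression":
        return cases
    if level == "sanity":
        return [c for c in cases if c.criticality == 1]
    raise ValueError(f"unknown level: {level}")

pool = [TestCase("login", 1), TestCase("export_csv", 3), TestCase("checkout", 1)]
print([c.name for c in select_cases(pool, "sanity")])  # ['login', 'checkout']
```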
Segue Internal Service Level Agreement (SISLA)
Segue’s internal service level agreement (SISLA) represents the agreement made between the Project Manager (PM) and the QC manager to facilitate effective coordination and adherence to the stated requirements. The agreement is initiated by the start of a QC effort for a Segue project and ends when either the project manager or QC manager decides to terminate it. The decision to terminate the agreement may result from a change of responsibilities or failure to adhere to project requirements by either of the agreeing parties. Disputes arising from non-compliance with the terms and conditions of a SISLA are resolved by executive management.
A SISLA specifies the requirements of the customer and the requirements of the QC manager that facilitate a complete execution of the service request. It is the PM’s primary duty to provide the QC team with all the system requirements, including the business rules and tool requirements. In addition, the PM has a duty to inform the QC team about the technical requirements of the system, which include the database design, database platform, browser type(s), software (SW) language, operating system, and hardware. The QC lead, on the other hand, is responsible for updating the PM and providing a plan for execution of the SISLA and service requirements. The QC lead also conducts a series of tests to measure the quality of and identify any defects in the system.
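The technical requirements the PM hands to the QC team could be captured in a structured record so that missing items are caught before testing starts. A sketch under that assumption (the field names are drawn from the list above; the validation helper is hypothetical, not part of the SISLA):

```python
from dataclasses import dataclass, fields

@dataclass
class TechnicalRequirements:
    """Technical inputs the PM provides to the QC team per the SISLA."""
    database_design: str
    database_platform: str
    browsers: list
    software_language: str
    operating_system: str
    hardware: str

def missing_fields(req):
    """Return the names of any fields left empty, for the QC lead to chase."""
    return [f.name for f in fields(req) if not getattr(req, f.name)]

req = TechnicalRequirements("ERD v2", "PostgreSQL", ["Chrome", "Firefox"],
                            "Java", "Linux", "")
print(missing_fields(req))  # ['hardware']
```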
Service delivery is a series of activities and tests conducted by the QC team to ensure the full execution of the SISLA and service requirements. This process begins soon after the approval of the service request. The QC manager verifies the level of skills and the human resources available to carry out the specified tasks. Based on the available resources, the QC manager designs the Test Strategy Document, which facilitates execution of all required tasks. This is followed by task analysis, which involves identifying the level of testing needed, developing a task plan, and allocating suitable time. The QC manager assigns the tasks to the members of the QC team according to their knowledge and skills. Once the tasks are assigned to QC team members, the QC lead designs the test by performing three major tasks, namely requirement verification, functional risk verification, and test case generation. The team sets up a production-like environment, which helps in the management of test data.
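The three test-design tasks named above (requirement verification, functional risk verification, test case generation) can be pictured as a simple pipeline. The sketch below is illustrative only; the requirement fields and risk scores are assumptions, not part of the documented process:

```python
# Sketch of the three test-design tasks: verify requirements are testable,
# rank them by functional risk, then generate test cases. All data assumed.
def design_tests(requirements):
    verified = [r for r in requirements if r["testable"]]             # requirement verification
    ranked = sorted(verified, key=lambda r: r["risk"], reverse=True)  # functional risk verification
    return [f"TC-{i + 1}: validate {r['name']}"                       # test case generation
            for i, r in enumerate(ranked)]

reqs = [
    {"name": "login", "testable": True, "risk": 9},
    {"name": "ui polish", "testable": False, "risk": 2},
    {"name": "export", "testable": True, "risk": 5},
]
print(design_tests(reqs))  # ['TC-1: validate login', 'TC-2: validate export']
```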
The test team uses the test risk analysis results and test strategy to perform the actual tests. The test execution cycle consists of three phases: entry criteria, core validation, and exit criteria. The test team conducts a “lessons learned” exercise that is scheduled two days after the completion of every test cycle. The lessons learned session helps collect critical information for improving testing in subsequent cycles and/or projects, such as:
- Factors that resulted in the extemporaneous (unplanned) performance of some tests
- Issues arising from the transition between development and testing phases
- Detection of recurring defects and risks missed from the previous test cycles
- The type and level of training needed.
Solutions for resolving defects are identified, and decisions about improvements for the subsequent cycle are made.
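The three-phase execution cycle described above can be sketched as a gate-checked run: the cycle starts only when entry criteria hold, core validation records defects, and exit criteria decide whether the cycle can close. The specific criteria and thresholds below are illustrative assumptions:

```python
def run_cycle(entry_ok, tests, max_open_defects=0):
    """Run one test cycle: gate on entry criteria, validate, gate on exit."""
    if not entry_ok:                                           # entry criteria
        return "blocked: entry criteria not met"
    defects = [name for name, passed in tests if not passed]   # core validation
    if len(defects) > max_open_defects:                        # exit criteria
        return f"exit criteria failed: open defects {defects}"
    return "cycle complete"

print(run_cycle(True, [("login", True), ("export", True)]))  # cycle complete
```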
Service Delivery Completion
After completing the service delivery phase, the team submits a QC report to the PM to furnish them with information regarding the state of the system and the lessons learned during the test. Delivery of the Test Project Report by the QC manager/lead, and its receipt by the PM, finalizes the service delivery process unless a new service request is made to initiate a new agreement.