ISTQB Certification Syllabus: Certified Tester Foundation Level, covering the Fundamentals of Testing, Testing Throughout the Life Cycle, Static Testing, Test Design Techniques, Test Management, and Tool Support for Testing.
The ISTQB Certified Tester Foundation Level (CTFL) certification provides essential testing knowledge that can be put to practical use and, very importantly, explains the terminology and concepts that are used worldwide in the testing domain.
The CTFL certification is the recognized prerequisite for all other ISTQB certifications that require Foundation Level knowledge.
ISTQB Foundation Level Certification (CTFL)
The Foundation Level certification is suitable for anyone who needs to demonstrate practical knowledge of the fundamental concepts of software testing, including people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers, and software developers.
Exam Mode: Online; Exam Duration: 60 minutes; Exam Pattern: Multiple Choice Questions; Total Marks: 40; Pass Marks: 26 (65%).
ISTQB Certification Syllabus
i. The Fundamentals of Testing
- Why is testing necessary?
- What is Testing?
- Testing Principles
- Fundamental Test Process
- The psychology of Testing
ii. Testing Throughout the Life Cycle
- Software development models
- Test Levels (e.g., unit/component testing, integration testing, etc.)
- Test types (Functional, non-functional, structural, change-related testing)
- Maintenance testing
iii. Static Testing
- Reviews and the Test process
- Review Process
- Static analysis by tools
iv. Test Design Techniques
- Identifying test conditions and designing test cases
- Categories of test design techniques
- Specification-based or Black Box techniques (e.g., Boundary Value Analysis, Equivalence Partitioning)
- Structure-based or White Box techniques
- Experience-based techniques (error guessing and exploratory testing)
- Choosing test techniques
v. Test Management
- Test organization
- Test Plans, estimates and strategies
- Test progress, monitoring and control
- Configuration management
- Risk and testing
- Incident management
vi. Tool support for Testing
- Types of test tools
- Effective use of tools
- Introducing a tool into an organization
Seven Testing Principles
A number of testing principles have been suggested over the past 50 years and offer general guidelines common to all testing.
1. Testing shows the presence of defects, not their absence
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness.
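The principle can be illustrated with a small, deliberately buggy Python sketch (the `add` function below is hypothetical, chosen only because its defect hides behind certain inputs):

```python
def add(a, b):
    # Deliberately buggy: multiplies instead of adding.
    return a * b

# Both tests pass, yet the function is wrong for most inputs;
# passing tests show only that no defect was found, not that none exist.
assert add(2, 2) == 4   # 2 * 2 happens to equal 2 + 2
assert add(0, 0) == 0   # 0 * 0 happens to equal 0 + 0

# A different input finally shows the presence of the defect:
print(add(2, 3) == 5)   # False: add(2, 3) returns 6
```

However many tests pass, they only sample the input space; a single new input can still reveal a defect.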
2. Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts.
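A quick back-of-the-envelope calculation shows why. Assuming a hypothetical program with just two independent 32-bit integer inputs, and a generous rate of one billion test executions per second:

```python
# Two independent 32-bit integer inputs
combinations = (2 ** 32) ** 2            # 2^64, about 1.8 * 10^19
tests_per_second = 1_000_000_000         # optimistic assumption
seconds = combinations / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.0f} years to test every combination")
```

The result is on the order of hundreds of years for even this trivial interface, which is why risk analysis and techniques such as equivalence partitioning are used to select a representative subset instead.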
3. Early testing saves time and money
To find defects early, both static and dynamic test activities should be started as early as possible in the software development lifecycle. Early testing is sometimes referred to as shift left. Testing early in the software development lifecycle helps reduce or eliminate costly changes.
4. Defects cluster together
A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort.
5. Beware of the pesticide paradox
If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.
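A minimal sketch of the paradox (the `is_adult` function and its off-by-one defect are hypothetical):

```python
def is_adult(age):
    # Hypothetical implementation with an off-by-one boundary defect:
    # the check should be `age >= 18`.
    return age > 18

# The same two tests, re-run on every build, keep passing
# and stop finding anything new:
assert is_adult(30) is True
assert is_adult(5) is False

# Refreshing the test data with a boundary value exposes the defect:
print(is_adult(18))   # False, although 18 should count as an adult
```

Updating the test data, here with a boundary value, is exactly the kind of refresh this principle calls for.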
6. Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential software development lifecycle project.
7. Absence-of-errors is a fallacy
Some organizations expect that testers can run all possible tests and find all possible defects, but principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfill the users’ needs and expectations, or that is inferior compared to other competing systems.