Inadequate software testing costs an estimated $59 billion per year in the US alone.
Software testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of the software or service under test.
Testing consists of the dynamic verification of the behaviour of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the specified expected behaviour.
Validation: does the software system meet the user’s real needs?
Verification: does the software system meet the requirements specifications?
Sources of software problems:
● Requirements definition
● Design
● Implementation
● Support systems
● Inadequate testing
● Evolution
Static analysis: determines/estimates software quality without reference to actual execution
Dynamic analysis: ascertains/approximates software quality through actual execution
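A minimal sketch of the contrast (the divide snippet and both checks are illustrative, not from any particular tool):

```python
import ast

source = "def divide(a, b):\n    return a / b\n"

# Static analysis: inspect the code without running it.
# Here we merely flag division expressions that could divide by zero.
tree = ast.parse(source)
divisions = [n for n in ast.walk(tree)
             if isinstance(n, ast.BinOp) and isinstance(n.op, ast.Div)]
print(f"static: found {len(divisions)} division(s) that might raise ZeroDivisionError")

# Dynamic analysis: actually execute the code on concrete inputs.
namespace = {}
exec(source, namespace)
try:
    namespace["divide"](1, 0)
except ZeroDivisionError:
    print("dynamic: executing with b=0 raises ZeroDivisionError")
```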
Functional testing:
● black box testing
● external behaviour
● based on requirements specifications
● any level of granularity where spec is available
● benefit from rich domain & application knowledge
● can identify omissions
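As an illustration, a black-box suite derived purely from a requirements specification ("leap years are divisible by 4, except centuries, unless divisible by 400"); the is_leap_year function is a hypothetical unit under test:

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Unit under test; for black-box testing only its spec matters."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearSpecTests(unittest.TestCase):
    # Test cases chosen from the requirements, not from the code.
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```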
Structural testing:
● white box testing
● internal structure
● focused on specific statements/branches/paths
● evaluated against coverage criteria
● tied to specific program structures
● deep knowledge of actual algorithms
● can find surprises
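By contrast, a minimal white-box sketch: inputs are chosen by reading the code so that every branch is exercised (the classify function is illustrative; branch coverage could be measured with a tool such as coverage.py):

```python
def classify(x: int) -> str:
    if x < 0:               # branch point 1
        return "negative"
    if x == 0:              # branch point 2
        return "zero"
    return "positive"

# One input per branch outcome, aiming for 100% branch coverage:
assert classify(-5) == "negative"   # x < 0 taken
assert classify(0) == "zero"        # x < 0 not taken, x == 0 taken
assert classify(3) == "positive"    # both conditions false
```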
Model-based testing:
● use models of aspects of the system and its behaviour to guide test case generation
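A rough sketch of the idea: a hypothetical turnstile modelled as a finite state machine, with test sequences generated by walking paths through the model:

```python
import itertools

# Model of the expected behaviour: (state, event) -> next state.
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def generate_tests(start: str, length: int):
    """Enumerate all event sequences of a given length, with the
    state sequence the model predicts for each."""
    for seq in itertools.product(["coin", "push"], repeat=length):
        state, expected = start, []
        for event in seq:
            state = MODEL[(state, event)]
            expected.append(state)
        yield list(seq), expected

# Each generated pair becomes a test case: replay the events against
# the real system and compare its states with the model's prediction.
for events, expected_states in generate_tests("locked", 2):
    print(events, "->", expected_states)
```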
Failure: the manifested inability of a system to perform a required function
Fault: incorrect/missing code
Error: the human action that produces a fault
Testing: triggering failures
Debugging: finding faults given a failure
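A tiny illustration tying the terms together (the mean function is hypothetical):

```python
def mean(values):
    # Error: the human slip made while writing this line.
    # Fault: the resulting incorrect code (denominator is off by one).
    return sum(values) / (len(values) - 1)   # should be len(values)

# Failure: the manifested deviation observed when the fault executes.
result = mean([2, 4])
print(f"expected 3.0, got {result}")   # prints 6.0
# Debugging works backwards from this failure to locate the fault.
```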
Observability: can I see what’s going on inside?
Controllability: can I influence what’s happening?
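Both properties can be designed in; a sketch using a hypothetical Session class whose clock is injectable:

```python
import time

class Session:
    """Hypothetical session with a time-to-live.

    Injecting `now` makes time controllable in tests; exposing
    `expired()` makes the internal decision observable.
    """
    def __init__(self, ttl_seconds: float, now=time.monotonic):
        self._now = now
        self._deadline = now() + ttl_seconds

    def expired(self) -> bool:
        return self._now() >= self._deadline

# In a test, a fake clock gives full control over "time":
fake_time = [0.0]
session = Session(ttl_seconds=60, now=lambda: fake_time[0])
assert not session.expired()   # observe: still valid at t=0
fake_time[0] = 61.0            # control: jump past the deadline
assert session.expired()       # observe: now expired
```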
Testing challenges:
● Explosion of input combinations
● Explosion of path combinations
● Most interesting properties undecidable
● Faults may be hard to reach
● Faults may hide (observability)
● Faults may be hard to execute at will (controllability)
● Behaviour may depend on collected data & the external environment
Exploratory testing:
● emphasizes the personal freedom and responsibility of the individual tester
● test-related learning, design, execution, and result interpretation run in parallel throughout the project
● Goals:
○ gain understanding of how a system works
○ force software to exhibit its capabilities
○ find bugs
● Touring tests:
○ Guidebook: use the manual
○ Money: money-generating features
○ Landmark: key features
○ Museum: legacy features
○ Rained-out: start and cancel operations
○ Couch potato: do as little as possible
○ Antisocial: known bad inputs
● Charters:
○ name, time/date, duration, tester
○ target
○ data/resources
○ test notes
Limitations of manual testing:
● black box testing
● UI required
● not easily repeatable
● not systematic
Law of Continuing Change: a system that is used undergoes continuing change until it becomes more economical to replace it by a new or restructured system
Law of Increasing Entropy: the entropy of a system increases with time unless specific work is executed to maintain or reduce it
Test automation:
● Pros:
○ facilitates continuous regression testing
○ more test cases
○ more advanced test cases
○ repeatable test suites
● Cons:
○ cost of setting up test infrastructure
○ maintenance cost of test suites
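A minimal sketch of an automated regression suite in pytest style (the apply_discount function is hypothetical); once written, it runs identically on every change, e.g. via `pytest` in CI:

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# Parameterization makes adding "more test cases" cheap over time.
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0, 100.0),
    (100.0, 25, 75.0),
    (19.99, 10, 17.99),
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```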
Properties of models:
● compact
● predictive
● semantically meaningful: permits diagnosis of the causes of failure
● sufficiently general
Intraprocedural control flow graphs:
● nodes are basic blocks of source code, each with a single entry and a single exit point
● directed edges are possible transfers of control between basic blocks
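A small illustration over a hypothetical function, with the graph rendered as an adjacency mapping of its basic blocks:

```python
# def abs_diff(a, b):
#     if a > b:          # B1: entry + condition
#         d = a - b      # B2: true branch
#     else:
#         d = b - a      # B3: false branch
#     return d           # B4: join + exit

CFG = {
    "B1": ["B2", "B3"],   # condition true -> B2, false -> B3
    "B2": ["B4"],
    "B3": ["B4"],
    "B4": [],             # exit block
}
```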
Linear code sequence and jump (LCSAJ):
● subpaths of the control flow graph that start at the program entry or a jump target, proceed through sequential code, and end with a jump
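Continuing the abs_diff example above, a block-level approximation of its LCSAJs as (start, end of linear sequence, jump target) triples (strictly, LCSAJs are defined over source lines rather than basic blocks):

```python
LCSAJS = [
    ("B1", "B1", "B3"),    # condition false: jump from B1 to the else branch
    ("B1", "B2", "B4"),    # fall through B1 into B2, then jump to the join
    ("B3", "B4", "exit"),  # start at jump target B3, run to the return
    ("B4", "B4", "exit"),  # start at jump target B4, run to the return
]
```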
Interprocedural control flow graphs:
● between functions, not within
● aka call graphs
● context-sensitive: distinguishes calls made in different calling contexts (e.g., with different actual parameters)
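A sketch with three hypothetical functions: a context-insensitive call graph merges both calls to tokenize into one node, while a context-sensitive one keeps them apart per calling context:

```python
def tokenize(s):
    return s.split()

def parse(text):
    return tokenize(text)        # call site 1

def render(node):
    return tokenize(str(node))   # call site 2

# Context-insensitive call graph: one node per function.
CALL_GRAPH = {
    "parse": ["tokenize"],
    "render": ["tokenize"],
    "tokenize": [],
}

# A context-sensitive call graph would instead distinguish the two
# tokenize calls by their calling context:
#   parse -> tokenize@parse,  render -> tokenize@render
```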