Test Case Design

    Introduction

    Can explain the need for deliberate test case design

Except for trivial SUTs (Software Under Test), exhaustive testing (i.e., testing all possible cases) is not practical because such testing often requires a massive/infinite number of test cases.

Consider the test cases for adding a string object to a collection (e.g., a Java ArrayList or a Python list):

    • Add an item to an empty collection.
    • Add an item when there is one item in the collection.
    • Add an item when there are 2, 3, .... n items in the collection.
    • Add an item that has an English, a French, a Spanish, ... word.
    • Add an item that is the same as an existing item.
    • Add an item immediately after adding another item.
    • Add an item immediately after system startup.
    • ...

    Exhaustive testing of this operation can take many more test cases.

    Program testing can be used to show the presence of bugs, but never to show their absence!
    --Edsger Dijkstra

    Every test case adds to the cost of testing. In some systems, a single test case can cost thousands of dollars e.g. on-field testing of flight-control software. Therefore, test cases need to be designed to make the best use of testing resources. In particular:

• Testing should be effective, i.e., it finds a high percentage of existing bugs, e.g., a set of test cases that finds 60 defects is more effective than a set that finds only 30 defects in the same system.

• Testing should be efficient, i.e., it has a high rate of success (bugs found/test cases), e.g., a set of 20 test cases that finds 8 defects is more efficient than another set of 40 test cases that finds the same 8 defects.

For testing to be E&E (efficient and effective), each new test we add should target a potential fault that is not already targeted by existing test cases. There are test case design techniques that can help us improve the E&E of testing.

Given below is the sample output from a text-based program TriangleDetector that determines whether the three input numbers make up the three sides of a valid triangle. List the test cases you would use to test this software. Two sample test cases are given below.

    C:\> java TriangleDetector
    Enter side 1: 34
    Enter side 2: 34
    Enter side 3: 32
    Can this be a triangle?: Yes
    Enter side 1:

Sample test cases:

    34,34,34: Yes
    0, any valid, any valid: No

    In addition to obvious test cases such as

    • sum of two sides == third,
    • sum of two sides < third ...

We may also devise some interesting test cases such as the ones listed below.

    Note that their applicability depends on the context in which the software is operating.

    • Non-integer number, negative numbers, 0, numbers formatted differently (e.g. 13F), very large numbers (e.g. MAX_INT), numbers with many decimal places, empty string, ...
    • Check many triangles one after the other (will the system run out of memory?)
    • Backspace, tab, CTRL+C , …
    • Introduce a long delay between entering data (will the program be affected by, say the screensaver?), minimize and restore window during the operation, hibernate the system in the middle of a calculation, start with invalid inputs (the system may perform error handling differently for the very first test case), …
• Test on different locales.

    The main point to note is how difficult it is to test exhaustively, even on a trivial system.
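To see why cases such as 'sum of two sides == third' matter, here is a minimal sketch (hypothetical; the actual TriangleDetector code is not shown here) of the validity check such a program might perform. Each comparison is a boundary at which an off-by-one mistake would go unnoticed without deliberately chosen test cases:

    // Hypothetical validity check a TriangleDetector-like program might use.
    static boolean canBeTriangle(int a, int b, int c) {
        return a > 0 && b > 0 && c > 0   // all sides must be positive
                && a + b > c             // 'sum of two sides == third' is NOT a valid triangle
                && b + c > a
                && a + c > b;
    }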

Explain why exhaustive testing is not practical, using the example of testing the newGame() operation in the Logic class of a Minesweeper game.

    Consider this sequence of test cases:

    • Test case 1. Start Minesweeper. Activate newGame() and see if it works.
    • Test case 2. Start Minesweeper. Activate newGame(). Activate newGame() again and see if it works.
• Test case 3. Start Minesweeper. Activate newGame() three times consecutively and see if it works.
• ...
• Test case 267. Start Minesweeper. Activate newGame() 267 times consecutively and see if it works.

    Well, you get the idea. Exhaustive testing of newGame() is not practical.

Improving the efficiency and effectiveness of test case design can:

    • a. improve the quality of the SUT.
    • b. save money.
    • c. save time spent on test execution.
    • d. save effort on writing and maintaining tests.
    • e. minimize redundant test cases.
• f. force us to understand the SUT better.

Answer: (a)(b)(c)(d)(e)(f)

    Can explain positive and negative test cases

    A positive test case is when the test is designed to produce an expected/valid behavior. A negative test case is designed to produce a behavior that indicates an invalid/unexpected situation, such as an error message.

    Consider testing of the method print(Integer i) which prints the value of i.

    • A positive test case: i == new Integer(50)
    • A negative test case: i == null;
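For example, the two cases above might be written as JUnit tests along these lines (a rough sketch only; the Printer stand-in class and the exception thrown for null are assumptions made purely for illustration):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class PrinterTest {
        // Minimal stand-in for the SUT, included only to keep the sketch self-contained.
        static class Printer {
            String print(Integer i) {
                if (i == null) {
                    throw new IllegalArgumentException("null input"); // assumed error behaviour
                }
                return i.toString();
            }
        }

        // Positive test case: a valid input produces the expected/valid behavior.
        @Test
        void print_validInteger_success() {
            assertEquals("50", new Printer().print(50));
        }

        // Negative test case: an invalid input produces the error behavior.
        @Test
        void print_null_exceptionThrown() {
            assertThrows(IllegalArgumentException.class, () -> new Printer().print(null));
        }
    }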
    Can explain black box and glass box test case design

Test case design can be of three types, based on how much of the SUT's internal details are considered when designing test cases:

    • Black-box (aka specification-based or responsibility-based) approach: test cases are designed exclusively based on the SUT’s specified external behavior.

    • White-box (aka glass-box or structured or implementation-based) approach: test cases are designed based on what is known about the SUT’s implementation, i.e. the code.

    • Gray-box approach: test case design uses some important information about the implementation. For example, if the implementation of a sort operation uses different algorithms to sort lists shorter than 1000 items and lists longer than 1000 items, more meaningful test cases can then be added to verify the correctness of both algorithms.


    Equivalence Partitions

    Can explain equivalence partitions

    Consider the testing of the following operation.

isValidMonth(m): returns true if m (an int) is in the range [1..12]

It is inefficient and impractical to test this method for all integer values in [MIN_INT..MAX_INT]. Fortunately, there is no need to test all possible input values. For example, if the input value 233 failed to produce the correct result, the input 234 is likely to fail too; there is no need to test both.

    In general, most SUTs do not treat each input in a unique way. Instead, they process all possible inputs in a small number of distinct ways. That means a range of inputs is treated the same way inside the SUT. Equivalence partitioning (EP) is a test case design technique that uses the above observation to improve the E&E of testing.

    Equivalence partition (aka equivalence class): A group of test inputs that are likely to be processed by the SUT in the same way.

    By dividing possible inputs into equivalence partitions we can,

    • avoid testing too many inputs from one partition. Testing too many inputs from the same partition is unlikely to find new bugs. This increases the efficiency of testing by reducing redundant test cases.
    • ensure all partitions are tested. Missing partitions can result in bugs going unnoticed. This increases the effectiveness of testing by increasing the chance of finding bugs.
    Can apply EP for pure functions

    Equivalence partitions (EPs) are usually derived from the specifications of the SUT.

    These could be EPs for the isValidMonth example:

    • [MIN_INT ... 0] : below the range that produces true (produces false)
    • [1 … 12] : the range that produces true
    • [13 … MAX_INT] : above the range that produces true (produces false)
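A sketch of how these partitions might translate into a test, picking one representative value from each partition (the isValidMonth implementation below is a stand-in added only to make the sketch self-contained):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class IsValidMonthTest {
        // Stand-in implementation of the specification above.
        static boolean isValidMonth(int m) {
            return m >= 1 && m <= 12;
        }

        @Test
        void isValidMonth_oneValuePerPartition() {
            assertFalse(isValidMonth(-5));   // from [MIN_INT ... 0]
            assertTrue(isValidMonth(7));     // from [1 ... 12]
            assertFalse(isValidMonth(233));  // from [13 ... MAX_INT]
        }
    }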

    When the SUT has multiple inputs, you should identify EPs for each input.

    Consider the method duplicate(String s, int n): String which returns a String that contains s repeated n times.

    Example EPs for s:

    • zero-length strings
    • string containing whitespaces
    • ...

    Example EPs for n:

    • 0
    • negative values
    • ...

Note that the values in an EP need not be adjacent to each other.

    Consider the method isPrime(int i): boolean that returns true if i is a prime number.

    EPs for i:

    • prime numbers
    • non-prime numbers

    Some inputs have only a small number of possible values and a potentially unique behavior for each value. In those cases we have to consider each value as a partition by itself.

    Consider the method showStatusMessage(GameStatus s): String that returns a unique String for each of the possible value of s (GameStatus is an enum). In this case, each possible value for s will have to be considered as a partition.

    Note that the EP technique is merely a heuristic and not an exact science, especially when applied manually (as opposed to using an automated program analysis tool to derive EPs). The partitions derived depend on how one ‘speculates’ the SUT to behave internally. Applying EP under a glass-box or gray-box approach can yield more precise partitions.

Consider the EPs given above for the isValidMonth method. A different tester might use these EPs instead:

    • [1 … 12] : the range that produces true
    • [all other integers] : the range that produces false

    Some more examples:

Specification: isValidFlag(String s): boolean
Returns true if s is one of ["F", "T", "D"]. The comparison is case-sensitive.

Equivalence partitions: ["F"] ["T"] ["D"] ["f", "t", "d"] [any other string] [null]

Specification: squareRoot(String s): int
Pre-conditions: s represents a positive integer
Returns the square root of s if the square root is an integer; returns 0 otherwise.

Equivalence partitions: [s is not a valid number] [s is a negative integer] [s has an integer square root] [s does not have an integer square root]
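As an illustration of the first example, a test that picks one representative value from each isValidFlag partition might look like the sketch below (the implementation shown is a stand-in that follows the specification above):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class IsValidFlagTest {
        // Stand-in implementation matching the specification above.
        static boolean isValidFlag(String s) {
            return "F".equals(s) || "T".equals(s) || "D".equals(s);
        }

        @Test
        void isValidFlag_oneValuePerPartition() {
            assertTrue(isValidFlag("F"));    // ["F"]
            assertTrue(isValidFlag("T"));    // ["T"]
            assertTrue(isValidFlag("D"));    // ["D"]
            assertFalse(isValidFlag("f"));   // ["f", "t", "d"] -- comparison is case-sensitive
            assertFalse(isValidFlag("X"));   // [any other string]
            assertFalse(isValidFlag(null));  // [null]
        }
    }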

    Consider this SUT:

    isValidName (String s): boolean

    Description: returns true if s is not null and not longer than 50 characters.

A. Which one of these is least likely to be an equivalence partition for the parameter s of the isValidName method?

    B. If you had to choose 3 test cases from the 4 given below, which one will you leave out based on the EP technique?

    A. (d)

    Explanation: The description does not mention anything about the content of the string. Therefore, the method is unlikely to behave differently for strings consisting of numbers.

    B. (a) or (c)

    Explanation: both belong to the same EP

    Can apply EP for OOP methods

When deciding EPs of OOP methods, we need to identify the EPs of all data participants that can potentially influence the behaviour of the method, such as:

    • the target object of the method call
    • input parameters of the method call
    • other data/objects accessed by the method such as global variables. This category may not be applicable if using the black box approach (because the test case designer using the black box approach will not know how the method is implemented)

    Consider this method in the DataStack class: push(Object o): boolean

    • Adds o to the top of the stack if the stack is not full.
    • returns true if the push operation was a success.
    • throws
      • MutabilityException if the global flag FREEZE==true.
      • InvalidValueException if o is null.

    EPs:

    • DataStack object: [full] [not full]
    • o: [null] [not null]
    • FREEZE: [true][false]
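A sketch of tests that cover one value from each partition above is given below; the DataStack constructor, the exact location of the FREEZE flag, and the compact stand-in implementation are assumptions made only to keep the example concrete and compilable:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class DataStackTest {
        // Compact stand-ins so the sketch is self-contained.
        static class MutabilityException extends RuntimeException {}
        static class InvalidValueException extends RuntimeException {}
        static class DataStack {
            static boolean FREEZE = false;             // assumed global flag
            private final Object[] items;
            private int size = 0;
            DataStack(int capacity) { items = new Object[capacity]; }
            boolean push(Object o) {
                if (FREEZE) throw new MutabilityException();
                if (o == null) throw new InvalidValueException();
                if (size == items.length) return false;
                items[size++] = o;
                return true;
            }
        }

        // [not full] x [not null] x [FREEZE == false]: the normal case
        @Test
        void push_notFull_success() {
            assertTrue(new DataStack(10).push("item"));
        }

        // [full] partition of the target DataStack object
        @Test
        void push_fullStack_returnsFalse() {
            DataStack stack = new DataStack(1);
            stack.push("first");
            assertFalse(stack.push("second"));
        }

        // [null] partition of the parameter o
        @Test
        void push_nullItem_exceptionThrown() {
            assertThrows(InvalidValueException.class, () -> new DataStack(10).push(null));
        }

        // [FREEZE == true] partition of the global flag
        @Test
        void push_whenFrozen_exceptionThrown() {
            DataStack.FREEZE = true;
            assertThrows(MutabilityException.class, () -> new DataStack(10).push("item"));
            DataStack.FREEZE = false;                  // reset so other tests are unaffected
        }
    }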

    Consider a simple Minesweeper app. What are the EPs for the newGame() method of the Logic component?

    As newGame() does not have any parameters, the only obvious participant is the Logic object itself.

    Note that if the glass-box or the grey-box approach is used, other associated objects that are involved in the method might also be included as participants. For example, Minefield object can be considered as another participant of the newGame() method. Here, the black-box approach is assumed.

    Next, let us identify equivalence partitions for each participant. Will the newGame() method behave differently for different Logic objects? If yes, how will it differ? In this case, yes, it might behave differently based on the game state. Therefore, the equivalence partitions are:

    • PRE_GAME : before the game starts, minefield does not exist yet
    • READY : a new minefield has been created and waiting for player’s first move
    • IN_PLAY : the current minefield is already in use
    • WON, LOST : let us assume the newGame behaves the same way for these two values

Consider the Logic component of the Minesweeper application. What are the EPs for the markCellAt(int x, int y) method? The partitions in bold represent valid inputs.

    • Logic: PRE_GAME, READY, IN_PLAY, WON, LOST
    • x: [MIN_INT..-1] [0..(W-1)] [W..MAX_INT] (we assume a minefield size of WxH)
    • y: [MIN_INT..-1] [0..(H-1)] [H..MAX_INT]
    • Cell at (x,y): HIDDEN, MARKED, CLEARED

    Boundary Value Analysis

    Can explain boundary value analysis

Boundary Value Analysis (BVA) is a test case design heuristic that is based on the observation that bugs often result from incorrect handling of the boundaries of equivalence partitions. This is not surprising, as the end points of boundaries are often used in branching instructions etc., where the programmer can make mistakes.

The markCellAt(int x, int y) operation could contain code such as if (x > 0 && x <= (W-1)), which involves the boundaries of x's equivalence partitions.

    BVA suggests that when picking test inputs from an equivalence partition, values near boundaries (i.e. boundary values) are more likely to find bugs.

    Boundary values are sometimes called corner cases.

    Boundary value analysis recommends testing only values that reside on the equivalence class boundary.

    False

    Explanation: It does not recommend testing only those values on the boundary. It merely suggests that values on and around a boundary are more likely to cause errors.

    Can apply boundary value analysis

    Typically, we choose three values around the boundary to test: one value from the boundary, one value just below the boundary, and one value just above the boundary. The number of values to pick depends on other factors, such as the cost of each test case.

    Some examples:

Equivalence partition: [1-12]
Some possible boundary values: 0, 1, 2, 11, 12, 13

Equivalence partition: [MIN_INT, 0] (MIN_INT is the minimum possible integer value allowed by the environment)
Some possible boundary values: MIN_INT, MIN_INT+1, -1, 0, 1

Equivalence partition: [any non-null String]
Some possible boundary values: empty String, a String of maximum possible length

Equivalence partitions: [prime numbers], [“F”], [“A”, “D”, “X”]
Some possible boundary values: no specific boundary

Equivalence partition: [non-empty Stack] (we assume a fixed-size stack)
Some possible boundary values: Stack with one element, two elements, no empty spaces, only one empty space
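For instance, applying the first row of the table to the isValidMonth example gives a test like the sketch below (again using a stand-in implementation so the example is self-contained):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class IsValidMonthBoundaryTest {
        // Stand-in implementation of isValidMonth.
        static boolean isValidMonth(int m) {
            return m >= 1 && m <= 12;
        }

        @Test
        void isValidMonth_boundaryValues() {
            // one value below, on, and just above each boundary of the [1..12] partition
            assertFalse(isValidMonth(0));
            assertTrue(isValidMonth(1));
            assertTrue(isValidMonth(2));
            assertTrue(isValidMonth(11));
            assertTrue(isValidMonth(12));
            assertFalse(isValidMonth(13));
        }
    }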

    Combining Test Inputs

    Can explain the need for strategies to combine test inputs

    An SUT can take multiple inputs. You can select values for each input (using equivalence partitioning, boundary value analysis, or some other technique).

Here is an example of an SUT that takes multiple inputs, and some values chosen to test for each input:

    • Method to test: calculateGrade(participation, projectGrade, isAbsent, examScore)
    • Values to test:
  Input          Valid values to test    Invalid values to test
  participation  0, 1, 19, 20            21, 22
  projectGrade   A, B, C, D, F           -
  isAbsent       true, false             -
  examScore      0, 1, 69, 70            71, 72

    Testing all possible combinations is effective but not efficient. If you test all possible combinations for the above example, you need to test 6x5x2x6=360 cases. Doing so has a higher chance of discovering bugs (i.e. effective) but the number of test cases can be too high (i.e. not efficient). Therefore, we need smarter ways to combine test inputs that are both effective and efficient.

    Can explain some basic test input combination strategies

Given below are some basic strategies for generating a set of test cases by combining the values of multiple test inputs.

    Let's assume the SUT has the following three inputs and you have selected the given values for testing:

    SUT: foo(p1 char, p2 int, p3 boolean)

    Values to test:

    Input Values
    p1 a, b, c
    p2 1, 2, 3
    p3 T, F

    The all combinations strategy generates test cases for each unique combination of test inputs.

This strategy generates 3x3x2=18 test cases:

    Test Case p1 p2 p3
    1 a 1 T
    2 a 1 F
    3 a 2 T
    ... ... ... ...
    18 c 3 F
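The figure of 18 is simply the product of the number of values per input (3x3x2). A small sketch that enumerates the combinations with nested loops (class and variable names here are illustrative only):

    import java.util.ArrayList;
    import java.util.List;

    class AllCombinations {
        public static void main(String[] args) {
            char[] p1Values = {'a', 'b', 'c'};
            int[] p2Values = {1, 2, 3};
            char[] p3Values = {'T', 'F'};

            List<String> testCases = new ArrayList<>();
            for (char p1 : p1Values) {
                for (int p2 : p2Values) {
                    for (char p3 : p3Values) {
                        testCases.add(p1 + " " + p2 + " " + p3);
                    }
                }
            }
            System.out.println(testCases.size() + " test cases");   // prints: 18 test cases
        }
    }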

The at least once strategy includes each test input value at least once.

This strategy generates 3 test cases:

    Test Case p1 p2 p3
    1 a 1 T
    2 b 2 F
    3 c 3 VV/IV

    VV/IV = Any Valid Value / Any Invalid Value

The all pairs strategy creates test cases so that for any given pair of inputs, all combinations between them are tested. It is based on the observation that a bug is rarely the result of more than two interacting factors. The resulting number of test cases is lower than the all combinations strategy, but higher than the at least once approach.

This strategy generates 9 test cases:

    Let's first consider inputs p1 and p2:

    Input Values
    p1 a, b, c
    p2 1, 2, 3

    These values can generate (a,1)(a,2)(a,3)(b,1)(b,2),...3x3=9 combinations, and the test cases should cover all of them.

    Next, let's consider p1 and p3.

    Input Values
    p1 a, b, c
    p3 T, F

    These values can generate (a,T)(a,F)(b,T)(b,F),...3x2=6 combinations, and the test cases should cover all of them.

Similarly, inputs p2 and p3 generate another 6 combinations.

The 9 test cases given below cover all those 9+6+6 combinations.

    Test Case p1 p2 p3
    1 a 1 T
    2 a 2 T
    3 a 3 F
    4 b 1 F
    5 b 2 T
    6 b 3 F
    7 c 1 T
    8 c 2 F
    9 c 3 T

    A variation of this strategy is to test all pairs of inputs but only for inputs that could influence each other.

Testing all pairs between p1 and p3 only, while ensuring all p2 values are tested at least once:

    Test Case p1 p2 p3
    1 a 1 T
    2 a 2 F
    3 b 3 T
    4 b VV/IV F
    5 c VV/IV T
    6 c VV/IV F

The random strategy generates test cases using one of the other strategies and then picks a subset randomly (presumably because the original set of test cases is too big).

    There are other strategies that can be used too.

    Can apply heuristic ‘each valid input at least once in a positive test case’

    Consider the following scenario.

    SUT: printLabel(fruitName String, unitPrice int)

Selected values for fruitName (the only invalid value is Dog):

    Values Explanation
    Apple Label format is round
    Banana Label format is oval
    Cherry Label format is square
    Dog Not a valid fruit

    Selected values for unitPrice:

    Values Explanation
    1 Only one digit
    20 Two digits
    0 Invalid because 0 is not a valid price
    -1 Invalid because negative prices are not allowed

    Suppose these are the test cases being considered.

    Case fruitName unitPrice Expected
    1 Apple 1 Print label
    2 Banana 20 Print label
    3 Cherry 0 Error message “invalid price”
    4 Dog -1 Error message “invalid fruit"

    It looks like the test cases were created using the at least once strategy. After running these tests can we confirm that square-format label printing is done correctly?

    • Answer: No.
• Reason: Cherry -- the only input that can produce a square-format label -- is in a negative test case which produces an error message instead of a label. If there is a bug in the code that prints labels in square-format, these test cases will not trigger that bug.

    In this case a useful heuristic to apply is each valid input must appear at least once in a positive test case. Cherry is a valid test input and we must ensure that it appears at least once in a positive test case. Here are the updated test cases after applying that heuristic.

    Case fruitName unitPrice Expected
    1 Apple 1 Print round label
    2 Banana 20 Print oval label
    2.1 Cherry VV Print square label
    3 VV 0 Error message “invalid price”
    4 Dog -1 Error message “invalid fruit"

    VV/IV = Any Invalid or Valid Value VV=Any Valid Value

    Can apply heuristic ‘no more than one invalid input in a test case’

    Consider the test cases designed in [Heuristic: each valid input at least once in a positive test case].

    Case fruitName unitPrice Expected
    1 Apple 1 Print round label
    2 Banana 20 Print oval label
    2.1 Cherry VV Print square label
    3 VV 0 Error message “invalid price”
    4 Dog -1 Error message “invalid fruit"

    VV/IV = Any Invalid or Valid Value VV=Any Valid Value

    After running these test cases can you be sure that the error message “invalid price” is shown for negative prices?

    • Answer: No.
• Reason: -1 -- the only input that is a negative price -- is in a test case that produces the error message “invalid fruit”.

    In this case a useful heuristic to apply is no more than one invalid input in a test case. After applying that, we get the following test cases.

    Case fruitName unitPrice Expected
    1 Apple 1 Print round label
    2 Banana 20 Print oval label
    2.1 Cherry VV Print square label
    3 VV 0 Error message “invalid price”
    4 VV -1 Error message “invalid price"
    4.1 Dog VV Error message “invalid fruit"

    VV/IV = Any Invalid or Valid Value VV=Any Valid Value

    Applying the heuristics covered so far, we can determine the precise number of test cases required to test any given SUT effectively.

    False

Explanation: These heuristics are, well, heuristics only. They will help you to make better decisions about test case design. However, they are speculative in nature (especially when testing in black-box fashion) and cannot give you a precise number of test cases.

    Can apply multiple test input combination techniques together

    Consider the calculateGrade scenario given below:

    • SUT : calculateGrade(participation, projectGrade, isAbsent, examScore)
• Values to test (the invalid values are 21 and 22 for participation, and 71 and 72 for examScore):
  • participation: 0, 1, 19, 20, 21, 22
  • projectGrade: A, B, C, D, F
  • isAbsent: true, false
  • examScore: 0, 1, 69, 70, 71, 72

    To get the first cut of test cases, let’s apply the at least once strategy.

    Test cases for calculateGrade V1

    Case No. participation projectGrade isAbsent examScore Expected
    1 0 A true 0 ...
    2 1 B false 1 ...
    3 19 C VV/IV 69 ...
    4 20 D VV/IV 70 ...
    5 21 F VV/IV 71 Err Msg
    6 22 VV/IV VV/IV 72 Err Msg

    VV/IV = Any Valid or Invalid Value, Err Msg = Error Message

    Next, let’s apply the each valid input at least once in a positive test case heuristic. Test case 5 has a valid value for projectGrade=F that doesn't appear in any other positive test case. Let's replace test case 5 with 5.1 and 5.2 to rectify that.

    Test cases for calculateGrade V2

    Case No. participation projectGrade isAbsent examScore Expected
    1 0 A true 0 ...
    2 1 B false 1 ...
    3 19 C VV 69 ...
    4 20 D VV 70 ...
    5.1 VV F VV VV ...
    5.2 21 VV/IV VV/IV 71 Err Msg
    6 22 VV/IV VV/IV 72 Err Msg

    VV = Any Valid Value VV/IV = Any Valid or Invalid Value

    Next, we apply the no more than one invalid input in a test case heuristic. Test cases 5.2 and 6 don't follow that heuristic. Let's rectify the situation as follows:

    Test cases for calculateGrade V3

    Case No. participation projectGrade isAbsent examScore Expected
    1 0 A true 0 ...
    2 1 B false 1 ...
    3 19 C VV 69 ...
    4 20 D VV 70 ...
    5.1 VV F VV VV ...
    5.2 21 VV VV VV Err Msg
    5.3 22 VV VV VV Err Msg
    6.1 VV VV VV 71 Err Msg
    6.2 VV VV VV 72 Err Msg

    Next, let us assume that there is a dependency between the inputs examScore and isAbsent such that an absent student can only have examScore=0. To cater for the hidden invalid case arising from this, we can add a new test case where isAbsent=true and examScore!=0. In addition, test cases 3-6.2 should have isAbsent=false so that the input remains valid.

    Test cases for calculateGrade V4

    Case No. participation projectGrade isAbsent examScore Expected
    1 0 A true 0 ...
    2 1 B false 1 ...
    3 19 C false 69 ...
    4 20 D false 70 ...
    5.1 VV F false VV ...
    5.2 21 VV false VV Err Msg
    5.3 22 VV false VV Err Msg
    6.1 VV VV false 71 Err Msg
    6.2 VV VV false 72 Err Msg
    7 VV VV true !=0 Err Msg
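As one concrete illustration, test case 7 of the table above could be written as a JUnit test roughly as follows; the GradeCalculator class, its signature, and its way of signalling an error are assumptions made only for this sketch:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class CalculateGradeTest {
        // Compact stand-in implementing only the validation relevant to this test.
        static class GradeCalculator {
            String calculateGrade(int participation, String projectGrade, boolean isAbsent, int examScore) {
                if (isAbsent && examScore != 0) {
                    throw new IllegalArgumentException("an absent student can only have examScore=0");
                }
                return "...";   // actual grading logic is not part of this sketch
            }
        }

        // Test case 7 of V4: isAbsent == true combined with examScore != 0 is invalid.
        @Test
        void calculateGrade_absentButNonZeroExamScore_errorReported() {
            GradeCalculator calculator = new GradeCalculator();
            assertThrows(IllegalArgumentException.class,
                    () -> calculator.calculateGrade(10, "B", true, 50));
        }
    }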

Which of these contradict the heuristics recommended when creating test cases with multiple inputs?

Answer: the suggestion to test all the invalid inputs together in one test case contradicts the heuristics.

Explanation: If you test all invalid test inputs together, you will not know whether each of the invalid inputs is handled correctly by the SUT. This is because most SUTs return an error message upon encountering the first invalid input.
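The validation structure behind that explanation typically looks something like the sketch below (not from the text; the printLabel example from earlier is reused to make the point concrete): once the first invalid input is detected, the remaining inputs are never examined.

    class LabelPrinter {
        static String printLabel(String fruitName, int unitPrice) {
            boolean validFruit = "Apple".equals(fruitName)
                    || "Banana".equals(fruitName)
                    || "Cherry".equals(fruitName);
            if (!validFruit) {
                return "Error: invalid fruit";   // unitPrice is never checked on this path
            }
            if (unitPrice <= 0) {
                return "Error: invalid price";
            }
            return "Label: " + fruitName + " @ " + unitPrice;   // label formatting is a placeholder
        }
    }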

Apply the heuristics for combining multiple test inputs to improve the E&E of the following test cases, assuming all 6 values in the table need to be tested (the invalid values being rock, lava, and acid). Point out where the heuristics are contradicted and how the test cases can be improved.

    SUT: consume(food, drink)

    Test case food drink
    TC1 bread water
    TC2 rice lava
    TC3 rock acid

    More

    Can explain test case design for use case based testing

    Use cases can be used for system testing and acceptance testing. For example, the main success scenario can be one test case while each variation (due to extensions) can form another test case. However, note that use cases do not specify the exact data entered into the system. Instead, it might say something like user enters his personal data into the system. Therefore, the tester has to choose data by considering equivalence partitions and boundary values. The combinations of these could result in one use case producing many test cases.

    To increase E&E of testing, high-priority use cases are given more attention. For example, a scripted approach can be used to test high priority test cases, while an exploratory approach is used to test other areas of concern that could emerge during testing.


Exploratory and Scripted Testing

    What

Here are two alternative approaches to testing software: scripted testing and exploratory testing.

    1. Scripted testing: First write a set of test cases based on the expected behavior of the SUT, and then perform testing based on that set of test cases.

    2. Exploratory testing: Devise test cases on-the-fly, creating new test cases based on the results of the past test cases.

Exploratory testing is ‘the simultaneous learning, test design, and test execution’ [source: bach-et-explained], whereby the nature of the follow-up test case is decided based on the behavior of the previous test cases. In other words, testing is done by running the system and trying out various operations. It is called exploratory testing because testing is driven by observations made during testing. Exploratory testing usually starts with areas identified as error-prone, based on the tester’s past experience with similar systems. One tends to conduct more tests for those operations where more faults are found.

    Here is an example thought process behind a segment of an exploratory testing session:

    “Hmm... looks like feature x is broken. This usually means feature n and k could be broken too; we need to look at them soon. But before that, let us give a good test run to feature y because users can still use the product if feature y works, even if x doesn’t work. Now, if feature y doesn’t work 100%, we have a major problem and this has to be made known to the development team sooner rather than later...”

    Exploratory testing is also known as reactive testing, error guessing technique, attack-based testing, and bug hunting.

Exploratory Testing Explained -- an online article by James Bach, an industry thought leader in software testing.

    Scripted testing requires tests to be written in a scripting language; Manual testing is called exploratory testing.

    A) False

    Explanation: “Scripted” means test cases are predetermined. They need not be an executable script. However, exploratory testing is usually manual.

    Which testing technique is better?

    (e)

    Explain the concept of exploratory testing using Minesweeper as an example.

When we test Minesweeper by simply playing it in various ways, especially trying out operations that are likely to be buggy, that would be exploratory testing.

    Recap

    Can combine test case design techniques

Assume students are given a matriculation number according to the following format:

    [Faculty Alphabet] [Gender Alphabet] [Serial Number] [Check Alphabet]

    E.g. CF1234X

    The valid value(s) for each part of the matriculation number is given below:

    Faculty Alphabet:

    • Single capital alphabet
    • Only 'C' to 'G' are valid

    Gender Alphabet:

    • Single capital alphabet
    • Either 'F' or 'M' only

    Serial Number:

• 4-digit number
    • From 1000 to 9999 only

    Check Alphabet:

    • Single capital alphabet
    • Only 'K', 'H', 'S', 'X' and 'Z' are valid

Assume you are testing the operation isValidMatric(String matric): boolean. Identify equivalence partitions and boundary values for the matriculation number.

    String length: (less than 7 characters), (7 characters), (more than 7 characters)

    For those with 7 characters,

    • [Faculty Alphabet]: (‘C’, ‘G’), (‘c’, ‘g’), (any other character)
    • [Gender Alphabet]: (‘F’, ‘M’), (‘f’, ‘m’), (any other character)
• [Serial Number]: (1000-9999), (0000-0999), (any other 4-character string)
    • [Check Alphabet]: ('K', 'H', 'S', ‘X’, 'Z'), ('k', 'h', ’s’, ‘x’, 'z'), (any other character)
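A compact way to see where these partitions come from is to sketch one plausible implementation (this is not a reference implementation; the regular expression simply encodes the rules listed above):

    class MatricValidator {
        // Faculty letter C-G, gender letter F or M, serial number 1000-9999,
        // check letter K, H, S, X, or Z; matches() implicitly requires exactly 7 characters.
        static boolean isValidMatric(String matric) {
            return matric != null && matric.matches("[C-G][FM][1-9][0-9]{3}[KHSXZ]");
        }
    }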

    Identify a set of equivalence partitions for testing isValidDate(String date) method. The method checks if the parameter date is a day that falls in the period 1880/01/01 to 2030/12/31 (both inclusive). The date is given in the format yyyy/mm/dd.

    Initial partitions: [null] [10 characters long] [shorter than 10 characters] [longer than 10 characters]

    For 10-character strings:

• c1-c4: [not an integer] [less than 1880] [1880-2030 excluding leap years] [leap years within the 1880-2030 period] [2031-9999]
• c5: [‘/’] [not ‘/’]
• c6-c7: [not an integer] [less than 1] [2] [31-day months: 1, 3, 5, 7, 8, 10, 12] [30-day months: 4, 6, 9, 11] [13-99]
    • c8: [‘/’] [ not ‘/’]
    • c9-c10: [not an integer] [less than 1] [1-28] [29] [30] [31] [more than 31]

    In practice, we often use ‘trusted’ library functions (e.g. those that come with the Java JDK or .NET framework) to convert strings into dates. In such cases, our testing need not be as thorough as those suggested by the above analysis. However, this kind of thorough testing is required if you are the person implementing such a trusted component.
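For example, a version of isValidDate that delegates the parsing to the JDK might look like the sketch below; the range constants follow the specification above, and strict resolution is used so that impossible dates such as 2019/02/31 are rejected:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.time.format.DateTimeParseException;
    import java.time.format.ResolverStyle;

    class DateValidator {
        private static final DateTimeFormatter FORMAT =
                DateTimeFormatter.ofPattern("uuuu/MM/dd").withResolverStyle(ResolverStyle.STRICT);
        private static final LocalDate EARLIEST = LocalDate.of(1880, 1, 1);
        private static final LocalDate LATEST = LocalDate.of(2030, 12, 31);

        /** Returns true if date is a valid yyyy/mm/dd date within 1880/01/01 to 2030/12/31. */
        static boolean isValidDate(String date) {
            if (date == null) {
                return false;
            }
            try {
                LocalDate parsed = LocalDate.parse(date, FORMAT);
                return !parsed.isBefore(EARLIEST) && !parsed.isAfter(LATEST);
            } catch (DateTimeParseException e) {
                return false;   // covers the 'not a well-formed date' partitions
            }
        }
    }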

    Given below is the overview of the method dispatch(Resource, Task), from an emergency management system (e.g. a system used by those who handle emergency calls from the public about incidents such as fires, possible burglaries, domestic disturbances, etc.). A task might need multiple resources of multiple types. For example, the task ‘fire at Clementi MRT’ might need two fire engines and one ambulance.

    • dispatch(Resource r, Task t):void
    • Overview: This method dispatches the Resource r to the Task t. Since this can dispatch only one resource, it needs to be used multiple times should the task need multiple resources.

    Imagine you are designing test cases to test the method dispatch(Resource,Task). Taking into account equivalence partitions and boundary values, which different inputs will you combine to test the method?

Test inputs for r:
• A resource required by the task
• A resource not required by the task
• A resource already dispatched for another task
• null

Test inputs for t:
• A fully dispatched task
• A task requiring one more resource
• A task with no resource dispatched
• Considering the resource types required:
  • A task requiring only one type of resource
  • A task requiring multiple types of resources
• null

    Given below is an operation description taken from a restaurant booking system. Use equivalence partitions and boundary value analysis technique to design a set of test cases for it.

    • boolean transferTable (Booking b, Table t)
    • Description: Used to transfer a Booking b to Table t, if t has enough room.
• Preconditions: t has room for b, b.getTable() != t
    • Postconditions: b.getTable() == t

    Equivalence partitions

    • Booking:

      • Invalid: null, not null and b.getTable==t
      • Valid: not null and b.getTable != t
    • Table:

      • Invalid: null, not vacant, vacant but doesn’t have enough room,
      • Valid: vacant and has enough room.

    Boundary values:

    • Booking:

      • Invalid: null, not null and b.getTable==t
      • Valid:not null and b.getTable != t
    • Table:

      • Invalid: null, not vacant, (booking size == table size + 1)
      • Valid: (booking size == table size), (booking size == table size-1)

    Test cases:

    Test case Booking Table
    1 null Any valid
    2 not null and b.getTable==t Any valid
    3 Any valid null
    4 Any valid not vacant
    5 Any valid (booking size == table size + 1)
    6 Any valid (booking size == table size)
    7 Any valid (booking size == table size - 1)

    Note: We can use Bookings of different sizes for different test cases so that we increase the chance of finding bugs. If there is a minimum and maximum booking size, we should include them in those test cases.

    Assume you are testing the add(Item) method specified below.

    Assume i to be the Item being added.

    Preconditions:

    • i != null [throws InvalidItemException if i == null ]
    • contains(i) == false [throws DuplicateItemException if contains(i) == true]
    • count() < 10 [throws ListFullException if count() == 10]

    Postconditions:

    • contains(i) == true;
    • new count() == old count()+1

    Invariants: (an “invariant” is true before and after the method invocation).

• 0 <= count() <= 10

    (a) What are the equivalence partitions relevant to testing the add(Item) method?

    (b) What are the boundary and non-boundary values you will use to test the add(Item) method?

    (c) Design a set of test cases to test the add(Item) method by considering the equivalence partitions and boundary values from your answers to (a) and (b) above.

    (a)

i: [i != null] [i == null]
list: [contains(i) == true] [contains(i) == false] [count() < 10] [count() == 10]
Note: list == null should NOT be considered.

    (b)

    list: count() == 0, count() == 9, count() == 10; count() == [1|2|3|4|5|6|7|8] (1 preferred)

    (c)

    Use equivalence partitions and boundary values to choose test inputs for testing the setWife operation in the Man class.

    Partitioning ‘married’ as ‘to same woman’ and ‘to different woman’ seems redundant at first. Arguments for having it:

    • The behavior (e.g. the error message shown) may be different in those two situations.
    • The ‘to same woman’ partition has a risk of misunderstanding between developer and user. For example, the developer might think it is OK to ignore the request while the users might expect to see an error message.

    If you download a pre-release version of a new game (executable file) to do a test drive and submit bug reports, you would be doing

    • a. white-box exploratory testing
    • b. gray-box beta testing
    • c. black-box exploratory testing
    • d. black-box scripted testing
Answer: (c) black-box exploratory testing

    Explanation: Since it is an executable only, you are unlikely to have a knowledge of its internal workings. Therefore, your testing is most likely to be black-box testing. A ‘test drive’ sounds like exploratory testing rather than scripted testing.

We can apply the equivalence partitioning technique in white-box fashion.

    • a. True
    • b. False
Answer: (a) True

    Explanation: In fact, when we know about how the SUT is implemented, we can make more informed decisions about partitioning test input into equivalence partitions. White-box equivalence partitioning is less speculative than black-box equivalence partitioning.

    1. What are the EPs for the parameter day of this method?
  /**
   * Returns true if the three values represent a valid day.
   */
  boolean isValidDay(int year, int month, int day) {
      // ...
  }
    2. What are the boundary values for the parameter day in the question above?