As we continue developing, we constantly have to keep testing our seq program to see what is already working and what is not. We also need to make sure that the changes we make don’t end up breaking something that was already working.

There are different kinds of tests. In the main Odin course there will be a longer discussion on testing, different types of tests and the situations in which they would be appropriate. Here I’m going to assume that either you know a bit about testing or are willing to do a bit of research on your own.

We are writing a command-line utility that prints to the terminal (stdout and stderr). As a consequence, instead of testing individual procedures, we are more interested in testing the program as a whole.

One way we can do that is to compare the program output with some “correct” output. We could get this correct output by running the original seq command which you already have, at least if you are running something like BSD, Linux or macOS.

We could of course write our tester program in Odin itself, but since we are going to run commands and study their output, a shell script or even a Perl script seems ideally suited for this task.

Here is my idea: we will run both the original seq and our version of the command with the same command-line arguments. We will redirect both stdout and stderr to files, one for the original and one for our version.

Then we will compare these files using the diff command. If they are different in any way, we have failed that test. If, on the other hand, neither the two stdout files nor the two stderr files differ, then that test has been a success.
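To make the idea concrete, here is roughly what a single test looks like when done by hand, using one of the argument sets from the list further down (10 1 1) and the same temporary file names that the script uses:

seq 10 1 1    > orig.out 2> orig.err
./odin 10 1 1 > my.out   2> my.err

# diff exits with status 0 only when the two files are identical
if diff orig.out my.out > /dev/null && diff orig.err my.err > /dev/null; then
    echo "success"
else
    echo "fail"
fi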

I chose to write a shell script (in bash) to run the tests. If you don’t know shell scripting, I do recommend that you learn it. There are many good resources out there.

#! /bin/bash

testcount=1
failcount=0
while IFS= read -r line; do
    rm -f orig.out orig.err my.out my.err
    seq $line > orig.out 2> orig.err
    ./odin $line > my.out 2> my.err

    # diff exits with 0 when the files are identical, non-zero when they differ
    diff orig.out my.out > /dev/null 2>&1
    out=$?
    diff orig.err my.err > /dev/null 2>&1
    err=$?

    if [[ $out -ne 0 || $err -ne 0 ]]; then
        result=fail
        let "failcount+=1"
    else
        result=success
    fi

    printf "Test %3d: %20s: %s\n" "$testcount" "$line" "$result"
    let "testcount+=1"
done <<EOF

10
10 20
20 2 50
100 -2 70
1 -1 10
10 1 1
1 1.5 7
17 -1.3 2
-f "%10.5f" 10
-s: 1 15
-w 24 -3 8
--help
--version
EOF

rm -f orig.out orig.err my.out my.err

# testcount has already been incremented past the last test, so subtract one
echo "Failed $failcount out of $((testcount - 1)) tests."

The tests themselves are the lines of the here-document, between the done <<EOF line and the closing EOF. Make sure you leave the first of those lines empty, as that one is going to test seq without any command-line arguments.

Running it on the most up-to-date version of our code, I get this:

$ bash test.sh
Test   1:                     : success
Test   2:                   10: success
Test   3:                10 20: success
Test   4:              20 2 50: success
Test   5:            100 -2 70: success
Test   6:              1 -1 10: success
Test   7:               10 1 1: success
Test   8:              1 1.5 7: fail
Test   9:            17 -1.3 2: fail
Test  10:       -f "%10.5f" 10: fail
Test  11:             -s: 1 15: fail
Test  12:           -w 24 -3 8: fail
Test  13:               --help: fail
Test  14:            --version: fail
Failed 7 out of 14 tests.

It’s obvious that there is still work to be done. From now on, we can let the test results guide us to what needs to be done.

I could also have printed out the output of the diff commands for failed tests. I prefer to just get a summary. It’s simple enough to manually run a failed test that we want to look more into. We can always come back and change that behavior later.
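If you do want to see what went wrong right away, one possible tweak (not something I have done in the script above) is to print the diff output whenever a test fails, for example by changing the fail branch of the if statement like this:

    if [[ $out -ne 0 || $err -ne 0 ]]; then
        result=fail
        let "failcount+=1"
        # show what actually differed to make the failure easier to inspect
        diff orig.out my.out
        diff orig.err my.err
    else
        result=success
    fi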

If you remember that during the requirements gathering I stated that we would not be using the same output as the original for the --help and --version options, you will no doubt also have realized that those tests are always going to fail. Don’t worry, we’ll get to that.

Now that we have a tool for testing our program, let’s go on and start fixing failed tests.