As we continue developing, we constantly have to test our `seq` program to see what is already working and what is not. We also need to make sure that the changes we make don't end up breaking something that was working.
There are different kinds of tests. In the main Odin course there will be a longer discussion of testing, the different types of tests, and the situations in which each is appropriate. Here I'm going to assume that you either know a bit about testing or are willing to do a bit of research on your own.
We are writing a command-line utility that prints to the terminal (stdout and stderr). As a consequence, instead of testing individual procedures, we are more interested in testing the program as a whole.
One way we can do that is to compare the program's output with some "correct" output. We can get this correct output by running the original `seq` command, which you already have, at least if you are running something like BSD, Linux, or macOS.
We could of course write our tester program in Odin itself, but since we are going to run commands and study their output, a shell script, or even a Perl script, seems ideally suited for this task.
Here is my idea: we will run both the original `seq` and our version of the command with the same command-line arguments. We will redirect both stdout and stderr to files, one set for the original and one for our version. Then we will compare these files using the `diff` command. If they differ in any way, we have failed that test. If, on the other hand, neither the two stdout files nor the two stderr files show any differences, then that test has been a success.
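To make the mechanics concrete, here is what a single comparison might look like by hand. The binary name `seq_odin` for our build and the temporary file names are my assumptions, not fixed by the text:

```bash
# Run the system seq and our version with identical arguments,
# capturing stdout and stderr separately for each.
seq 10 20 > orig.out 2> orig.err
./seq_odin 10 20 > ours.out 2> ours.err

# The test passes only if both pairs of files are identical.
diff orig.out ours.out && diff orig.err ours.err && echo success
```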
I chose to write a shell script (in bash) to run the tests. If you don't know shell scripting, I do recommend that you learn it; there are many good resources out there.
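The original `test.sh` listing is not reproduced here, so what follows is a minimal sketch of a harness along the lines described above. The `tests` array (reconstructed from the test output further down), the `seq_odin` binary name, and the temporary file names are all assumptions:

```bash
#!/usr/bin/env bash
# Sketch of a test harness: run the system seq and our version with the
# same arguments, then diff their stdout and stderr.
# Assumption: our Odin build lives at ./seq_odin.

tests=(
    ""                 # empty entry: tests seq without any arguments
    "10"
    "10 20"
    "20 2 50"
    "100 -2 70"
    "1 -1 10"
    "10 1 1"
    "1 1.5 7"
    "17 -1.3 2"
    '-f "%10.5f" 10'
    '-s: 1 15'
    "-w 24 -3 8"
    "--help"
    "--version"
)

failed=0
for i in "${!tests[@]}"; do
    args=${tests[$i]}
    # eval so that quoted arguments like "%10.5f" are re-parsed by the shell
    eval seq $args > orig.out 2> orig.err
    eval ./seq_odin $args > ours.out 2> ours.err
    if diff -q orig.out ours.out > /dev/null && diff -q orig.err ours.err > /dev/null; then
        echo "Test $((i + 1)): $args: success"
    else
        echo "Test $((i + 1)): $args: fail"
        failed=$((failed + 1))
    fi
done

echo "Failed $failed out of ${#tests[@]} tests."
rm -f orig.out orig.err ours.out ours.err
```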
The tests themselves are the entries in the `tests` array. Make sure you keep that empty entry, as it is what tests `seq` without any command-line arguments.
Running it on the most up-to-date version of our code, I get this:
```
$ bash test.sh
Test 1: : success
Test 2: 10: success
Test 3: 10 20: success
Test 4: 20 2 50: success
Test 5: 100 -2 70: success
Test 6: 1 -1 10: success
Test 7: 10 1 1: success
Test 8: 1 1.5 7: fail
Test 9: 17 -1.3 2: fail
Test 10: -f "%10.5f" 10: fail
Test 11: -s: 1 15: fail
Test 12: -w 24 -3 8: fail
Test 13: --help: fail
Test 14: --version: fail
Failed 7 out of 14 tests.
```
It's obvious that there is still work to be done. From now on, we can let the test results guide us toward what needs fixing.
I could also have printed the output of the `diff` commands for the failed tests, but I prefer to just get a summary. It's simple enough to manually re-run a failed test that we want to look into more closely, and we can always come back and change that behavior later.
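If we do later decide we want the details, one possible tweak to the sketch above (not necessarily how the original script did it) is to print the diffs in the fail branch of the loop:

```bash
    else
        echo "Test $((i + 1)): $args: fail"
        # Show exactly where the two outputs diverge.
        diff orig.out ours.out
        diff orig.err ours.err
        failed=$((failed + 1))
    fi
```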
If you remember that during the requirements gathering I stated that we will not be using the same output as the original for the `--help` and `--version` options, you will no doubt also have seen that those tests are always going to fail. Don't worry, we'll get to that.
Now that we have a tool for testing our program, let's move on and start fixing the failing tests.