Debugging is hard.
When confronted with scripts that don't work, beginners often make seemingly random changes to the script, run it, look at the output and make another change. This isn't productive -- it wastes your time. We need to be organised.
The most important rule in debugging is: execute the file you are editing -- not some other copy that you have lying around somewhere. You can avoid editing one copy and running another if you have only one version of the script and it lives in $HOME/bin. ($HOME refers to your home directory.) Also, execute it by typing its name (only), like this:
$ myScript

That means that these:

$ ./myScript
$ sh myScript
$ bash myScript
$ . myScript
won't do! They all bring in subtle differences compared with the simple method of executing the script.
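As a minimal sketch of this setup (the script name "myScript" is the example used above, and we assume $HOME/bin is on your PATH -- many systems put it there by default):

```shell
# One copy of the script, living in $HOME/bin, run by name only.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/myScript" <<'EOF'
#!/bin/sh
echo "hello from myScript"
EOF
chmod +x "$HOME/bin/myScript"
PATH="$HOME/bin:$PATH"   # ensure $HOME/bin is searched
myScript                 # run it by name -- no ./, no sh, no bash
```

Because there is only one copy and one way of running it, the version you edit is always the version you test.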
Finally, we keep the script open in an editor window at the same time as we are testing it in a terminal window. That way, we are less likely to forget to save the file and re-run the script. It is essential that we have back-ups of the script in $HOME/bin/BU so that we can go back to the previous version simply by restoring our back-up copy.
If you skipped the previous section, please go back to it. It isn't meant to sound patronising. After fifty years of programming I still sometimes find I have been running a different version of a programme from the one I am editing -- usually because I haven't followed the advice above.
There are two kinds of scripts that need debugging: those which consist of a pipeline or a simple sequence of commands, and those with complex control logic (if statements, loops, and so on).
The first kind has a shape like this:

... ... ...
... ... ...
... ... ...
... ... ...

or like this:

... ... ... |
... ... |
... ... ... ... |
... ... ...
The first is a sequence of statements and the second is a pipeline. I will call both the above simple scripts to differentiate them from scripts with complex logic.
The following techniques are handy:
Turn on statement tracing by inserting this line:

set -x

after the shebang line. When the statement is executed, the shell starts echoing the script lines to standard error before executing them.
Then, when you get a message about a syntax error, you can see exactly which of your several commands was the one that caused it.
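A small sketch of what the trace looks like (the variable and its value are made up for illustration):

```shell
#!/bin/sh
# With "set -x", the shell prints each command to standard error,
# prefixed with "+", just before running it.
set -x
greeting="hello"
echo "$greeting world"
# The trace on standard error looks like:
#   + greeting=hello
#   + echo hello world
```

Note that the trace shows commands after variable expansion, which is exactly what you want when hunting for an empty or misspelled variable.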
Comment out lines of code with a # before the code.
For example, this script statement removes all the temporary files used:
rm t1 t2 t3 t4
If we change the line to:
# rm t1 t2 t3 t4
the files will be left behind for us to examine.
Display temporary files before deleting them. Then we won't have to examine them one by one manually:
pr -tm t1 t2 t3 t4 | more
The advantage of pr -tm is that it can do many thin files at once.
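A sketch with made-up contents for the temporary files (-t suppresses pr's page headers, -m merges the files side by side as columns):

```shell
# Create some small temporary files to stand in for a script's real ones.
printf 'a\nb\n' > t1
printf '1\n2\n' > t2
printf 'x\ny\n' > t3
# Show all three side by side, one file per column, paged with more.
pr -tm t1 t2 t3 | more
rm t1 t2 t3
```

Each line of the output contains the corresponding line from every file, so mismatched or empty files stand out immediately.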
Leave the script midway through its execution.
For example, if the script is:

... ... ... < t1 > t2
... ... < t2 > t1
... ... ... ... < t1 > t2
... ... ... < t2 > t1

we can insert a more and an exit:

... ... ... < t1 > t2
more t2
exit
... ... < t2 > t1
... ... ... ... < t1 > t2
... ... ... < t2 > t1
allowing us to see what was in "t2" before it got overwritten.
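A concrete sketch of the same idea -- here sort and uniq, and the files t1 and t2, are only stand-ins for whatever the real script does:

```shell
#!/bin/sh
# Stop mid-script to inspect an intermediate file.
printf '3\n1\n3\n2\n' > t1
sort < t1 > t2
more t2            # show t2 while it still holds sort's output
exit               # stop here; nothing below runs
uniq < t2 > t1     # would overwrite t1 -- never reached while debugging
```

Once the first half is known to be correct, move the more and exit further down and repeat.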
Insert a more and an exit to see what is going down a pipe. For example, this:

... ... ... |
... ... |
... ... ... ... |
... ... ...

becomes:

... ... ... |
... ... |
more
exit
... ... ... ... |
... ... ...
When the lines before the more are debugged, we can move the two extra lines down the script.
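The same tap on a runnable sketch -- printf supplies fake data, and sort and uniq stand in for your own pipeline stages:

```shell
#!/bin/sh
# Break a pipeline after its first stages to see what flows through.
printf 'b\na\nb\n' |
sort |
more
exit
# the remaining stages are parked below the exit while we debug:
# uniq -c | sort -rn
```

Because more ends the truncated pipeline, everything the earlier stages produced appears on the screen, and the exit stops the script before the untested stages run.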
Echo something rather than do something. Turn this:

mv $file $newname

into:

echo mv $file $newname

And turn this:

wc < $input > $output

into:

echo "wc < $input > $output"
to see exactly what commands the script would have done. Note the weak quotes around the whole of the second example to show the variable values and avoid the input/output redirection.
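A runnable sketch of such a dry run -- the variable values here are made up for illustration:

```shell
#!/bin/sh
# Echo the destructive commands instead of executing them.
file="draft.txt"
newname="draft.bak"
input="words.txt"
output="count.txt"
echo mv $file $newname
echo "wc < $input > $output"
# prints:
#   mv draft.txt draft.bak
#   wc < words.txt > count.txt
```

Nothing is moved and nothing is overwritten, but every command appears with its variables expanded, so a wrong or empty variable is obvious at a glance.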
Scripts with complex control logic typically crash with:
yourScript: test: argument expected
yourScript: syntax error at line 6: `end of file' unexpected
neither of which tells you where the error occurred. (The first is often caused by an empty variable and the second by a missing or misplaced quotation mark.)
As before, the thing to do is to enable statement tracing, but the:

set -x

could be inserted later on in the script -- at the point at which you think tracing would be useful.

If needed, tracing can be stopped with:

set +x
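A sketch of tracing only the suspect region (the echoed messages and the variable are stand-ins for real work):

```shell
#!/bin/sh
# Trace only the part of the script we are suspicious of.
echo "setup part (quiet)"
set -x                      # start tracing where the trouble begins
candidate="value"
test -n "$candidate" && echo "candidate is set"
set +x                      # stop tracing again
echo "cleanup part (quiet)"
```

The rest of the run stays quiet, so the trace output is short enough to read carefully.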
All these techniques involve breaking problems into smaller ones. Smaller problems are easier to solve!