Let’s assume for the moment that you’re writing a Perl module or application. You’d like to maintain some level of software quality (or kwalitee), so you’re writing a suite of test scripts. Whether you’re writing them first (good for you for practicing test-​driven development!) or the application code is already there, you’ll probably be reaching for Test::Simple, Test::More, or one of the Test2::Suite bundles. With the latter two you’re immediately confronted with a choice: do you count up the number of tests into a plan, or do you forsake that in favor of leaving a done_testing() call at the end of your test script(s)?

There are good arguments for both approaches. When you first start, you probably have no idea how many tests your scripts will contain. After all, a test script can be a useful tool for designing a module’s interface by writing example code that will use it. Your exploratory code would be written as if the module or application was already done, testing it in the way you’d like it to work. Not declaring a plan makes perfect sense in this case; just put done_testing() at the end and get back to defining your tests.

You don’t have that option when using Test::Simple, of course—it’s so basic it only has one function (ok()), and you have to pre-declare how many tests you plan to run when you use the module, like so:

use Test::Simple tests => 23;

Test::More also supports this form of plan, or you can opt to use its plan function to state the number of tests in your script or subtest. With Test2 you have to use plan. Either way, the plan acts as a sort of meta-test, making sure that you executed exactly what you intended: no more, no less. While there are situations where it’s not possible to predict how many times a given set of tests should run, in all other cases I highly suggest you “clean up” your tests and declare a plan. Later on, if you add or remove tests you’ll immediately be aware that something has changed and it’s time to tally up a new plan.
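For example, a tiny Test::More script with a declared plan might look like this (Local::Math and its add() function are placeholders standing in for your real code):

use strict;
use warnings;
use Test::More tests => 3;    # the plan: exactly three tests, no more, no less

use Local::Math 'add';        # hypothetical module under test

is( add(2, 2),  4, 'adds two positive numbers' );
is( add(-1, 1), 0, 'negatives cancel out' );
is( add(0, 0),  0, 'zeroes stay zero' );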

What about other Perl testing frameworks? They can use plans, too. Here are two examples:
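Test::Class lets each test method declare its own count in an attribute, and Test2::V0 exports a plan function that takes a simple count. (Both snippets below are minimal sketches; the package and test names are placeholders.)

package TestArithmetic;
use parent 'Test::Class';
use Test::More;

sub addition : Tests(2) {    # this method promises exactly two tests
    is( 2 + 2,  4, 'small numbers add up' );
    is( -1 + 1, 0, 'negatives cancel out' );
}

1;   # a .t file would load this class and call Test::Class->runtests

And with Test2::V0:

use Test2::V0;

plan 2;    # the whole script runs exactly two tests

ok( 'perl' =~ /per/, 'pattern matching works' );
ok( 1 < 2,           'numeric comparison works' );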

Thoughts? Does declaring a test plan make writing tests too inflexible? Does not having a plan encourage bad behavior? Tell me what you think in the comments below.

Yesterday’s pair programming session had Gábor Szabó and me thrashing around for a bit trying to figure out how to get test coverage statistics for the application. The Devel::Cover documentation lists several ways to run the module, but it doesn’t exactly describe how to run prove by itself rather than running a Makefile’s tests. I worked out how to do it today, and with the Baughs’ help on Twitter I found a few more methods.

All examples below use the bash or zsh command shells and were tested on macOS Catalina 10.15.7 running zsh 5.7.1 and Perl 5.32.1. If you’re using something very different (e.g., Microsoft Windows’ CMD or PowerShell), you may have to set environment variables differently.

Ad-​hoc test coverage

If all you want to do is run one shell command, here it is:

$ prove -vlre 'perl -MDevel::Cover -Ilib' t

This takes advantage of prove’s --exec option (abbreviated as -e) to run a different executable for the tests. It recursively (-r) runs all your tests verbosely (-v) from the t directory while loading your application’s libraries (-l), and the perl executable it invokes loads Devel::Cover (-M) and adds the lib subdirectory to its module search path (-Ilib). I use a similar technique when debugging tests. Alternatively, you can set an environment variable instead:

$ HARNESS_PERL_SWITCHES=-MDevel::Cover prove -vlr t

This does almost the same thing as above without running a different executable. It sets Test::Harness’s HARNESS_PERL_SWITCHES environment variable for the duration of the prove command. You won’t get the text output of your test coverage at the end, though, and will still have to run Devel::Cover’s cover command both to see the coverage and to generate web pages.
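Once the run finishes, a typical follow-up looks something like this (assuming the environment-variable form above and the default cover_db database location):

$ cover -delete                                   # clear out any previous coverage database
$ HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t
$ cover                                           # print the text summary from cover_db/
$ cover -report html                              # generate browsable HTML pages in cover_db/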

In a dedicated test session, window, or tab

If you have a terminal session, window, or tab dedicated solely to running your tests, set one of the environment variables above for that session:

$ export HARNESS_PERL_SWITCHES=-MDevel::Cover

Now all of your test scripts will pick up that option. You can add more options by enclosing the environment variable’s value in 'quotes'. For example, you might also want to load Devel::NYTProf for code profiling:

$ export HARNESS_PERL_SWITCHES='-MDevel::Cover -MDevel::NYTProf'

Why not PERL5OPT?

Setting the PERL5OPT environment variable also sets options for the perl running prove, which means that your test coverage, profiling, etc. will pick up prove’s execution as well as your test scripts.
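In other words, something like this instruments the prove process itself in addition to your tests, which is usually more than you want:

$ PERL5OPT=-MDevel::Cover prove -vlr t    # Devel::Cover now also measures prove's own code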

What about yath?

I don’t know for sure; I don’t use the Test2 suite. But it looks like it has a --cover option for loading and passing options to Devel::Cover.
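If you want to try it, the invocation presumably looks something like this (untested, since I don’t use yath):

$ yath test --cover t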

In February I wrote an article surveying exception handling in Perl, recommending that developers use Test::Exception to make sure their code behaves as expected. A commenter on Reddit suggested I check out Test::Fatal as an alternative. What advantages does it hold over Test::Exception?

  • It only exports one function compared to Test::Exception’s four: exception, which you can then use with the full suite of regular Test::More functions as well as other testing libraries such as Test::Deep.
  • It doesn’t override the caller function or use Sub::Uplevel to hide your test blocks from the call stack, so if your exception returns a stack trace you’ll get output from the test as well as the thing throwing the exception. The author considers this a feature since Sub::Uplevel is “twitchy.”

To ease porting, Test::Fatal also includes two functions, dies_ok and lives_ok, replacing Test::Exception’s functions of the same names. dies_ok does not provide the exception thrown, though, so if you’re testing that you’ll need to use exception along with a TAP-emitting function like is() or like().
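For example (divide() here is a stand-in for whatever code you expect to throw):

use Test::More;
use Test::Fatal;

like(
    exception { divide(1, 0) },
    qr/division by zero/,
    'dividing by zero throws the expected error',
);

is(
    exception { divide(4, 2) },
    undef,
    'a normal call throws nothing',
);

done_testing();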

And that’s it! Either is a valid choice; it comes down to whether you prefer one approach over another. Test::Exception is also included as part of Test::Most’s requirements, so if you’re using the latter to reduce boilerplate you’ll be getting the former.

Postscript:

I’d be remiss if I didn’t also mention Test2::Tools::Exception, which is the preferred way to test exceptions using the Test2 framework. If you’re using Test2, ignore all the above and go straight to Test2::Tools::Exception.

Failure is a universal truth of computers. Files fail to open, web pages fail to load, programs fail to install, messages fail to arrive. As a developer you have no choice but to work in a seemingly hostile environment in which bugs and errors lurk around every corner.

Hopefully you find and fix the bugs during development and testing, but even with all bugs squashed exceptional conditions can occur. It’s your job as a Perl developer to use the tools available to you to handle these exceptions. Here are a few of them.

eval, die and $EVAL_ERROR ($@) (updated)

Perl has a primitive but effective mechanism for running code that may fail called eval. It runs either a string or block of Perl code, trapping any errors so that the enclosing program doesn’t crash. It’s your job then to ignore or handle the error; eval will return undef (or an empty list in list context) and set the magic variable $@ to the error string. (You can spell that $EVAL_ERROR if you use the English module, which you probably should to allow for more readable code.) Here’s a contrived example:

use English;

eval { $foo / 0; 1 }
  or warn "tried to divide by zero: $EVAL_ERROR";

(Why the 1 at the end of the block? It forces the eval to return true if it succeeds; the or condition is executed if it returns false.)

What if you want to purposefully cause an exception, so that an enclosing eval (possibly several layers up) can handle it? You use die:

use English;

eval { process_file('foo.txt'); 1 }
  or warn "couldn't process file: $EVAL_ERROR";

sub process_file {
    my $file = shift;
    open my $fh, '<', $file
      or die "couldn't read $file: $OS_ERROR";

    ... # do something with $fh
}

It’s worth repeating that as a statement: You use exceptions so that enclosing code can decide how to handle the error. Contrast this with simply handling a function’s return value at the time it’s executed: except in the simplest of scripts, that part of the code likely has no idea what the error means to the rest of the application or how to best handle the problem.

autodie

Since many of Perl’s built-​in functions (like open) return false or other values on failure, it can be tedious and error-​prone to make sure that all of them report problems as exceptions. Enter autodie, which will helpfully replace the functions you choose with equivalents that throw exceptions. Introduced in Perl 5.10.1, it only affects the enclosing code block, and even goes so far as to set $EVAL_ERROR to an object that can be queried for more detail. Here’s an example:

use English;
use autodie; # defaults to everything but system and exec

eval { open my $fh, '<', 'foo.txt'; 1 } or do {
    if ($EVAL_ERROR
      and $EVAL_ERROR->isa('autodie::exception')) {
        warn 'Error from open'
          if $EVAL_ERROR->matches('open');
        warn 'I/O error'
          if $EVAL_ERROR->matches(':io');
    }
    elsif ($EVAL_ERROR) {
        warn "Something else went wrong: $EVAL_ERROR";
    }
};

try and catch

If you’re familiar with other programming languages, you’re probably looking for syntax like try and catch for your exception needs. The good news is that it’s coming in Perl 5.34 thanks to the ever-productive Paul “LeoNerd” Evans; the better news is that you can use it today with his Feature::Compat::Try module, itself a distillation of his popular Syntax::Keyword::Try. Here’s an example:

use English;
use autodie;
use Feature::Compat::Try;

sub foo {
    try {
        attempt_a_thing();
        return 'success!';
    }
    catch ($exception) {
        return "failure: $exception"
          if not $exception->isa('autodie::exception');

        return 'failed in ' . $exception->function
          . ' line '        . $exception->line
          . ' called with '
          . join ', ', @{$exception->args};
    }
}

Note that autodie and Feature::Compat::Try are complementary and can be used together; also note that unlike an eval block, you can return from the enclosing function in a try block.

The underlying Syntax::Keyword::Try module has even more options like a finally block and a couple experimental features. I now prefer it to other modules that implement try/​catch syntax like Try::Tiny and TryCatch (even though we use Try::Tiny at work). If all you need is the basic syntax above, using Feature::Compat::Try will get you used to the semantics that are coming in the next version of Perl.
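For instance, a finally block always runs, whether the try block returned or the catch block handled an exception. Here’s a rough sketch using Syntax::Keyword::Try (the file-reading code is only illustrative):

use Syntax::Keyword::Try;

sub slurp_config {
    my ($file) = @_;
    my $fh;
    try {
        open $fh, '<', $file or die "can't open $file: $!\n";
        return do { local $/; <$fh> };    # returns from slurp_config, not just the block
    }
    catch ($exception) {
        warn "falling back to defaults: $exception";
        return '';
    }
    finally {
        close $fh if $fh;                 # runs on success and on failure alike
    }
}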

Other exception modules (updated)

autodie is nice, and some other modules and frameworks implement their own exception classes, but what if you want some help defining your own? After all, an error string can only convey so much information, may be difficult to parse, and may need to change as business requirements change.

Although CPAN has the popular Exception::Class module, its author Dave Rolsky recommends that you use Throwable if you’re using Moose or Moo. If you’re rolling your own objects, use Throwable::Error.

Using Throwable couldn’t be simpler:

package Foo;

use Moo;
with 'Throwable';

has message => (is => 'ro');

... # later...

package main; 
Foo->throw( {message => 'something went wrong'} );

And it comes with Throwable::Error, which you can subclass to get several useful methods:

package Local::My::Error;
use parent 'Throwable::Error';

... # later...

package main;
use Feature::Compat::Try;

try {
    Local::My::Error->throw('something bad');
}
catch ($exception) {
    warn $exception->stack_trace->as_string;
}

(That stack_trace attribute comes courtesy of the StackTrace::Auto role composed into Throwable::Error. Moo and Moose users should simply compose it into their classes to get it.)
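Composing it is a one-liner (a sketch, assuming a Moo-based exception class of your own):

package Local::App::Error;

use Moo;
with 'StackTrace::Auto';    # provides the stack_trace attribute shown above

has message => (is => 'ro', default => 'unknown error');

1;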

Testing exceptions with Test::Exception

Inevitably bugs will creep into your code, and automated tests are one of the main weapons in a developer’s arsenal against them. Use Test::Exception when writing tests against code that emits exceptions to see whether it behaves as expected:

use English;
use Test::More;
use Test::Exception;

...

throws_ok(sub { $foo->method(42) }, qr/error 42/,
  'method throws an error when it gets 42');
throws_ok(sub { $foo->method(57) }, 'My::Exception::Class',
  'method throws the right exception class');

dies_ok(sub { $bar->method() }, 'method died, no params');

lives_and(sub { is($baz->method(17), 17) },
  'method ran without exception, returned right value'); 

throws_ok(sub { $qux->process('nonexistent_file.txt') },
  'autodie::exception', # hey look, it's autodie again
  'got an autodie exception',
);
my $exception = $EVAL_ERROR;
SKIP: {
    skip 'no autodie exception thrown', 1
      unless $exception
      and $exception->isa('autodie::exception');
    ok($exception->matches(':socket'),
      'was a socket error: ' . $exception->errno);
}

done_testing();

Note that Test::Exception’s functions don’t mess with $EVAL_ERROR, so you’re free to check its value right after calling them.

Documenting errors and exceptions

If I can leave you with one message, it’s this: Please document every error and exception your code produces, preferably in a place and language that the end-​user can understand. The DIAGNOSTICS section of your documentation (you are writing documentation, right, not just code comments?) is a great candidate. You can model this section after the perldiag manual page, which goes into great detail about many of the error messages generated by Perl itself.
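For example, a DIAGNOSTICS section in your POD might read like this (the message and advice are invented for illustration):

=head1 DIAGNOSTICS

=over 4

=item C<< couldn't process file %s: %s >>

The named input file could not be opened. The second value is the error
reported by the operating system. Check that the file exists and that you
have permission to read it.

=back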

(A previous version of this article did not note that one should make sure a successful eval returns true, and incorrectly stated that Exception::Class and Throwable were deprecated due to a bug in the MetaCPAN web site. Thanks to Dan Book for the corrections.)