What about My::Favorite::Module?

I mentioned at the Ephemeral Miniconf last month that as soon as I write about one Perl module (or five), someone inevitably brings up another (or seven) I’ve missed. And of course, it happened again last week: no sooner had I written in passing that I was using Exception::Class than the denizens of the Libera Chat IRC channel insisted I should use Throwable instead for defining my exceptions. (I’ve already blogged about various ways of catching exceptions.)

Why Throwable? Aside from Exception::Class’s author recommending it over his own work due to “a nicer, more modern interface,” Throwable is a Moo role, so it’s composable into classes along with other roles instead of mucking about with multiple inheritance. This means that if your exceptions need to do something reusable in your application like logging, you can also consume a role that does that and not have so much duplicate code. (No, I’m not going to pick a favorite logging module; I’ll probably get that wrong too.)
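For example, an exception class that also logs might compose the two roles like this (a minimal sketch; My::App::Role::Logging stands in for whatever logging role your application actually provides):

package My::App::Exception;
use Moo;

# compose reusable behavior from roles instead of inheriting it;
# the logging role here is hypothetical
with 'Throwable', 'My::App::Role::Logging';

has message => ( is => 'ro', default => 'something went wrong' );

1;

You’d then raise it with My::App::Exception->throw( message => 'oops' ), using the throw method Throwable provides.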

However, since Throwable is a role instead of a class, I would have to define several additional packages in my tiny modulino script from last week, one for each exception class I want. The beauty of Exception::Class is its simple declarative nature: just use it and pass a list of desired class names along with options for attributes and whatnot. What’s needed for simple use cases like mine is a declarative syntax for defining several exception classes without the noise of multiple packages.
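For contrast, here’s roughly what that declarative style looks like with Exception::Class (a sketch based on its documented interface; the class and field names are illustrative):

package Local::My::Exceptions;
use Exception::Class (
    'GenericError'  => { description => 'something bad happened' },
    'DetailedError' => {
        description => 'something specific happened',
        fields      => ['detail'],    # illustrative extra attribute
    },
);

1;

One use statement declares as many classes as you like, but you give up role composition.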

Enter Throwable::SugarFactory, a module that enables you to do just that by adding an exception function for declaring exception classes. (There’s also the similarly-named Throwable::Factory; see the above discussion about never being able to cover everybody’s favorites.) The exception function takes three arguments: the name of the desired exception class as a string, a description, and an optional list of instructions Moo uses to build the class. It might look something like this:

package Local::My::Exceptions;
use Throwable::SugarFactory;

exception GenericError  => 'something bad happened';
exception DetailedError => 'something specific happened' =>
  ( has => [ message => ( is => 'ro' ) ] );

1;

Throwable::SugarFactory takes care of creating constructor functions in Perl-style snake_case as well as functions for detecting what kind of exception is being caught, so you can use your new exception library like this:

#!/usr/bin/env perl

use experimental qw(isa);
use Feature::Compat::Try;
use JSON::MaybeXS;
use Local::My::Exceptions;

try {
    die generic_error();
}
catch ($e) {
    warn 'whoops!';
}

try {
    die detailed_error( message => 'you got me' );
}
catch ($e) {
    die encode_json( $e->to_hash )
      if $e isa DetailedError and defined $e->message;
    $e->throw if $e->does('Throwable');
    die $e;
}

The above also demonstrates a couple of other Throwable::SugarFactory features. First, you get a to_hash method that returns a hash reference of all exception data, suitable for serializing to JSON. Second, you get all of Throwable’s methods, including throw for re-throwing exceptions.
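If you just want to eyeball that data, a quick dump works too (a minimal sketch; the exact keys depend on the exception’s attributes and on Throwable::SugarFactory itself):

use Data::Dumper;
use Feature::Compat::Try;
use Local::My::Exceptions;

try { die detailed_error( message => 'you got me' ) }
catch ($e) { print Dumper( $e->to_hash ) }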

So where does this leave last week’s FOAAS.com modulino client demonstration of object mocking tests? With a little bit of rewriting to define and then use our sweeter exception library, it looks like this. You can review that post for a description of the rest of its workings.

#!/usr/bin/env perl

package Local::CallFOAAS::Exceptions;
use Throwable::SugarFactory;

BEGIN {
    exception NoMethodError =>
      'no matching WebService::FOAAS method' =>
      ( has => [ method => ( is => 'ro' ) ] );
    exception ServiceError =>
      'error from WebService::FOAAS' =>
      ( has => [ message => ( is => 'ro' ) ] );
}

package Local::CallFOAAS;  # this is a modulino
use Test2::V0;             # enables strict, warnings, utf8

# declare all the new stuff we're using
use feature qw(say state);
use experimental qw(isa postderef signatures);
use Feature::Compat::Try;
use Syntax::Construct qw(non-destructive-substitution);

use WebService::FOAAS ();
use Package::Stash;
BEGIN { Local::CallFOAAS::Exceptions->import() }

my $foaas = Package::Stash->new('WebService::FOAAS');

my $run_as =
    !!$ENV{CPANTEST}       ? 'test'
  : !defined scalar caller ? 'run'
  :                          undef;
__PACKAGE__->$run_as(@ARGV) if defined $run_as;

sub run ( $class, @args ) {
    try { say $class->call_method(@args) }
    catch ($e) {
        die 'No method ', $e->method, "\n"
          if $e isa NoMethodError;
        die 'Service error: ', $e->message, "\n"
          if $e isa ServiceError;
        die "$e\n";
    }
    return;
}

# Utilities

sub methods ($) {
    state @methods = sort map s/^foaas_(.+)/$1/r,
      grep /^foaas_/, $foaas->list_all_symbols('CODE');
    return @methods;
}

sub call_method ( $class, $method = '', @args ) {
    state %methods = map { $_ => 1 } $class->methods();
    die no_method_error( method => $method )
      unless $methods{$method};
    return do {
        try { $foaas->get_symbol("&$method")->(@args) }
        catch ($e) { die service_error( message => $e ) }
    };
}

# Testing

sub test ( $class, @ ) {
    state $stash = Package::Stash->new($class);
    state @tests = sort grep /^_test_/,
      $stash->list_all_symbols('CODE');

    for my $test (@tests) {
        subtest $test => sub {
            try { $class->$test() }
            catch ($e) { diag $e }
        };
    }
    done_testing();
    return;
}

sub _test_can ($class) {
    state @subs = qw(run call_method methods test);
    can_ok $class, \@subs, "can do: @subs";
    return;
}

sub _test_methods ($class) {
    my $mock = mock 'WebService::FOAAS' => ( track => 1 );

    for my $method ( $class->methods() ) {
        $mock->override( $method => 1 );

        ok lives { $class->call_method($method) },
          "$method lives";
        ok scalar $mock->sub_tracking->{$method}->@*,
          "$method called";
    }
    return;
}

sub _test_service_failure ($class) {
    my $mock = mock 'WebService::FOAAS';

    for my $method ( $class->methods() ) {
        $mock->override( $method => sub { die 'mocked' } );

        my $exception =
          dies { $class->call_method($method) };
        isa_ok $exception, [ServiceError],
          "$method throws ServiceError on failure";
        like $exception->message, qr/^mocked/,
          "correct error in $method exception";
    }
    return;
}

1;

[Updated, thanks to Dan Book, Karen Etheridge, and Bob Kleemann] The only goofy bit above is the need to put the exception calls in a BEGIN block and then explicitly call BEGIN { Local::CallFOAAS::Exceptions->import() }. Since the two packages are in the same file, I can’t do a use statement since the implied require would look for a corresponding file or entry in %INC. (You can get around this by messing with %INC directly or through a module like me::inlined that does that messing for you, but for a single-purpose modulino like this it’s fine.)
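For the curious, the %INC trick looks something like this (a sketch of the workaround mentioned above, not what the modulino itself does):

package Local::CallFOAAS::Exceptions;
use Throwable::SugarFactory;
# ... exception declarations as before ...

# pretend this package was loaded from disk so that use's implied
# require is satisfied
BEGIN { $INC{'Local/CallFOAAS/Exceptions.pm'} = __FILE__ }

package Local::CallFOAAS;
use Local::CallFOAAS::Exceptions;  # no missing-file error now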


Let’s assume for the moment that you’re writing a Perl module or application. You’d like to maintain some level of software quality (or kwalitee), so you’re writing a suite of test scripts. Whether you’re writing them first (good for you for practicing test-driven development!) or the application code is already there, you’ll probably be reaching for Test::Simple, Test::More, or one of the Test2::Suite bundles. With the latter two you’re immediately confronted with a choice: do you count up the number of tests into a plan, or do you forsake that in favor of leaving a done_testing() call at the end of your test script(s)?

There are good arguments for both approaches. When you first start, you probably have no idea how many tests your scripts will contain. After all, a test script can be a useful tool for designing a module’s interface by writing example code that will use it. Your exploratory code would be written as if the module or application was already done, testing it in the way you’d like it to work. Not declaring a plan makes perfect sense in this case; just put done_testing() at the end and get back to defining your tests.
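A plan-less exploratory script in that spirit might look like this (a minimal sketch; My::Module is a hypothetical module being designed):

#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

use_ok 'My::Module';                       # hypothetical module under design
my $obj = My::Module->new( name => 'x' );  # design the interface by using it
is $obj->name, 'x', 'name attribute round-trips';

done_testing();   # no up-front count required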

You don’t have that option when using Test::Simple, of course—it’s so basic it only has one function (ok()), and you have to pre-declare how many tests you plan to run when use-ing the module, like so:

use Test::Simple tests => 23;

Test::More also supports this form of plan, or you can opt to use its plan function to state the number of tests in your script or subtest. With Test2 you have to use plan. Either way, the plan acts as a sort of meta-test, making sure that you executed exactly what you intended: no more, no less. While there are situations where it’s not possible to predict how many times a given set of tests should run, I would highly suggest that in all other cases you should “clean up” your tests and declare a plan. Later on, if you add or remove tests you’ll immediately be aware that something has changed and it’s time to tally up a new plan.
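With Test::More, that looks like this (a minimal sketch; the test bodies are placeholders):

use Test::More;
plan tests => 2;    # or: use Test::More tests => 2;

ok 1, 'first placeholder test';
ok 1, 'second placeholder test';

Add a third test without updating the plan and the harness flags the mismatch immediately.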

What about other Perl testing frameworks? They can use plans, too.

Thoughts? Does declaring a test plan make writing tests too inflexible? Does not having a plan encourage bad behavior? Tell me what you think in the comments below.

Yesterday’s pair programming session had Gábor Szabó and me thrashing around for a bit trying to figure out how to get test coverage statistics for the application. The Devel::Cover documentation lists how to run the module several ways, but it doesn’t exactly describe how to run prove by itself rather than running a Makefile’s tests. I worked out how to do it today, and with the Baughs’ help on Twitter I worked out a few more methods.

All examples below use the bash or zsh command shells and were tested on macOS Catalina 10.15.7 running zsh 5.7.1 and Perl 5.32.1. If you’re using something very different (e.g., Microsoft Windows’ CMD or PowerShell), you may have to set environment variables differently.

Ad-hoc test coverage

If all you want to do is run one shell command, here it is:

$ prove -vlre 'perl -MDevel::Cover -Ilib' t

This takes advantage of prove’s --exec option (abbreviated as -e) to run a different executable for tests. It recursively (-r) runs all your tests verbosely (-v) from the t directory while loading your application’s libraries (-l), and the replacement perl executable loads (-M) Devel::Cover and adds the lib subdirectory to its search path (-Ilib). I use a similar technique when debugging tests. Alternatively, you can leave the executable alone and set an environment variable instead:

$ HARNESS_PERL_SWITCHES=-MDevel::Cover prove -vlr t

This does almost the same thing as above without running a different executable. It sets Test::Harness’s HARNESS_PERL_SWITCHES environment variable for the duration of the prove command. You won’t get the text output of your test coverage at the end, though, and will still have to run Devel::Cover’s cover command to both see the coverage and generate web pages.
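That follow-up is a single command, run from the project directory where the test run left its cover_db database:

$ cover

This prints the coverage summary table and, by default, generates the HTML report under cover_db/.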

In a dedicated test session, window, or tab

If you have a terminal session, window, or tab dedicated solely to running your tests, set one of the environment variables above for that session:

$ export HARNESS_PERL_SWITCHES=-MDevel::Cover

Now all of your test scripts will pick up that option. You can add more options by enclosing the environment variable’s value in 'quotes'. For example, you might also want to load Devel::NYTProf for code profiling:

$ export HARNESS_PERL_SWITCHES='-MDevel::Cover -MDevel::NYTProf'

Why not PERL5OPT?

Setting the PERL5OPT environment variable also sets options for the perl running prove, which means that your test coverage, profiling, etc. will pick up prove’s execution as well as your test scripts.
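In other words, this superficially similar invocation would pollute the coverage database with prove’s own internals (shown only to illustrate the problem):

$ PERL5OPT=-MDevel::Cover prove -lr t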

What about yath?

I don’t know for sure; I don’t use the Test2 suite. But it looks like it has a --cover option for loading and passing options to Devel::Cover.
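If so, the equivalent invocation would presumably be something like this (untested, for the reason above):

$ yath test --cover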