Let’s assume for the moment that you’re writing a Perl module or application. You’d like to maintain some level of software quality (or kwalitee), so you’re writing a suite of test scripts. Whether you’re writing them first (good for you for practicing test-driven development!) or the application code is already there, you’ll probably be reaching for Test::Simple, Test::More, or one of the Test2::Suite bundles. With the latter two you’re immediately confronted with a choice: do you count up the number of tests into a plan, or do you forsake that in favor of leaving a done_testing() call at the end of your test script(s)?

There are good arguments for both approaches. When you first start, you probably have no idea how many tests your scripts will contain. After all, a test script can be a useful tool for designing a module’s interface by writing example code that will use it. Your exploratory code would be written as if the module or application were already done, testing it in the way you’d like it to work. Not declaring a plan makes perfect sense in this case; just put done_testing() at the end and get back to defining your tests.
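As a sketch of that exploratory style, here’s a test script written against a module that doesn’t exist yet. (My::Stack and its methods are made-up stand-ins for illustration, not from any real distribution; I’ve defined a tiny inline version so the script actually runs.)

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# Stand-in for the module being designed; in real life this package
# would live in its own file and the tests would drive its interface.
package My::Stack {
    sub new       { bless { items => [] }, shift }
    sub is_empty  { !@{ $_[0]{items} } }
    sub push_item { my $self = shift; push @{ $self->{items} }, @_; return }
    sub pop_item  { pop @{ $_[0]{items} } }
}

# Write the calls the way you'd like the module to work...
my $stack = My::Stack->new;
ok $stack->is_empty, 'a new stack starts out empty';

$stack->push_item(42);
is $stack->pop_item, 42, 'pop returns the last pushed value';

# ...and let done_testing() report however many tests actually ran.
done_testing();
```

Because there’s no declared plan, you can keep adding ok/is assertions above done_testing() without touching anything else.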

You don’t have that option when using Test::Simple, of course—it’s so basic it only has one function (ok()), and you have to pre-declare how many tests you plan to run when you use the module, like so:

use Test::Simple tests => 23;

Test::More also supports this form of plan, or you can opt to use its plan function to state the number of tests in your script or subtest. With Test2 you have to use plan. Either way, the plan acts as a sort of meta-test, making sure that you executed exactly what you intended: no more, no less. While there are situations where it’s not possible to predict how many times a given set of tests should run, in all other cases I strongly suggest you “clean up” your tests and declare a plan. Later on, if you add or remove tests you’ll immediately be aware that something has changed and it’s time to tally up a new plan.
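For instance, the two Test::More plan forms can be combined in one script like this (the test names and values are mine, purely illustrative):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

# Declare the plan up front: the run fails if more or fewer
# than three top-level tests execute.
plan tests => 3;

is 2 + 2, 4, 'arithmetic still works';

# A subtest counts as a single test in the outer plan,
# and can declare its own inner plan.
subtest 'string operations' => sub {
    plan tests => 2;
    is uc('perl'), 'PERL', 'uc() upper-cases';
    like 'Testing', qr/^Test/, 'prefix matches';
};

ok 1, 'third and final top-level test';
```

If you later add a fourth top-level test and forget to bump the plan, the script fails with a “looks like you planned 3 tests but ran 4” diagnostic—which is exactly the meta-test at work.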

What about other Perl testing frameworks? They can use plans, too.

Thoughts? Does declaring a test plan make writing tests too inflexible? Does not having a plan encourage bad behavior? Tell me what you think in the comments below.

9 thoughts on “Testing Perl: To plan or not to plan”

  1. I always try to use a plan. Without it, some of the tests could get skipped without notice. It’s happened to me, and the pain of recalculating the plan on every test change is less than the pain of silently skipped tests. For more complex test files, I plan e.g. 10 + 12 + 1 with a comment explaining what each number means.

  2. Tests need to be easy to write to encourage people to actually write them, so any simplification in that direction is desirable. Myself, I always develop tests without a plan these days because it makes it easier to add more of them on the fly.

    I could see an argument to add a plan later in the process, when the file is essentially “closed” and the count has become stable.

    (I also used to have my dev tools, Emacs’s perlnow.el, set up to automatically revise the count. Personally, I quit using that feature, but hypothetically, that’d be another option: an open count during development but a fixed (“frozen”?) one after shipping.)

  3. I used to be very insistent on a plan, but as I’ve used modern Perl testing more I’ve found that the problem of skipped tests is less of one than I expected. At ZipRecruiter, the standard is to use done_testing(), and it seems to work pretty well.

  4. You have no doorway from the kitchen to the formal dining room? That’s going to make meals awkward, won’t it? I would blow away the walls separating kitchen=>dining, entrance=>dining, and kitchen=> that square space connected to the entrance, which seems to me to be a lot of square footage wasted.

    🙂

  5. done_testing is the balanced way. Manual plan and no-plan are the opposite extremes.

    done_testing will output a plan formulated from how many tests ran. No-plan means anything goes, and a specified plan is an exact number.

    done_testing at the end of the test does ensure the test completes, so there’s no need to worry about early exits being missed. It is still possible to use conditionals and end up with tests unintentionally skipped, but in my experience most people set the plan by running without one, seeing how many tests ran, and then setting that number, which will also miss the skipped tests. Unless you are literally counting assertions in the test by hand, setting a plan and done_testing are exactly the same, except setting the plan is more work.

    In my experience I have only needed to set a plan when I have tests that fork or run threads; in those cases done_testing will not always catch things that are missed, or an early exit from a child.

    -Chad ‘Exodist’ Granum, maintainer of Test-Simple and Test-More, author of Test2
