Let’s assume for the moment that you’re writing a Perl module or application. You’d like to maintain some level of software quality (or kwalitee), so you’re writing a suite of test scripts. Whether you’re writing them first (good for you for practicing test-driven development!) or the application code is already there, you’ll probably be reaching for Test::Simple, Test::More, or one of the Test2::Suite bundles. With the latter two you’re immediately confronted with a choice: do you count up the number of tests into a plan, or do you forsake that in favor of leaving a done_testing() call at the end of your test script(s)?

There are good arguments for both approaches. When you first start, you probably have no idea how many tests your scripts will contain. After all, a test script can be a useful tool for designing a module’s interface by writing example code that will use it. Your exploratory code would be written as if the module or application were already done, testing it in the way you’d like it to work. Not declaring a plan makes perfect sense in this case; just put done_testing() at the end and get back to defining your tests.
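For example, an exploratory script might look like this (a minimal sketch; My::Module and its interface are hypothetical stand-ins for whatever you’re designing):

use strict;
use warnings;
use Test::More;

use_ok 'My::Module';    # hypothetical module under test

my $obj = My::Module->new( name => 'test' );
isa_ok $obj, 'My::Module';
is $obj->name, 'test', 'name survives the constructor';

done_testing();    # no plan needed while the interface is still in flux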

You don’t have that option when using Test::Simple, of course: it’s so basic it only has one function (ok()), and you have to pre-declare how many tests you plan to run when you use the module, like so:

use Test::Simple tests => 23;

Test::More also supports this form of plan, or you can opt to use its plan function to state the number of tests in your script or subtest. With Test2 you have to use plan. Either way, the plan acts as a sort of meta-test, making sure that you executed exactly what you intended: no more, no less. While there are situations where it’s not possible to predict how many times a given set of tests should run, I would highly suggest that in all other cases you should “clean up” your tests and declare a plan. Later on, if you add or remove tests you’ll immediately be aware that something has changed and it’s time to tally up a new plan.
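With Test::More, a planned script with a planned subtest might look like this (a minimal sketch):

use strict;
use warnings;
use Test::More;

plan tests => 2;    # this script runs exactly two top-level tests

ok 1, 'standalone check';

subtest 'grouped checks' => sub {
    plan tests => 2;    # a subtest can carry its own plan
    ok 1, 'first grouped check';
    ok 1, 'second grouped check';
};

Under Test2::V0 the equivalent top-level declaration is a bare plan(2); call.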

What about other Perl testing frameworks? They can use plans, too. Here are two examples, one using Test::Class and one using Test::Most:
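Test::Class attaches a plan to each test method via an attribute (a sketch; My::Module is again a hypothetical module under test):

package My::Module::Test;
use parent 'Test::Class';
use Test::More;
use My::Module;    # hypothetical module under test

sub constructor : Tests(2) {    # this method contributes exactly two tests
    my $obj = My::Module->new;
    isa_ok $obj, 'My::Module';
    can_ok $obj, 'name';
}

__PACKAGE__->runtests;

Test::Most, meanwhile, accepts the same import-time plan as Test::More:

use Test::Most tests => 23;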

Thoughts? Does declaring a test plan make writing tests too inflexible? Does not having a plan encourage bad behavior? Tell me what you think in the comments below.

9 thoughts on “Testing Perl: To plan or not to plan”

  1. I always try to use a plan. Without it, some of the tests could get skipped without notice. It’s happened to me, and the pain of recalculating the plan on every test change is the lesser one. For more complex test files, I plan e.g. 10 + 12 + 1 with a comment explaining what each number means.
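
    In Test::More terms that might look like (the labels here are illustrative):

        plan tests => 10 + 12 + 1;    # 10 parser checks + 12 formatting checks + 1 cleanup check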

  2. Tests need to be easy to write to encourage people to actually write them; any simplification in that direction is desirable. Myself, I always develop tests without a plan these days because it makes it easier to add more of them on the fly.

    I could see an argument to add a plan later in the process, when the file is essentially “closed” and the count has become stable.

    (I also used to have my dev tools, Emacs perlnow.el, set up to automatically revise the count. Personally, I quit using that feature, but hypothetically that’d be another option: an open count during development but a fixed (“frozen”?) one after shipping.)

  3. I used to be very insistent on a plan, but as I’ve used modern Perl testing more I’ve found that the problem of skipped tests is less of one than I expected. At ZipRecruiter, the standard is to use done_testing(), and it seems to work pretty well.

  4. You have no doorway from the kitchen to the formal dining room? That’s going to make meals awkward, won’t it? I would blow away the walls separating kitchen=>dining, entrance=>dining, and kitchen=>that square space connected to the entrance, which seems to me to be a lot of square footage wasted.

    🙂

  5. done_testing is the balanced way. A manual plan and no_plan are the opposite extremes.

    done_testing will output the plan, which is formulated from how many tests ran. no_plan means anything goes, and a specified plan is a direct number.

    done_testing at the end of the test does ensure the test completes, so there’s no need to worry about early exits being missed. It is still possible to use conditionals and end up with tests unintentionally skipped, but in my experience most people set the plan by running without a plan, seeing how many tests ran, and then setting that number, which will also miss the skipped tests. If you are not literally counting assertions in the test by hand, then setting a plan and done_testing are exactly the same, except setting the plan is more work.

    In my experience I have only needed to set a plan when I have tests that fork or run threads; in those cases done_testing will not always catch things that are missed or an early exit from a child.
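
    A minimal sketch of that failure mode (Test2::IPC syncs test events from child processes; the early-exit path is a hypothetical bug):

        use strict;
        use warnings;
        use Test2::IPC;              # load before testing starts so child events sync
        use Test::More tests => 4;   # fixed plan: catches an assertion lost in the child

        ok 1, 'parent, before fork';

        my $pid = fork // die "fork failed: $!";
        if ( $pid == 0 ) {                       # child process
            ok 1, 'child, first check';
            exit 0 if $ENV{BUGGY_EARLY_EXIT};    # hypothetical early-exit bug
            ok 1, 'child, second check';
            exit 0;
        }
        waitpid $pid, 0;

        ok 1, 'parent, after fork';

        # With done_testing() instead of the fixed plan, an early child exit
        # would leave only three assertions and the run would still pass.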

    -Chad “Exodist” Granum, maintainer of Test-Simple and Test-More, author of Test2
