tcltest

tcltest, a package distributed with Tcl, provides a framework for writing and executing program tests.

See Also

A look at the Tcl test suite with gcov
ycl test cleanup1
Used as -cleanup cleanup1 in a test, restores the current namespace to the state it was in before the test ran. Useful for ensuring that tests don't inadvertently affect each other.

Alternatives

New Test Package
tcltap
A simple alternative for small tests.
TTXN
TclTest eXtended Notation.
Caius
Functional testing framework based on ITcl, Expect and Selenium.

Testimonials

VI: 2003-10-02. We at http://www.comit.com use Tcl (and Tk) extensively. We use Tcl for hardware testing in simulation (think multimillion-gate, multifunction ASIC verification). One of the major factors in our initial decision to use Tcl, and in our continuing to use it, is tcltest. We use most tcltest features, including constraints, and are very pleased with the easy configuration of tests and the reporting. I do have gripes, but they are relatively minor, and since it is pure Tcl, we can change it anyway.

Documentation

official reference
Getting Started with tcltest, Hai Vu, 2011-03-28
tcltest Part 2: Multi-Module Test Suite, Hai Vu, 2011-03-29
tcltest Part 3: Include and Exclude Tests, Hai Vu, 2011-04-01
tcltest Part 4: Constraints, Hai Vu, 2011-04-03
tcltest Part 5: Capture the Standard Output, Hai Vu, 2011-04-08
tcltest Part 6: Test for Error Conditions, Hai Vu, 2011-04-12
tcltest Part 7: Inexact Result Matching, Hai Vu, 2011-04-19
tcltest Part 8: Recursive Test Suites, Hai Vu, 2011-04-26
tcltest Part 9: Provides Exit Code, Hai Vu, 2013-09-13
Installing tcltest, Bryan Oakley, 2006
Getting started with tcltest, Bryan Oakley, 2006

Examples

Tcl interpreter tests
tcltest is used to test Tcl itself, and those tests make great examples.
How to write tcltest result values
Your first tcltests
tcllib
AK: Most modules in tcllib come with .test files using tcltest for regression testing.

Description

Version 2, distributed with Tcl since version 8.4, is much more flexible in the kind of matching it permits. See tcltest customMatch for some examples.

tcltest can also be used to create a customized test harness for an extension.

Scripts for test options like -setup, -body, and -cleanup are evaluated in the caller's scope. return exits the body, so the last command of a test body is often a return or a set. In this way, test is a control structure like if or while. Another aspect of the control-structure behaviour of test is that -constraints makes execution of the test conditional.
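A minimal sketch putting those pieces together (the test name demo-1.1 and the variable path are invented for illustration):

```tcl
package require tcltest
namespace import ::tcltest::*

# -setup and -cleanup run in the caller's scope, just like -body.
# The unix constraint makes this test run only on Unix-like systems.
test demo-1.1 {file tail strips the directory part} -constraints unix -setup {
    set path /tmp/example.txt
} -body {
    file tail $path
} -cleanup {
    unset path
} -result example.txt

cleanupTests
```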

Customizing the output format of cleanupTests is not very feasible at this time, as it is used as a programmatic interface for sub-processes in runAllTests.

Running tcltest

testing existing Tcl interpreter

To run the Tcl source code test suite against another installed Tcl interpreter:

tclsh all.tcl

accessing extra Tcltest extension commands

Some commands specific to testing Tcl internals are in the Tcltest package (note the capital T: not tcltest), which is not built by default. Tcltest is not (yet?) available as a loadable extension, so it needs to be built into a special tcltest executable with:

$ make runtest
$ ./tcltest
% package require Tcltest
8.6.2
% llength [info commands test*]
85

These commands include testparser, mentioned elsewhere on the wiki (though tclparser provides an alternative as a loadable extension).

Run a Tcl test in verbose mode

A wonderful addition to this page would be instructions on how to run one particular test, using verbose mode, for the purpose of reporting test suite failures to a package developer.

DGP answers:

make TESTFLAGS='-verbose tpse -file safe.test' test

complex results

RHS: One way to handle tests that the -result option can't handle is to do something like:

test mytest-1.1 {
    Calling my proc should always return a list of at least length 3
} -body {
    set result [myProc a b c d e]
    # -result can't handle {>= llength 3}, so we test it here
    expr {[llength $result] >= 3}
} -result 1

I.e., perform the actual check inside the test body and set -result to 1, so that the test passes exactly when the check succeeds.


customMatch can also be used to the same effect: define a script that compares an actual result with a target result, then use customMatch to register that script for the -match option. The script should return a boolean value indicating match or no match.
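A hedged sketch of that approach (the mode name listMin, the proc listLengthAtLeast, and the test name range-1.1 are invented for illustration); customMatch invokes the registered command with the expected value followed by the actual value:

```tcl
package require tcltest
namespace import ::tcltest::*

# Match command: receives the expected value, then the actual value,
# and must return a boolean.
proc listLengthAtLeast {expected actual} {
    expr {[llength $actual] >= $expected}
}
customMatch listMin listLengthAtLeast

# Passes because the body returns a list of four elements, >= 3.
test range-1.1 {result has at least three elements} -body {
    list a b c d
} -match listMin -result 3

cleanupTests
```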


PYK 2015-04-09: One strategy for some complex tests is to set -result to the empty string, and then in the body of the test to add an error message to the result if some expectation isn't met. This has the advantage of providing immediately-visible feedback about what the problem might be.
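A sketch of that strategy (myProc and the test name check-1.1 are stand-ins for the code under test): the body accumulates a description of each unmet expectation, so a failing test reports what went wrong instead of a bare mismatch.

```tcl
package require tcltest
namespace import ::tcltest::*

# Stand-in for the procedure under test.
proc myProc {} { return {a b c} }

test check-1.1 {myProc meets several expectations} -body {
    set issues {}
    set res [myProc]
    if {[llength $res] < 3} {
        lappend issues "expected at least 3 elements, got [llength $res]"
    }
    if {[lindex $res 0] ne "a"} {
        lappend issues "expected first element to be a, got [lindex $res 0]"
    }
    # Empty string means every expectation was met.
    join $issues \n
} -result {}

cleanupTests
```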


[tcltest is a truly great and wonderful thing. We should make a point of explaining its virtues and uses.]

Non-zero Exit Status on Failure

tcltest by itself does not set the exit status of a script to indicate success or failure. $::tcltest::numTests(Failed) can be used to detect failures and respond to them. Check it before calling cleanupTests, since cleanupTests resets the value of $::tcltest::numTests(Failed).
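A minimal sketch of that pattern at the end of a test script:

```tcl
package require tcltest
namespace import ::tcltest::*

# ... test commands go here ...

# Capture the failure count before cleanupTests resets it.
set failed $::tcltest::numTests(Failed)
cleanupTests
if {$failed > 0} {
    exit 1
}
```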

Logging to a File While Displaying on stdout

The system program tail can be used to display test output that is also captured to a file:

exec tail -f --pid=[pid] $testlogfile >@stdout 2>@stderr
outputChannel $transcriptfile
errorChannel $transcriptfile

Discussion

[A Pythoneer has written an article--which CL can locate if helpful--illustrating use of PyUnit with Roman numeral conversion. In the absence of any other inspiration, maybe someone wants to steal that example ...]

RS, off page topic: See also Roman numbers for Tcl routines.


davidw 2003-10-03: I am doing some work to improve tcltest, specifically to give it an API so that you can programmatically fetch information about the test results. I would love to hear what sorts of features you would find useful - feel free to drop me email.


disneylogic: It would be useful to have a callback option, perhaps named -command, that invokes an expression or proc to determine the correctness of a test. Instead of demanding an exact value to compare against or abusing return codes, this would make tcltest perform more like SUnit in Beck's testing framework.

I encountered this when I was trying to write a routine to do sampling without replacement and could not specify an exact result to compare against.

If there is already some way of doing this, please, please specify it here! I waded through the man page for a bit and couldn't find anything, particularly in the usage of 'test' section.

DGP: Can you provide an example of what a test would look like if there was a -command option for test ? Assume the reader knows nothing about either SUnit or Beck.

PYK 2023-07-14: Other options presented on this page answer this need, including the technique of making the final line in a test body an expr that returns 1 or 0.


LV: I have a programming itch and am wondering if tcltest will help me scratch this itch.

Problem: set up regression testing of a client/server command line and stdout/return code related pair of applications.

Currently, I'd love to find some help, tutorials, examples, etc. of how others are successfully doing this sort of thing with the result being test cases that are nearly trivial to read and write. I'd like to not have to teach people all of Tcl before they can write new test cases. Ideally, having a half dozen or so examples should provide enough context to write additional code.

I'd also like suggestions for best practices relating to this sort of use of tcltest.

Thanks!


fermulator: If a particular test spews out a bunch of debug information (i.e. 'test debug mode'), is there a way to redirect stdout, for a specific test, to /dev/null? I can't seem to find a way to hide stdout. It's basically:

test print_debug_true {
    Validate that when enabling "Debug Mode" in this class, that all prints
    work as expected (no errors are caught).
} -setup {
    # Enable debug mode
    $myObj setDebug true
} -body {
    # Try to run some basic methods with debug mode enabled.
    # If it fails, error out, else return PASS.
    if {[catch {$myObj doStuff} err]} {
        return $err
    }
    return PASS
} -result {PASS}

When I run the above test, stdout on the test output gets spammed with a whack of debug information, and it "skews" the high level view of which tests are running, passing, failing, etc.

PYK 2023-07-04: You might try outputChannel and errorChannel, whose argument, contrary to the documentation, is a file name.