Updated 2008-07-25 15:40:25 by dkf

RBR - I recently found myself needing to use subprocesses in Tcl in a non-trivial way, i.e., more than just running a process and grabbing its output. What I was looking for was something that gave me the power and flexibility I was accustomed to in C programs, but I did not want to have to write the code in C. It took a bit of digging, but Tcl is perfectly capable of doing this.

The Tcl Core Way

The first way to do this is using the Tcl Core. Here is a short example showing event-oriented output handling from a subprocess:
    #! /usr/bin/tclsh

    proc eventloopStart {} {
	global forever;

	set forever 1;

	if {[info commands tk] != {}} {
	    tkwait variable forever;
	} else {
	    vwait forever;
	}

	return;
    }

    proc eventloopStop {} {
	global forever;

	set forever 0;

	return;
    }

    proc timeoutStart {fh pid timeout} {
	# create a timeout handler to be executed after the given timeout
	#
	return [after $timeout [list timeoutExec $fh $pid $timeout]];
    }

    proc timeoutExec {fh pid timeout} {
	puts stderr [set msg [format \
	    "\aclosing (without capturing exit status) pipe to piped child process %ld after timeout of %ldms" \
	    $pid \
	    $timeout \
	]];

	# switch off blocking and close the pipe to the piped client process
	#
	#    it is unclear whether this still works once a close of the
	#    pipe to the client process has already been attempted
	#
	fconfigure $fh -blocking 0;

	close $fh;

	set ::errorCode [list PIPE ETIMEOUT $msg];

	return;
    }

    proc timeoutAbort {timeoutId} {
	# abort the timeout handler
	#
	#    after doesn't throw an error if the event is not valid anymore
	#
	after cancel $timeoutId;

	return;
    }

    proc logWrite {fh pid timeout} {
	if {[gets $fh line] >= 0} {
	    puts "[clock format [clock seconds] -format {%b %d %T}] $line";
	} elseif {[eof $fh]} {
	    # capturing the exit status of the piped "child"-process
	    #
	    #    without blocking, the close command won't wait until the
	    #    client has finished and won't be able to tell whether the
	    #    client caused an error while closing the pipe!
	    #
	    #    so the close command has to wait until the child process
	    #    decides to exit - there is no way to force the exit of the
	    #    child process from the Tcl core, only with a platform
	    #    specific kill command
	    #
	    #    to prevent waiting forever for the exit of the child
	    #    process, a timeout handler is created and aborted if not used
	    #
	    fconfigure $fh -blocking 1;

	    set timeoutId [timeoutStart $fh $pid $timeout];

	    if {[catch {close $fh;} result]} {
		if {($::errorCode != {}) && ($::errorCode != "NONE")} {
		    puts stderr [format \
		"\apiped child process %d exited abnormally:\n\n%s\n" \
			$pid \
			$::errorCode \
		    ];
		}
	    }

	    timeoutAbort $timeoutId;

	    eventloopStop;
	}
    }

    set fh      [open "|ls -l /tmp 2>@ stdout" r];
    set pid     [pid $fh];
    set timeout 60000; # 60 seconds

    puts "piped child process pid $pid";

    fconfigure $fh -buffering line -blocking 0;

    fileevent $fh readable [list logWrite $fh $pid $timeout];

    eventloopStart;

The major points of the above are that the process is started with stderr redirected into stdout, the pid is captured, and an output handler consumes data as it becomes available, leaving the script free to handle any other events.
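For comparison, when event-driven handling is not needed, a plain blocking read followed by close already delivers the child's exit status: on abnormal exit, close throws an error and ::errorCode holds CHILDSTATUS, the pid, and the exit code. A minimal sketch (the ls pipeline is just an example command):

    set fh [open "|ls -l /tmp 2>@ stdout" r];

    # blocking read: collect all output until the child closes its end
    set output [read $fh];

    if {[catch {close $fh;} result]} {
	# ::errorCode is {CHILDSTATUS pid code} on a non-zero exit
	if {[lindex $::errorCode 0] == "CHILDSTATUS"} {
	    puts stderr "child [lindex $::errorCode 1] exited with status [lindex $::errorCode 2]";
	}
    }

    puts -nonewline $output;

The price of this simplicity is that the script can do nothing else while read waits, which is exactly what the event-driven version above avoids.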

RBR - If you know how to capture the exit status of the above, please add that, and feel free to remove this comment afterwards. Also, do I need to use line buffering or turn off blocking to ensure I don't block on gets and that I only get full lines?

male - 2004/01/13: I added the code for capturing the exit status.

DLR - To avoid blocking forever in case the process never exits, maybe add an "after $timeout timeoutWhileClosing", where timeoutWhileClosing is a proc that closes the pipe, perhaps logs a message somewhere about it, and sets ::forever to 1.

male - 2004/01/14: I added the code for a timeout handler. But ... I don't know whether it is possible to keep working on a channel that is already being closed! Throwing an error is perhaps the only way to handle the timeout.

male - 2004/01/14: With the code below I tested whether a timeout handler can run while the close command is waiting for a blocked piped child process to exit:
    set f [open |tclsh.exe r];
    after 1000 "puts {blocked close};fconfigure $f -blocking 1; close $f; set forever 1";
    after 2000 "puts {non-blocked close after the blocked close};fconfigure $f -blocking 0; close $f; set forever 2";
    vwait forever;

Result: It is not possible, because the close command blocks the whole application including its event loop, so the after command that schedules the timeout handler never fires!
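The "platform specific kill command" mentioned in the comments above suggests one workaround: terminate the child first, so the blocking close has nothing left to wait for. A hedged sketch, Unix only, since it shells out to the external kill(1) utility (the Tcl core itself has no kill command):

    proc closeWithKill {fh} {
	# force every process in the pipeline to exit (Unix only;
	# relies on the external kill(1) utility being on the path)
	foreach childPid [pid $fh] {
	    catch {exec kill $childPid};
	}

	# close can now reap the terminated child without blocking forever
	fconfigure $fh -blocking 1;

	if {[catch {close $fh;} result]} {
	    puts stderr "piped child process exited abnormally: $::errorCode";
	}

	return;
    }

Note that a child killed by a signal is still reported through ::errorCode by close, so the abnormal-exit branch fires for the kill itself as well.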

Perhaps it would be very useful to extend the fconfigure options with a timeout option for command channels! This option would accept an integer value in milliseconds (like after does) and would cause a Tcl error if an action started on the configured command channel could neither succeed nor fail in time!

Usage example:
    # opening the pipe/command channel
    #
    set cc [open |tclsh.exe r];

    # configuring the command channel to block and to timeout after 60seconds
    #
    fconfigure $cc -blocking 1 -timeout 60000;

    # close the command channel after the exit of the piped child process
    #
    if {[catch {close $cc;} result]} {
	if {($::errorCode != {}) && ($::errorCode != "NONE")} {
	    # detect the timeout error thrown by tcl
	    #
	    if {[lindex $::errorCode 1] == "ETIMEOUT"} {
		puts stderr "timeout while closing command channel $cc";
	    } else {
		puts stderr "piped child process exited abnormally: $::errorCode";
	    }
	}
    }

Any opinions, suggestions, ...?