Creating Commands

Tcl provides several commands which create commands and script evaluation contexts.

The purpose of this page is to enumerate and classify those commands.

  • [proc] creates a procedure (function) implemented as a Tcl script, which can be defined to take a fixed or variable number of arguments, with default values for the fixed arguments.
  • [apply] creates an anonymous command [*]
  • [interp create] creates a named slave interp command
  • [interp alias] creates a named alias to another command
  • [thread::create] creates an opaque token associated with a thread
  • [namespace ensemble] creates an ensemble command from a namespace
  • [class create] creates a named class command
  • [$class create] creates an opaquely named object instance command
  • [coroutine] creates a named coroutine command
  • Tcl_CreateObjCommand (and friends, see manual pages [L1 ], [L2 ], [L3 ]) is the C API for creating a command object in a specific Tcl interpreter.

[*] apply doesn't create a Tcl command in the same way the other command generators do; it doesn't add anything to the interpreter's command tables. Rather, the "command prefix" [apply $lambda] can be used like a Tcl command in most contexts.
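
To make the list above concrete, here is a minimal sketch exercising a few of these creators side by side (the names greet, hi, util and counter, and the little lambdas, are invented purely for illustration):

# [proc]: a script procedure, here with a defaulted argument
proc greet {who {greeting hello}} { return "$greeting, $who" }
greet world                     ;# -> hello, world

# [apply]: no command is registered; the lambda value itself is "invoked"
set lambda {{who} { return "hi, $who" }}
apply $lambda world             ;# -> hi, world

# [interp alias]: a named alias to an existing command ({} = this interp)
interp alias {} hi {} greet
hi world                        ;# -> hello, world

# [namespace ensemble]: a command whose subcommands live in a namespace
namespace eval util {
    namespace export *
    namespace ensemble create
    proc shout {s} { string toupper $s }
}
util shout hey                  ;# -> HEY

# [coroutine]: a named command that resumes a suspended script
coroutine counter apply {{} { yield; set i 0; while 1 { yield [incr i] } }}
counter                         ;# -> 1, then 2, 3, ... on later calls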


  generator              cmd?  #args  invocation      destructor
  proc                   yes   any    by name
  coroutine              yes   1      by name
  Tcl_CreateObjCommand   yes   any    by name         see man page [L4 ]
  namespace ensemble     yes   any    by name         namespace delete
  class create           yes   any    by name         $class destroy
  $class create          yes   any    by name         $obj destroy
  interp alias           yes   any    by name         interp alias
  interp create          yes   any    $interp eval    $interp destroy
  thread::create         no    any    thread::send    thread::release
  apply                  no    any    by reference    implicit

NB:

  1. all of the forms above which generate a command can be cleaned up with [rename $name {}]; none of the forms which do not generate a command can be (see the sketch below).
  2. all of the command forms above can be hidden using [interp hide], after which they can only be invoked by means of [interp invokehidden].
  3. all commands may be grouped (by name) into namespaces, and are implicitly deleted en masse when their namespace is deleted.
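
For example, the three cleanup routes above look like this (a sketch; the command names demo, secret and toolbox are hypothetical):

# (1) [rename $name {}] deletes any of the command-generating forms
proc demo {} { return hi }
rename demo {}

# (2) [interp hide] hides a global, unqualified command; {} names the
#     current interp, and [interp invokehidden] still reaches it
proc secret {} { return hidden }
interp hide {} secret
interp invokehidden {} secret   ;# -> hidden

# (3) deleting a namespace implicitly deletes the commands inside it
namespace eval toolbox { proc hello {} { return hi } }
toolbox::hello                  ;# -> hi
namespace delete toolbox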

Legend for above table:

  Column       Description
  generator    what command generates an instance of this form?
  cmd?         does this form construct a command?
  #args        how many args does the constructed form take?
  invocation   how is this form invoked?
  destructor   what explicit destructor disposes of this form?

Observations

Generating a command appears to serve two purposes: it provides a means of invoking the form, and a means of controlling the resources associated with the form.

There are two exceptions: In the case of apply, resources are associated with a value, and the lifetime of the generated form is tied to that of the value. In the case of thread, explicit refcounting is employed. In all other cases [rename $name {}] destroys both the command and its associated form. In the case of namespace ensembles, that is the main method of destroying the command (that it gets destroyed when the namespace is destroyed is more of a side-effect; also note that several ensembles can have the same namespace).
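
A small sketch of the ensemble point (names hypothetical): two ensembles can share one namespace, and [rename $name {}] removes one without disturbing the other or the namespace:

namespace eval impl {
    namespace export *
    proc ping {} { return pong }
    namespace ensemble create -command ::eA
    namespace ensemble create -command ::eB
}
eA ping                         ;# -> pong
rename eA {}                    ;# destroys only this ensemble command
eB ping                         ;# -> pong; ::impl and eB survive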

Irregularities

interp-created commands duplicate functionality that is already available through [interp] for controlling interp resources. It would be more consistent with the other forms if the arguments to $interp were passed directly into the interp, as if [interp eval]'d.
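
A sketch of what is meant (the slave's name is whatever [interp create] returns):

set i [interp create]
$i eval {expr {6 * 7}}          ;# today: $i mirrors [interp]'s subcommands
interp eval $i {expr {6 * 7}}   ;# ...an equivalent spelling through [interp]
# The suggestion above would instead have the slave command behave like:
#   $i expr {6 * 7}             ;# args handed straight to the slave, as if eval'd
interp delete $i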

coroutine-created commands are the only forms restricted to a single argument.

NEM notes that coroutines are not general commands, but rather a communication and concurrency primitive. The nearest equivalents are channels, which do indeed take only a single argument (via puts).
AMG: With both coroutines and channels, you can create wrapper procs to encode/decode the communication protocol. For coroutines, the protocol is datagram-oriented: each datagram is one Tcl word. For channels, the protocol is stream-oriented.
(NEM Umm... coroutines are more like streams than datagrams).
AMG: How are they like streams? Data is delivered to coroutines exactly one packet at a time, whereas stream-oriented protocols deliver variable numbers of characters. It's possible to build one type of protocol on top of the other, but there's a fundamental difference at the low level.
NEM datagram protocols usually use fixed-size packets, require explicit addressing, need no prior setup of a "channel", and are unreliable. None of these things apply to coroutines. The coro command and yield are fairly direct analogues of puts and gets, with the exception that send/recv must occur in a strict interleaving.
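
A tiny sketch of that analogy (names invented): resuming the coroutine plays the role of puts on one end, while [yield] plays the role of gets on the other, with the two strictly interleaved:

coroutine consumer apply {{} {
    while 1 {
        set msg [yield]         ;# the "gets" side: wait for the next datum
        puts "received: $msg"
    }
}}
consumer "hello"                ;# the "puts" side: hand one datum across
consumer "world"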

Lars H: Still, now that coroutines are commands, they might as well try to act like it. Had the intention been that they should only be a "communication and concurrency primitive", then an ensemble like chan, with subcommands such as

  • coroutine create cmd ?arg ...? —— née coroutine
  • coroutine interact coro ?datagram? —— née coro ?datagram?
  • coroutine destroy coro —— née rename coro ""
  • coroutine exists coro —— née string length [namespace which coro]
  • coroutine current —— née info coroutine
  • coroutine eval coro script —— "uplevel into a coroutine", doesn't exist (yet)

would seem more appropriate.
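
One minimal sketch of such an ensemble, layered over today's primitives (the name coro and everything inside it are hypothetical, chosen so as not to clobber the built-in coroutine; coroutine eval is omitted since, as noted, it does not exist yet):

namespace eval coro {
    namespace export create interact destroy exists current
    namespace ensemble create
    variable counter 0
    proc create {cmd args} {
        # return a token naming a freshly created coroutine
        variable counter
        set name ::coro::instance[incr counter]
        uplevel 1 [list ::coroutine $name $cmd {*}$args]
        return $name
    }
    proc interact {coro args} { uplevel 1 [list $coro {*}$args] }
    proc destroy  {coro}      { rename $coro "" }
    proc exists   {coro}      { expr {[namespace which -command $coro] ne ""} }
    proc current  {}          { info coroutine }
}

set t [coro create apply {{} { while 1 { puts [yield] } }}]
coro interact $t hello          ;# prints: hello
coro destroy $t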

NEM is perfectly happy with that change, but would be interested to hear in what way coroutines are not like commands, given that they are commands. The argument instead seems to be that coroutine is like proc, so it should act like proc (rather than like a particular proc). I reject the premise of that argument (lack of a parameter list, for one).

Lars H: "Not like commands" — well, that would be CMcC's original quandry of why they should be restricted to taking only one argument, when there is no other class of commands that are so restricted. Your reason for why things should stay as they are was pretty much that one shouldn't think of coroutines as commands, and that's were I jumped on.

NEM There is the obvious class of commands that take one arg (e.g. one-argument procedures). The argument is that no command creator generates only single-arg commands, therefore coroutines should accept more than one arg. I think this is a straw man. If the argument is that the coroutine command should work more like proc and take a parameter list, then I would be interested in that. However, given that it is currently simple to combine a coro with a proc, I'd like to see some compelling use cases for conflating their functionality.

AMG: Would [coroutine create cmd ...] create a command such that [coro ?datagram?] behaves the same as [coroutine interact coro ?datagram?]? I assume coro would be the return value of [coroutine create]. Also, would it be possible to use rename to give the coroutine command a specific name? If so, would it be possible to pass the new name as the coro argument?

Lars H: In analogy with channels, I imagined coroutine create to merely return a token, not create an accompanying command. This is just a "what if coroutines had been patterned after streams rather than subroutines" fantasy, provided as a test of NEM's claim that they actually were.

I wonder about [coroutine eval]. It can currently be implemented by passing the script as the argument to the coro command, but the coroutine has to be designed to expect yield to return a script to eval. If that's not the only thing that the coroutine can do, the argument to the coro command would have to be tagged, e.g. [coro {script {puts moo}}]. If the coroutine is already designed for this, [coroutine eval] adds no new functionality; if the coroutine is not designed for this, [coroutine eval] probably doesn't make sense anyway. What happens when the eval'ed script completes? Is the interpreter result value yielded? Does this process affect the "instruction pointer" of the coroutine's execution frame? Speaking of yield, how about changing [yield] to [coroutine yield], since it's only valid inside a coroutine anyway?

coroutines are different from the other types of commands in that they cannot take more than one argument. It's possible to cram any amount of data into that one argument, but this restriction does make it impossible to use them directly in a seamless implementation of common Tcl-style ensembles, options, command prefixes, etc. To smooth over this impedance mismatch, it's necessary to wrap coroutines with a proc that encodes its args into a list, and to do similar decoding on the return value of [yield].

Hmm, something came to mind. There's no reason why this wrapper proc has to be customized for the specific coroutine. Just do this:

proc invokecoro {coro args} {
    $coro $args
}

Now any coroutine that expects [yield] to return a list can be called with multiple arguments. The "function" (in the sense described by Higher order TIP discuss and [L5 ]) is a two-element list: the word "invokecoro" followed by the word that was the first argument to [coroutine]. Here's some craziness for you:

namespace eval frobozz_impl {
    namespace ensemble create -subcommands {print fprint crash math}
    proc print {args} {puts $args}
    proc fprint {channel args} {puts $channel $args}
    proc crash {} {error "oh no!"}
    proc math {args} {expr [concat $args]}
}
proc frobozz {} {
    set return [frobozz_impl {*}[yield]]
    while {true} {
        set return [frobozz_impl {*}[yield $return]]
    }
}
coroutine wingnut frobozz
invokecoro wingnut print hello world           ;# prints: hello world
invokecoro wingnut fprint stderr hello world   ;# prints to stderr: hello world
invokecoro wingnut math 2 + 2                  ;# returns 4

I think this discussion might need to be merged into multi-arg coroutines.

NEM Yes, your invokecoro is discussed on that page as apply-list, and in the original TIP (which seems not to have been read much) as resume [L6 ].

CMcC What things 'are' is the opposite of abstraction. Programs 'are' just a bunch of bits, but we don't program in binary. One of the interesting things about computers is that they allow you to simulate more abstract things using less abstract things. In this case, if we take NEM's word, one can simulate generalised commands using coroutines. Except that, (almost) uniquely, coroutines pass only one arg.

NEM Passing a single arg is not a limitation either in theory (cf. lambda calculus) or in practice. The functionality is there and works, this discussion is just about syntax.

CMcC Given that most commands in existence take more than one argument, it is impossible to simulate such commands with a command which only takes one. This is what I consider a limitation in practice.

NEM It's not impossible, it's very easy. You just need to curry your coro command when you pass it as a callback:

# Generic procs:
proc curry {cmd args} { list resume {*}$cmd {*}$args }
proc resume {coro args} { tailcall $coro $args }

# Usage:
coroutine tracer ::apply {{} {
    while 1 {
        lassign [yield] name1 name2 op
        puts "VAR TRACE: $name1 ($name2) $op"
    }
}}
set foo 1
trace add variable foo write [curry tracer]
set foo 2   ;# each subsequent write resumes the tracer coroutine

Regarding "NEM notes that coroutines are not general commands"...

In retrospect, and with a few months to let the dust settle: I have never claimed that coroutines *are* general purpose commands, any more than I would claim that threads were. I merely believe that they should be able to *emulate* general purpose commands, as can each of the other command-creating-commands listed above. This more modest requirement, and its attendant great utility, to me make ::yieldm a self-evident good. Whatever coroutines are, it makes no sense to me to arbitrarily limit what they can do.