Example Scripts Everybody Should Have

There are many simple example scripts that every Tcl/Tk programmer should keep around for reference. They are good examples of Tcl/Tk techniques, and they're just plain handy sometimes. (Thanks to Harald Kirsch for suggesting the collection on comp.lang.tcl.) See also The RC File.

Description

LV: today I am only partially able to access the internet - it would be useful for someone with full access to look through the following list and submit Feature Requests to http://tcllib.sf.net/ to get some of this great code into a tcllib distribution.

Unix-like utilities

Here is Perl's rendition of the same.

awk
diff in Tcl
a trifle long, so it's off on its own page.
find
see fileutil, a module in tcllib
grep
See also fileutil, a module in tcllib
ls -l in Tcl
also a trifle long, and off on its own page.
od
Dump a file in hex and ASCII
sed
sort
tr
uniq
wc
anytop
indirectly Unix-inspired and perhaps a useful stub

Miscellaneous

Debugging
Tk events
Balloon help
Re-encoding strings
Advanced split
simple socket communication
Sample Math Programs
try ... finally ...

find

proc find {{basedir .} {filterScript {}}} {
    set oldwd [pwd]
    cd $basedir
    set cwd [pwd]
    set filenames [glob -nocomplain * .*]
    set files {}
    set filt [string length $filterScript]
    foreach filename $filenames {
        # glob's .* pattern also matches "." and ".." - skip these to
        # avoid infinite recursion
        if {[string equal $filename .] || [string equal $filename ..]} {
            continue
        }
        if {!$filt || [eval $filterScript [list $filename]]} {
            lappend files [file join $cwd $filename]
        }
        if {[file isdirectory $filename]} {
            set files [concat $files [find $filename $filterScript]]
        }
    }
    cd $oldwd
    return $files
}

FP: This (slightly modified) code is now included in Tcllib:

package require tcllib 
set listOfFiles [fileutil::find $_basedir $_filterScript]

staale: I had to change the first line above (using ActiveState on Windows) to:

package require fileutil

As I'm new to Tcl, I don't know whether the original syntax works on other platforms.

arjen: It is safer to use package require fileutil, since you can install and use individual packages from Tcllib.

Example usage:

All *.txt files:

find . {string match *.txt}

All ordinary files:

find . {file isfile}

All readable files:

find . {file readable}

All non-empty files:

find . {file size}

All ordinary, non-empty, readable, *.txt files:

proc criterion filename {
    expr {
        [file isfile $filename]   && [file size $filename] &&
        [file readable $filename] && [string match *.txt $filename]
    }
}
find . criterion

All directories containing a description.txt file:

proc isDirWithDescription filename {
    if {![file isdirectory $filename]} {return -code continue {}}
    cd $filename
    set l [llength [glob -nocomplain description.txt]]
    cd ..
    return $l
}
find . isDirWithDescription

sed

Doing a full sed replacement is hard; it is usually better to write a Tcl program or procedure from scratch that implements just the functionality you need. That becomes easier if someone implements some kind of line iterator, suitable for the most common uses of sed, where all you want is to apply a substitution to each line of a file or stream, or to print only certain lines. For anything very complex, where you don't have the time to reimplement it, it is probably easier to just run sed externally with [exec].
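
A minimal sketch of such a line iterator, under the assumption that a one-argument script is applied to each line (the proc name and interface are illustrative, not part of any package):

proc foreachLine {inChan outChan lineScript} {
    # Apply $lineScript to every line read from $inChan and write the
    # result to $outChan.
    while {[gets $inChan line] >= 0} {
        puts $outChan [uplevel 1 $lineScript [list $line]]
    }
}

# e.g. uppercase stdin to stdout:
foreachLine stdin stdout {string toupper}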

However, as a little goodie, here are some alternatives to very common sed commands...

sed "s/RE/replacement/" <inputFile >outputFile

proc substFile {regexp replacement inputFile outputFile} {
    set fin [open $inputFile r]
    set fout [open $outputFile w]
    while {[gets $fin linein] >= 0} {
        regsub $regexp $linein $replacement lineout
        puts $fout $lineout
    }
    close $fin
    close $fout
}

sed "s/RE/replacement/g" <inputFile >outputFile

proc substGlobalFile {regexp replacement inputFile outputFile} {
    set fin [open $inputFile r]
    set fout [open $outputFile w]
    while {[gets $fin linein] >= 0} {
        regsub -all $regexp $linein $replacement lineout
        puts $fout $lineout
    }
    close $fin
    close $fout
}

sed "y/abc/xyz/" <inputFile >outputFile

proc transformFile {from to inputFile outputFile} {
    set map {}
    foreach f [split $from {}] t [split $to {}] {
        lappend map $f $t
    }
    set fin [open $inputFile r]
    set fout [open $outputFile w]
    while {[gets $fin line] >= 0} {
        puts $fout [string map $map $line]
    }
    close $fin
    close $fout
}

sort

Yet again, we have a little program that really does an awful lot. If we don't care about options and argument processing, it can be done as a one-liner:

puts -nonewline [join [lsort [split [read stdin] \n]] \n]

A full sort has plenty of options for controlling the various parts of this process. Mapping the whole of the functionality is tricky, but we can build the core of it and still get something useful.

proc sort {args} {
    
    ### Parse the arguments
    set idx [lsearch -exact $args --]
    if {$idx >= 0} {
        set files [lrange $args [expr {$idx+1}] end]
        set opts  [lrange $args 0 [expr {$idx-1}]]
    } else {
        # We need to guess which are files and which are options
        set files [list]
        set opts [list]
        foreach arg $args {
            incr idx
            if {[file exists $arg]} {
                set files [lrange $args $idx end]
                break
            } else {
                lappend opts $arg
            }
        }
    }
    
    ### Read the files
    set lines [list]
    if {[llength $files] == 0} {
        # Read from stdin
        while {[gets stdin line] >= 0} {lappend lines $line}
    } else {
        foreach file $files {
            if {[string equal $file "-"]} {
                set f stdin
                set close 0
            } else {
                set f [open $file r]
                set close 1
            }
            while {[gets $f line] >= 0} {lappend lines $line}
            if {$close} {close $f}
        }
    }
    
    ### Sort the lines in-place (need 8.3.0 or later for efficiency)
    set lines [eval [list lsort] $opts \
            [lrange [list $lines [set lines {}]] 0 0]]

    ### Write the sorted lines out to stdout
    foreach line $lines {
        puts stdout $line
    }
}

For example, to sort two files of numbers and write a single decreasing sequence to stdout:

sort -integer -decreasing -- numberfile1.txt numberfile2.txt

uniq

This is actually one of the simplest Unix utilities, but even so I will only produce a cut-down version here. The only option I support is -c (precede each line with the number of times it occurs in sequence).

proc uniq args {
    ### Parse the arguments
    if {[llength $args] && [string equal [lindex $args 0] "-c"]} {
        set count 1
        set args [lrange $args 1 end]
    } else {
        set count 0
    }
    # No args is equivalent to specifying stdin
    if {![llength $args]} {set args -}

    set last {}
    set line {}
    set n 0
    foreach file $args {
        if {[string equal $file "-"]} {
            set f stdin
            set closeme 0
        } else {
            set f [open $file r]
            set closeme 1
        }
        while {[gets $f line] >= 0} {
            if {[string equal $line $last] && $n>0} {
                incr n
            } else {
                if {$count} {
                    if {$n>0} {puts [format "%4d %s" $n $last]}
                } else {
                    puts $line
                }
                set last $line
                set n 1
            }
        }
        if {$closeme} {close $f}
    }
    if {$count && $n>0} {
        puts [format "%4d %s" $n $last]
    }
}
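
Typical usage might look like this (the file name is hypothetical); with no file arguments, or with -, it reads stdin:

uniq -c sorted.txt
uniq -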

tr

The following implementation of tr shows how [string map] can be used to transliterate characters. A production implementation would probably maintain a cache of the translation list going into [string map], but this implementation gives the basics. Note that the implementation didn't need to do anything special to be Unicode-aware; you can, for instance, use it to substitute Katakana for Hiragana by doing [tr \u3041-\u309e \u30a1-\u30fe $japaneseString]

# This procedure implements a [[tr]] command akin to the shell one.
proc tr { from to string } {
    set mapping [list]
    foreach c1 [trExpand $from] c2 [trExpand $to] {
       lappend mapping $c1 $c2
    }
    return [string map $mapping $string]
}

# This helper procedure takes a string of characters like A-F0-9_ and expands it to a list of characters like {A B C D E F 0 1 2 3 4 5 6 7 8 9 _}

proc trExpand { chars } {
    set state noHyphen
    set result [list]
    foreach c [split $chars {}] {
       switch -exact -- $state {
           noHyphen {
               set lastChar $c
               lappend result $c
               set state wantHyphen
           }
           wantHyphen {
               if { [string equal - $c] } {
                   set state sawHyphen
               } else {
                   set lastChar $c
                   lappend result $c
               }
           }
           sawHyphen {
               scan $lastChar %c from
               incr from
               scan $c %c to
               if { $from > $to } {
                   error "$lastChar does not precede $c."
               }
               for { set i $from } { $i <= $to } { incr i } {
                   lappend result [format %c $i]
               }
               set state noHyphen
           }
       }
    }
    if { [string equal sawHyphen $state] } {
       lappend result -
    }
    return $result
}
 
# The following call shows that rot13 works

puts [tr A-Za-z N-ZA-Mn-za-m {Guvf vf n grfg}]

# The following call shows the use of tr to eliminate certain characters 

puts [tr A-Z {} {THISthis ISis Aa TESTtest}]
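
As mentioned above, a production implementation would probably cache the mapping handed to [string map]. A minimal sketch of that idea (the proc name and the global cache array are illustrative only):

proc trCached { from to string } {
    global trMapCache
    set key [list $from $to]
    if {![info exists trMapCache($key)]} {
        # Expand the ranges only the first time this from/to pair is seen
        set mapping [list]
        foreach c1 [trExpand $from] c2 [trExpand $to] {
            lappend mapping $c1 $c2
        }
        set trMapCache($key) $mapping
    }
    return [string map $trMapCache($key) $string]
}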

There are at least two other ways to get tr that are worth mentioning: TclX implements it, and if tr is available externally, it's quick to wrap with [exec]:

proc tr { from to string } {
    return [exec tr $from $to << $string]
}

wc

The following implements the default behavior of wc, i.e. when called without any flags.

proc wc {filename} {
    foreach i {l w c} {
        set $i 0
    }

    set f [open $filename]

    while {[gets $f txt] >= 0} {
        incr l
        incr w [regexp -all {[^[:space:]]+} $txt]
        # +1 for the newline stripped by [gets]; note that this counts
        # characters, whereas wc counts bytes
        incr c [expr {[string length $txt] + 1}]
    }

    close $f
    return [list $l $w $c]
}
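
Usage is then simply (the file name is hypothetical):

puts [wc README.txt]   ;# prints a three-element list: lines words chars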

Debugging

# Like [exec], but first echoes the command line to stderr.
proc Exec {args} {
    puts stderr $args
    eval exec $args
}

DKF: If you want to do more advanced debugging (like determining exactly how a piece of code is compiled and executed) then it is more easily done using the following script:

proc tracedEval {script {execLevel 3} {compileLevel 2}} {
    global tcl_traceCompile tcl_traceExec errorCode errorInfo

    # Set the debugging levels
    set tcl_traceCompile $compileLevel
    set tcl_traceExec $execLevel

    # Execute the script, handling any failures
    set code [catch {uplevel $script} msg]

    # Restore the debugging levels to normal running.
    # I assume that you normally have these vars set to 0
    set tcl_traceExec 0
    set tcl_traceCompile 0

    # Pass on all errors thrown in the script to our caller
    return -code $code -errorcode $errorCode -errorinfo $errorInfo $msg
}

Please note that you only get results if you have a real console attached. On some platforms, this is a pain...
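
Assuming your Tcl build actually supports tcl_traceCompile and tcl_traceExec, and a console is attached, a call might look like:

tracedEval {expr {6 * 7}}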

Tk Events

— especially keypress events

This little script will echo all keyboard events to stdout:

bind . <KeyPress> {
    set dec ""; scan %A %%c dec
    puts "keysym:%K prints:%A ($dec)"
} ;#-------------- and to turn this off again:
bind . <KeyPress> {}

This proc binds a handler for a given event on ".". It can show you when mouse events occur if you use it with events like Enter or Motion.

proc unk {e} {bind . <$e> "puts $e"}
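
For instance, to watch a couple of mouse events on ".":

unk Enter
unk Motion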

If you have an event driven program, end it with tclx mainloop.
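
A minimal skeleton of that idea (the event setup is just a placeholder):

package require Tclx

# ... set up your fileevents, timers, Tk widgets, etc. here ...

mainloop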

Then you can

tcl> source program.tcl

(churnchurnchurn)

tcl> 

You have an event-driven command prompt while the program is running. You can dump arrays, invoke routines, run the profiler, all sorts of things. Mainloop is smart enough that when you run the program stand-alone

tcl program.tcl

It does the right thing.

May all your programs be event driven.

DKF: Note that people running wish on Unix platforms can simply use [send] as a way to attach a console to a running Tcl/Tk program. The distributed demo script rmt does this (and is perfectly adequate for a lot of use), or you could use Jeff Hobbs's tkcon, which is more sophisticated.

JDG: If you want a very simple command line, you can also use the interp.tcl script from the plain-text interface to JStrack, which hasn't been touched for almost a decade. Download the JStrack source from jstrack.org and look in the tracker/lib directory; interp.tcl is there. This is part of JStrack (a very ancient part, I might add), and was originally provided by Jeff Hobbs, with a lot of help from Brent Welch (et al) in expanding it. Just source it as the *LAST* line in your script.

JDG 2010-06-25: FWIW, I now use Jeff Hobbs's tkcon.tcl and the tkcon_wrap.tcl ... and really need to update JStrack to use these, as well.

Re-encoding strings

If (in Tcl 8.1 or above) Tcl has misinterpreted the character set of some string obtained from an external program or system function, and you are left with a string containing Latin-1 accented characters (\u0080-\u00FF) instead of the letters of your language, you can fix the string using

set fixed_string [encoding convertfrom $right_encoding [encoding convertto $wrong_encoding $string]]

where $right_encoding contains the name of the encoding the string really had (typically the same as your system encoding) and $wrong_encoding is the encoding Tcl assumed the string had (typically iso8859-1).
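
For instance, assuming the data was really cp1251 but was decoded as iso8859-1 (the encoding names here are just an example):

set fixed_string [encoding convertfrom cp1251 [encoding convertto iso8859-1 $string]]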

Re-encoding files

Here's a Unix-style filter that re-encodes text.

#! /usr/bin/tclsh
# (or whatever incantation works for you)

proc main {fromTransl fromEnc toTransl toEnc} {
    fconfigure stdin -translation $fromTransl -encoding $fromEnc
    fconfigure stdout -translation $toTransl -encoding $toEnc
    fcopy stdin stdout
}
main {*}$argv ; # Leverage error thrown by procs for reminder of correct syntax

Examples:

cat macClassicFile.txt | ''scriptname'' cr macRoman lf utf-8 >macOSXFile.txt
cat macOSXFile.txt | ''scriptname'' lf utf-8 cr macRoman >macClassicFile.txt

Advanced split

If you need to split a string into a list using a more complicated rule than the builtin split command allows, use the following function:

proc xsplit [list str [list regexp "\[\t \r\n\]+"]] {
    set list {}
    while {[regexp -indices -- $regexp $str match submatch]} {
        lappend list [string range $str 0 [expr {[lindex $match 0] - 1}]]
        if {[lindex $submatch 0] >= 0} {
            lappend list [string range $str [lindex $submatch 0] \
                    [lindex $submatch 1]]
        }
        set str [string range $str [expr {[lindex $match 1] + 1}] end]
    }
    lappend list $str
    return $list
}

cary: It mimics the Perl split operator, which allows a regexp as the element separator, but, like the builtin split, it expects the string to split as the first argument and the regexp as the second (optional) one.

By default, it splits by any amount of whitespace.

Note that if you add parentheses to the regexp, the parenthesized part of the separator is added to the list as an additional element, just as in Perl.
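
A couple of illustrative calls (results shown as comments):

xsplit {a  b c}             ;# -> a b c
xsplit a1b22c333d {(\d+)}   ;# -> a 1 b 22 c 333 d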

Simple socket communication

Volker: The beauty of scripting lies (among other things) in the fact that you can have lots of little programs cooperating with each other. For this, a simple means of communication is necessary. This simple server (and even simpler client) show how it works:

Server:

#!/bin/sh
# \
exec tclsh "$0" "$@"
proc serveConnection {Handle} {
    set LineLength [gets $Handle Line]
    if {$LineLength>=0} {
        #This is where you finally can do something with the data.
        #We simply put it back where it came from.
        puts $Handle "Received: $Line"; flush $Handle 
    } elseif {[eof $Handle]} {
        catch {close $Handle}
    }
}
proc acceptConnections {ConnectionFileHandle ClientAddress ClientPort} {
    fconfigure $ConnectionFileHandle -blocking 0
    fileevent $ConnectionFileHandle readable [list \
            catch [list serveConnection $ConnectionFileHandle]]
}
socket -server acceptConnections 2000
vwait Dummyvariable

The server above is good enough for single line requests. It handles all the interaction between gets, eof and fileevent correctly (IMHO) and can certainly serve several connections at once. It can cope with clients closing their sockets at any time.

If you don't specify the port number explicitly (2000 in this case), but pass 0 instead, the socket gets the first free port. You can then ask the handle for its port number and communicate it to your clients, for instance by writing it into a file.

The handle is the (ignored above) return value of the socket command.
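
A small sketch of that idea (the variable names are illustrative): let the operating system pick the port, then read it back from the server socket's -sockname, whose third element is the port.

set serverSock [socket -server acceptConnections 0]
set port [lindex [fconfigure $serverSock -sockname] 2]
puts "server is listening on port $port"
vwait Dummyvariable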

Client:

#!/bin/sh
# \
exec tclsh "$0" "$@"

set Handle [socket localhost 2000]
puts -nonewline $Handle "Hello"; flush $Handle 
puts $Handle " Dolly"; flush $Handle 
puts [gets $Handle]
close $Handle 

The client demonstrates that the server does indeed accumulate a complete line before processing it. The magic behavior of gets is largely responsible for that, and it is what makes the server so simple.

Well, that's it! :-)


Pelle Otterholm: Here is another version of the server part that you might want to use.

#!/usr/bin/env tclsh

proc done {} { set ::done 1 }

proc process {fd ip port} {
    if {!([eof $fd] || [catch {gets $fd line }]) } {
        if {[string length $line] == 0} { return }
        if {[string match exit $line]} { close $fd ; return }
        if {[string match done $line]} { close $fd ; done ; return }
        puts $fd "input : $line"
    } else {
        catch { close $fd }
    }
}

proc accept {fd ip port} {
    fconfigure $fd -blocking 0 -buffering line
    fileevent $fd readable [list process $fd $ip $port]
}
proc listen {port} {
    socket -myaddr 127.0.0.1 -server accept $port
    vwait done
}

listen 2000

AMG: Shouldn't the [eof] come after the [gets]? The [eof] man page [L1 ] says:

"Returns 1 if an end of file condition occurred during the most recent input operation on channelId (such as gets), 0 otherwise."

Also, the return value of [gets] should be checked so that you can differentiate between receiving a blank line and hitting end-of-file. Two-argument [gets] returns -1 at end-of-file or when a nonblocking channel does not yet have a complete line available (use [eof] and [fblocked] to tell the two cases apart), and 0 for a blank line.
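
A minimal sketch of the reader pattern described above (an assumed rewrite, not code from the page):

proc process {fd ip port} {
    if {[gets $fd line] >= 0} {
        puts $fd "input : $line"
    } elseif {[eof $fd]} {
        catch {close $fd}
    }
    # otherwise: nonblocking channel with no complete line yet - just wait
}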

Pelle Otterholm: The code is based on the same coding idea as Simple TCP/IP proxy; I have also changed it to line buffering and moved gets $fd line into the check, getting: "if {!([eof $fd] || [catch {gets $fd line}]) }"


DKF: There is a more advanced version of this on the telnet page.

SDW has put together an example of Remote Script Execution using safe interpreters.



Check out the verbose_eval script over on the eval page!