Version 74 of WubWikit

Updated 2008-08-26 11:36:08 by CMcC

[Fill in details about WubWikit, in particular, where to get the application, the differences between wubwikit and the traditional wikit, etc.]

  • Wiki News - announcements and such
  • old, obsolete problem list for this wikit


MHo:

  • With the old wiki(t.kit), it was very simple to grab a copy of the starkit, define a simple CGI wrapper, and drive it through Tclhttpd. I suppose the new WubWikit only works with Wub - as the name says ;-) So WubWikit never runs as a standalone CGI process, is that right?

CMcC You can get the code for wikit here: [L1 ] and for Wub here: [L2 ]; you'll need Subversion to grab it.

wiki database for offline use has "wubwikit.kit", which bundles all the pieces in one file.

MHo: Will there be binaries available sometime in the future?

LV: Anyone have a recipe for bringing up and running this latest generation of the wikit? Anyone created a starkit with the necessary code in it?

  • Is a downloadable copy of the wiki pages themselves available, such as a wikit.gz?

LV If you mean a downloadable copy of the wiki pages, I don't think this is available yet. If you mean a downloadable copy of the executing code, see CMcC's comments.

MHo Yes, I mean a copy of the database with the wiki pages, e.g. to make it available offline or to keep a backup for restoration in case of vandalism...

CMcC there's an interesting problem here: the wiki db is currently 95 MB, and clearly it would be a bad thing to allow too-frequent downloads (that's part of what brought the old wiki down - some clown doing multiple uncontrolled downloads, a denial of service). I'm trying to work out a good policy to balance the utility of providing the download against the cost.

Here are some policy questions to which I have no good answers:

  • What if everyone on the wiki wanted a daily db backup? That's about 90 GB/day, which won't work. What about hourly?
  • What if we put a limit on the number of downloads per day? How do we stop one of the daily (hourly?) backup crowd from grabbing one of the slots without requiring login?
  • Should we require actual login for db downloads?
  • Could/should we require everyone who downloads the db to provide it for others to further download? How would we do that? -- Lars H: This is exactly the kind of problem BitTorrent aims to solve, is it not?
  • How often should a given user be permitted to download a copy of the db? If they are doing it frequently, they're wasting enormous amounts of bandwidth.
  • Should there be some kind of db delta download? Something RSS-like: give me all the pages which have changed since <some date>.
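The delta idea in that last question could be sketched as a "changed since" query on the server side. The sketch below is purely illustrative - the page-table shape and function name are hypothetical, not WubWikit's actual internals (which keep pages in a Metakit database):

```python
from datetime import datetime

# Hypothetical in-memory page table: name -> (last-modified timestamp, content).
# The real wiki stores pages in a Metakit db; this just illustrates the idea.
pages = {
    "Wiki News": (datetime(2008, 8, 1), "announcements..."),
    "WubWikit":  (datetime(2008, 8, 26), "details..."),
    "Old page":  (datetime(2006, 1, 15), "stale..."),
}

def changed_since(pages, cutoff):
    """Return the page names modified after `cutoff` - the delta a client
    would fetch instead of re-downloading the whole 95 MB database."""
    return sorted(name for name, (mtime, _) in pages.items() if mtime > cutoff)

print(changed_since(pages, datetime(2008, 1, 1)))  # → ['Wiki News', 'WubWikit']
```

A client would then request only those pages, which is essentially what jcw's rsync suggestion below achieves at the file level.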

LV The primary issue here is a limited resource - your bandwidth. So why not upload the data, compressed, to Google, and let people download it from there? That way, the bandwidth issue is no longer yours - you upload to Google once every N period (whatever you can afford) and let people get it from there. As for the db delta: I don't have a need for that; a simple compressed .tkd file is all I need.

30jun07 jcw - The quick solution is simply to delegate: get one or more mirrors in place with a reasonably recent copy (I'd vote for daily). Then you don't have to deal with it. As for deltas: rsync is very effective for this type of data (either via ssh or as an rsync server). If you want to go fancier, check out wikit's "-update" command (written before the history got added to the db, btw).

KJN 2007-11-23: I would occasionally download a copy of the database if I was going somewhere without fast internet access - it was very useful to have this facility. A daily, or even weekly, database dump to Google would be much appreciated. Page histories could be omitted. BitTorrent would also be a good solution.

jdc 7-dec-2007: I created a SourceForge project to download the wiki db: wiki database for offline use


LV Over on comp.lang.tcl, some details about this version of the wiki are being discussed. In particular, certain URLs trigger special functionality.

        http://wiki.tcl.tk/18028
        http://wiki.tcl.tk/18028.txt
  • a URL of the form http://wiki.tcl.tk/<PageNumber>.code returns all code blocks on the page. Code blocks are separated by ### <code_block id=$blockid> ############################################################ tags. A code block is one or more lines starting with a space, or a block delimited by ====== lines.
        http://wiki.tcl.tk/15312.code
        http://wiki.tcl.tk/18028.str
        http://wiki.tcl.tk/_edit/18028
        http://wiki.tcl.tk/_history/18028
        http://wiki.tcl.tk/_ref/18028
        http://wiki.tcl.tk/_revision/18028?V=47
        http://wiki.tcl.tk/_revision/18028.txt?V=47
        http://wiki.tcl.tk/_revision/18028.str?V=47
        http://wiki.tcl.tk/_revision/18028?V=40&A=1
        http://wiki.tcl.tk/_diff/18028?V=42&D=43
        http://wiki.tcl.tk/_diff/18028?V=50&D=51&W=1
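For scripting against the .code URLs above, the separator format makes a download easy to split back into individual blocks. A small sketch, assuming only what the description above states (the exact count of trailing # characters in the separator is an assumption, so the pattern matches any run of them; the sample text is made up):

```python
import re

# Made-up sample of what a <PageNumber>.code download might look like,
# per the separator description above.
sample = (
    "### <code_block id=1> ####\n"
    "puts {hello}\n"
    "### <code_block id=2> ####\n"
    "proc greet {} {\n"
    "    puts {hi}\n"
    "}\n"
)

def split_code_blocks(text):
    """Split a .code download into {block id: code}, using the
    '### <code_block id=...> ###...' separator lines as delimiters."""
    blocks = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"### <code_block id=(\d+)> #+", line)
        if m:
            current = int(m.group(1))
            blocks[current] = []
        elif current is not None:
            blocks[current].append(line)
    return {bid: "\n".join(lines) for bid, lines in blocks.items()}

print(split_code_blocks(sample)[1])  # → puts {hello}
```

This could feed each block to a syntax checker or save them as separate .tcl files.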