The 'Freiburg' project is a replacement for shared libraries on [Unix], [Windows] and Mac OS X ...

----
'''additional links'''
----
* [the 'Freiburg' project FAQ]
* [the 'Freiburg' project port to windows]
* ['Freiburg' in detail: client/server versus parent/child]
* ['Freiburg' in detail: create an snmp service/client]
* ['Freiburg' in detail: using a superserver]

----
'''goals'''
----
* convert a shared library into a ''service''
* every ''service'' uses its ''own'' object space (i.e., it is an executable)
* an application is a collection of ''services''
* the ''services'' communicate over a ''data-bus''
* the ''data-bus'' is implemented as a shared/static library
* a ''service'' is available in ''local'' or ''remote'' mode
* a ''remote'' service is started by the ''inetd'' superserver
* a ''local'' service is started with the ''system'' system call
* a ''service'' can be written in any programming language that is able to link against the 'Freiburg' library
* a ''service'' is independent of the application interface (i.e., no dynamic linking)
* once a ''service'' has been created, it can be reused from any programming language

----
'''usability study 1'''
----
I maintain a workflow application based on REMEDY-Ars. The application runs on a SUN server with a local SYBASE database. To monitor the server I wrote an ARS ''module'' that checks the health status of ARS and the workflow application. The ''module'' was developed on SUN Solaris using the gcc compiler and a non-free REMEDY-Ars API. For monitoring I use ''moods'' as the front-end and the ''module'' as the data provider. The ''moods'' application supports graphical and text-based output devices. The quality of the information provided, combined with the ease of use of ''moods'', created a ''killer'' application with strong demand for additional installations. To minimize the installation requirements and maximize the support quality, I decided to run the ''module'' on an ULTRA SPARC 10 and to ship only a generic ''moods'' application for the various UNIX and WINDOWS environments. With the ''module'' technology I did not need to port the data provider to different environments and was able to support environments for which the REMEDY-Ars supplier offers no API.

[RHS] It sounds like an implementation of a ''Service Oriented Architecture''. While each ''module'' can provide functions, they're not libraries per se, so much as ''services'' that other modules or programs can then use.

----
'''the 'Freiburg' project and the 'command line''''
----
The traditional ''unix'' command line was developed to create complex filters out of simple tools:

 tool1 | tool2 | tool3

1. How it works -> a tool is nothing more than a ''module'' with specific capabilities for manipulating an input stream. A collection of tools combined with ''pipes'' is an application. The communication between the tools is based on '''strings''' with no additional format information. The problem: ''every'' tool has to implement its own parser to split the input stream into usable data blocks.

2. Evolution step one -> instead of a plain '''string''' data stream you can use a '''structured string''' data stream (''xml'' is one example). This makes the stream easy to parse and adds the capability to handle complex data types (e.g., float, integer ...).

3. Evolution step two -> instead of one-directional communication you can use sockets, which allows ''answers'' to be sent back to the parent:

        |-tool2
  tool1-|
        |-tool3-|-tool4
        |-tool5
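As a sketch of the step-one idea (plain Tcl, not 'Freiburg' code; the field names are invented for illustration): with a structured, string-based serialization such as a Tcl dict, one generic decoder replaces the ad-hoc parser that every tool would otherwise need.

 # plain string stream: every tool parses by hand
 set line "host=srv1 load=0.75"
 foreach pair [split $line] {
     lassign [split $pair =] key value
     set rec($key) $value
 }
 puts $rec(load)                 ;# -> 0.75

 # structured string stream: here a Tcl dict; XML or YAML
 # would serve the same purpose
 set stream {host srv1 load 0.75}
 puts [dict get $stream load]    ;# -> 0.75

Step two then reuses the same serialization in both directions over a socket.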
More information will be available.

''[escargo] 21 Aug 2004'' - Couldn't the distinction between step one and step two also be described as using ''objects with string-based serializations'' being sent between your ''modules''? As long as all communicating modules agree on the mechanisms for encoding and decoding, it does not matter if your serialization mechanism is [XML], [YAML], or anything else. (In fact, if it's properly layered, you could even be going through an ssh tunnel.)

-> YES

----
'''the 'Freiburg' project and 'Tcl''''
----
This is an example linking 'Freiburg' and 'Tcl':

 package require TestOnFreiburg
 set FH  [TestOnFreiburg::Start -name TclTestOnFreiburg -tcp -inetd -host myserver -port myapp]
 set RET [TestOnFreiburg::Echo2 $FH -int2 12345]
 puts "Answer: $RET"
 rename $FH ""

description: start the ''test'' service based on 'Freiburg', using the ''module'' on server ''myserver'' and port ''myapp'', and call the procedure ''Echo2'' with the 2-byte integer 12345 as argument.

The communication between ''application'' and ''module'' is based on the following data types:

 INT2 .... 2-byte integer
 INT4 .... 4-byte integer
 INT8 .... 8-byte integer
 FLT4 .... 4-byte float
 FLT8 .... 8-byte float
 PTR ..... pointer
 STR ..... \0-terminated string
 BIN ..... binary data
 LST ..... list; can contain all of the above types

[RHS] Given the ''rename $FH ""'', it seems like ''$FH'' is a command. If so, why not make it so that you can call it directly to get results, i.e.:

 set RET [$FH Echo2 -int2 12345]

because it is just a wrapper for:

 proc TestOnFreiburg::Echo2 { Ctx args } {
     return [$Ctx send -wait -type ECO2 $args]
 }

description:
* -wait, open a transaction (i.e., expect a result)
* -type, use typed arguments (if no -type is given, Tcl will choose the right data type)
* ECO2, use the function identified by the token ''ECO2''

----
'''name resolution (e.g. finding the right service)'''
----
Name resolution is only needed for remote services. Remote services are started by the inetd daemon using a specific port on a specific host. To uniquely identify a service, only the host name and the port name are necessary. Name resolution can therefore build on already available technology such as [DNS], [NIS] or [LDAP].
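As an illustration of the remote case (a sketch only; the port number and file-system paths are invented), the classic inetd setup needs two entries: /etc/services resolves the port name used by the client, and /etc/inetd.conf tells the superserver which service executable to start for each incoming connection:

 # /etc/services -- map the port name to a port number
 myapp      4242/tcp

 # /etc/inetd.conf -- service socket-type protocol wait-flag user program args
 myapp      stream  tcp  nowait  nobody  /usr/local/bin/myservice  myservice

With these two entries, the ''myserver''/''myapp'' pair from the Tcl example above is all a client needs to reach the service.

----
[Category Design]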