The 'Freiburg' project is a replacement for shared libraries on [Unix], [Windows] and Mac OS X ... [the 'Freiburg project FAQ]

----
'''goals'''
----

   * convert a shared library into a ''module''
   * every ''module'' uses its ''own'' object-space (i.e., it is an executable)
   * an application is a collection of ''modules''
   * the ''modules'' communicate via a ''data-bus''
   * the ''data-bus'' is implemented as a shared/static library
   * a ''module'' can reside local or remote
   * if ''remote'', the ''module'' is started by the ''inetd'' server and uses tcp sockets
   * if ''local'', the ''module'' is started by the application starter and uses uds sockets
   * a ''module'' can be created with any programming language, as long as the language is able to link against the ''data-bus''
   * a ''module'' is independent of the application interface (i.e., no dynamic linking)
   * once a ''module'' is created it can be used from any programming language

----
'''usability study 1'''
----

I maintain an application based on REMEDY-Ars running a workflow application. The application runs on a SUN server using a local SYBASE database. To monitor the server I wrote an ARS ''module'' to check the health status of ARS and the workflow application. The ''module'' was developed on SUN Solaris using a gcc compiler and a non-free REMEDY-Ars API. To monitor the server I use ''moods'' as front-end and the ''module'' as data provider. The ''moods'' application uses graphic and text based output devices. The quality of the information provided, combined with the ease of using ''moods'', created a ''killer'' application with a strong demand for additional installations. To minimize the installation requirements and maximize the support quality I decided to run the ''module'' on an ULTRA SPARC 10 and only ship a generic ''moods'' application for different kinds of UNIX and WINDOWS environments. Using the ''module'' technology I did not need to port the data provider to different environments and was able to support environments with no API support from the REMEDY-Ars supplier.

[RHS] It sounds like an implementation of a ''Service Oriented Architecture''. While each ''module'' can provide functions, they're not libraries per se, so much as ''services'' that other modules or programs can then use.

----
'''the 'Freiburg' project and the 'command line''''
----

The traditional ''unix'' command line was developed to create complex filters based on simple tools:

 tool1 | tool2 | tool3

1. How it works -> a tool is nothing more than a ''module'' with specific capabilities to manipulate an input stream. The collection of tools combined with ''pipes'' is an application. The communication between the tools is based on '''strings''' with no additional format information. The problem is that ''every'' tool has to implement a parser to split the input stream into usable data blocks.

2. Evolution step one -> instead of using a '''string''' based data stream you can use a '''structured string''' based data stream (''xml'' is one example). This makes it easy to parse the data stream and adds the capability to handle complex data types (e.g., float, integer ...).

3. Evolution step two -> instead of using one-directional communication you can use sockets to allow ''answers'' to be sent back to the parent (a toy sketch follows below):

          |-tool2
 tool1--| |-tool3-|-tool4
          |-tool5

More information will be available.
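Just to illustrate the two evolution steps outside of 'Freiburg': the following toy Tcl sketch uses a ''structured string'' (a Tcl dict) instead of a raw text stream, and a socket so the ''answer'' can travel back to the parent. The port number 9999, the ''UPPER'' command and the message layout are invented for this example only; they are not part of the 'Freiburg' API.

 # toy "tool": reads one structured request per line and sends an answer back
 proc handle {chan addr port} {
     fconfigure $chan -buffering line
     while {[gets $chan line] >= 0} {
         # evolution step one: the request is a structured string (a Tcl dict),
         # so no hand-written parser is needed
         switch [dict get $line cmd] {
             UPPER   { set answer [string toupper [dict get $line data]] }
             default { set answer "unknown command" }
         }
         # evolution step two: the answer travels back over the same socket
         puts $chan [dict create status OK data $answer]
     }
     close $chan
 }
 socket -server handle 9999
 vwait forever

and the parent side:

 # toy parent: sends a structured request and waits for the structured answer
 set chan [socket localhost 9999]
 fconfigure $chan -buffering line
 puts $chan [dict create cmd UPPER data "hello freiburg"]
 gets $chan reply
 puts "Answer: [dict get $reply data]"    ;# -> HELLO FREIBURG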
''[escargo] 21 Aug 2004'' - Couldn't the distinction between step one and step two also be described as using ''objects with string-based serializations'' being sent between your ''modules''? As long as all communicating modules agree on the mechanisms for encoding and decoding, it does not matter if your serialization mechanism is [XML], [YAML], or anything else. (In fact, if it's properly layered, you could even be going through an ssh tunnel.) -> YES

----
'''the 'Freiburg' project and 'Tcl''''
----

This is an example link between 'Freiburg' and 'Tcl':

 package require TestOnFreiburg
 set FH  [TestOnFreiburg::Start -name TclTestOnFreiburg -tcp -inetd -host myserver -port myapp]
 set RET [TestOnFreiburg::Echo2 $FH -int2 12345]
 puts "Answer: $RET"
 rename $FH ""

description: start ''test'' based on 'Freiburg', using the ''module'' on server ''myserver'' and port ''myapp'', and call procedure ''Echo2'' with the 2-byte integer 12345 as argument.

The communication between ''application'' and ''module'' is based on different data-types:

 INT2 .... 2 byte integer
 INT4 .... 4 byte integer
 INT8 .... 8 byte integer
 FLT4 .... 4 byte float
 FLT8 .... 8 byte float
 PTR ..... pointer
 STR ..... \0 terminated string
 BIN ..... binary
 LST ..... list, can contain all types

[RHS] Given the ''rename $FH ""'', it seems like ''$FH'' is a command. If so, why not make it so that you can call it directly to get results, i.e.:

 set RET [$FH Echo2 -int2 12345]

because it is just a wrapper for:

 proc TestOnFreiburg::Echo2 { Ctx args } {
     return [$Ctx send -wait -type ECO2 $args]
 }

description:

   * -wait, open a transaction (i.e., expect a result)
   * -type, use typed arguments (if no -type is given, tcl will choose the right data-type)
   * ECO2, use the function defined by the token ''ECO2''

----
'''the 'Freiburg' project and 'C''''
----

The 'Freiburg' project was written in 'C' and 'C' is the native interface:

 int main(int argc, char **argv)
 {
     ContextS *context;               // the initial context
     BufferLS *largv = BufferLCreateArgv(NULL, --argc, ++argv);
     MsgqueS  *a_msgque;
     INT2      ret;

     // create the ContextS
     ErrorCheck(ContextCreate(largv, &context));

     // call Echo2
     struct SendS * const send = context->msgque->send;
     SendSTART(send);
     Send2(send, 12345);
     ErrorCheck(SendEND(send, "ECO2", NULL /* transaction not needed */));

     // do some work ....

     // read the result
     ErrorCheck(ReadHDR(context->msgque->read, &a_msgque));
     ErrorCheck(Read2(a_msgque->read, &ret));
     printf("Answer: %i\n", ret);

 error:
     ErrorSSend(CONTEXT_SAVE_ERROR(context), __func__);
     ContextDelete(&context);
     exit(EXIT_SUCCESS);
     return 0;
 }

----
'''name resolution (e.g. finding the right service)'''
----

Name resolution is only useful for remote services. Remote services are started by the inetd daemon using a specific port on a specific host. To uniquely identify a service, only the hostname and the portname are necessary. The name resolution is based on already available technology like [DNS], [NIS], or [LDAP].
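As a sketch of how such a remote ''module'' could be wired up on a classic Unix host: the service name ''myapp'', the port 7777 and the path /usr/local/bin/mymodule below are made up for illustration; the 'Freiburg' distribution may use different conventions.

 # /etc/services on ''myserver'' -- maps the service name to a port
 myapp           7777/tcp        # 'Freiburg' module used by the Tcl example above

 # /etc/inetd.conf on ''myserver'' -- inetd listens on that port and starts the module
 myapp   stream  tcp     nowait  nobody  /usr/local/bin/mymodule mymodule

A client then only needs the hostname (resolved via [DNS], [NIS] or [LDAP]) and the service name; with ''nowait'' the inetd daemon starts a fresh ''module'' process for every incoming connection.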
----

''[escargo] 14 Sept 2004'' - It's not clear to me how many instances of a ''module'' might exist. If I am using a [Unix] command shell and create a pipe between commands, I know that, because of the Unix architecture, I'm likely to share an instance of the command's object code with other users, while I still have my own instance of the command's data. It's not clear where the state for a ''module'' resides. It's not clear how modules are shared. Are modules supposed to be ''reentrant'' or ''serially reusable''? Do modules have to be locked so that they can complete processing of requests? How do they signal success or failure?

----
[Category Architecture]