Judy Sheard and I have teamed up with Silvia Edwards and Peter O'Shea at QUT for a project entitled "Changing the Culture of Teaching and Learning in ICT and Engineering: Facilitating Research Professors to be T&L Leaders". This has been funded by the Australian Learning and Teaching Council ($530K over 2.5 years).
PIAVEE has moved on: it is being re-engineered into a new system, IVEE (short for IntelliVEE, or Intelligent Virtual Educational Environment). IVEE is characterized by a more open approach to indexing, and uses the IBM UIMA framework to achieve this. It also uses a lightweight web-browser client architecture, rather than PIAVEE's heavyweight native-code clients. A new website is under construction.
PIAVEE Platform Independent Agent-based Virtual Educational Environment. Making development and dissemination of on-line course materials hassle-free.
ICTEd Project An AUTC-funded project to explore and disseminate innovative and/or best practice teaching in Information and Communications Technology.
Document Technologies XML and Stuff. These web pages that you are reading now are rendered dynamically from XML sources using a custom-built CGI script. You can read about the Python CGI script as a literate program.
Literate Programming: See the separate web page for more detail.
These projects are not classified by level, but could be undertaken at a range of levels: Honours, Masters, Doctorate; depending upon aims and scoping.
An XML-based literate programming tool has been developed. One disadvantage of using such tools, however, is that one has to keep referring back to the literate source to make modifications, and then running through the tangling phase (at least) to get a new working copy of the code.
With the advent of tools like XSugar (cite?), it becomes relatively easy to swap between XML and non-XML representations of the same source text. This project is to explore how XSugar might be used to create an (admittedly language-specific) representation of the entire literate text. The XML representation can be easily converted to a language-specific form, where sufficient information is stored in comments for the process to be reversible. This would allow development of the code to proceed in the normal way, with extraction of the XML form at various stages of development, for use as an XML literate program (generating HTML browsable images, for example).
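The round-trip idea can be illustrated with a minimal sketch (this is not XSugar, and the chunk marker syntax is purely hypothetical): literate chunks are rendered into ordinary source with reversible marker comments, from which the chunk structure can be recovered intact.

```python
import re

def chunks_to_source(chunks):
    """Render (name, code) chunks as plain source code, with marker
    comments carrying enough information to reverse the process."""
    lines = []
    for name, code in chunks:
        lines.append(f"# <chunk name={name!r}>")
        lines.append(code)
        lines.append("# </chunk>")
    return "\n".join(lines)

def source_to_chunks(source):
    """Recover the (name, code) chunks from the marker comments."""
    chunks = []
    name, body = None, []
    for line in source.splitlines():
        m = re.match(r"# <chunk name='(.*)'>", line)
        if m:
            name, body = m.group(1), []
        elif line == "# </chunk>":
            chunks.append((name, "\n".join(body)))
        else:
            body.append(line)
    return chunks
```

The essential property is that the transformation is lossless in both directions, so a developer can edit the plain-source form and re-extract the literate form at any stage.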
As part of an on-going process to better utilize the advantages of information technology in a university learning environment, a recent student project looked at ways of extracting course and unit information from publicly available handbooks, and turning this into an "expert system" that could provide course advice to students.
One outcome from this project was the automatic generation of a Prolog program tailored to a given student at some arbitrary point in his or her course. Knowing what units had been completed, what units the course structure required, and what units might be undertaken in the next semester, the program would not only supply the student with a list of possible units that could be taken in the next semester, but would also show what units had to be taken in order to complete the course with (say) a specialization in a particular field.
The project showed the feasibility of this approach, but did not deliver a workable prototype. This project would take the existing work and (re)engineer it to the point of a workable prototype that could be deployed via, say, a web interface or a faculty kiosk.
Knowledge of Prolog would be useful, although not essential. A copy of the thesis is available online at Patrick Frey's thesis.
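The core of the advice logic is easy to state independently of the generated Prolog. A minimal Python sketch (with invented unit codes and a flat prerequisite table, not the actual handbook data) might look like:

```python
# Hypothetical course data: unit code -> list of prerequisite units.
PREREQS = {
    "FIT1008": [],
    "FIT2004": ["FIT1008"],
    "FIT3155": ["FIT2004"],
}
# Units the (hypothetical) course structure requires for completion.
REQUIRED = {"FIT1008", "FIT2004", "FIT3155"}

def eligible_next(completed):
    """Units not yet completed whose prerequisites are all satisfied,
    i.e. the candidates for next semester."""
    return {u for u in REQUIRED - set(completed)
            if all(p in completed for p in PREREQS[u])}

def remaining(completed):
    """Units still required to complete the course."""
    return REQUIRED - set(completed)
```

A real system would of course derive `PREREQS` and `REQUIRED` from the handbook extraction, and would need to handle corequisites, credit-point limits, and specialization rules.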
The PIAVEE (Platform Independent Agent-based Virtual Educational Environment) project aims to provide a flexible learning management system that uses agents to perform a lot of the tedious housekeeping chores associated with such systems, such as learning object indexing, link management, access control and IP issues. (See the PIAVEE website for more information.)
Two models of the project have arisen. In the first (original) model, client-side activities are handled by an installed client, communicating with a central server through the use of agents. The second (derived) model is based upon a web browser running Java applets under the Laszlo execution system, providing an installation-free client-side model with relatively minimal reliance upon agents.
The project is to look at integrating the two models, so that they can be used interchangeably, sharing common file formats, and allowing the use of the system on platforms that have only a web browser installed (with possible performance limitations).
The most expensive phase in the software life cycle is program maintenance, during which programs typically get modified so frequently that this phase may account for up to 70% of their total development cost. A major factor in this cost is the need for software engineers to re-engineer some (rarely all) of the program code to either fix bugs, or to develop new functionality. In both of these cases, having access to the design decisions made during initial and subsequent program development can be invaluable in terms of understanding the code.
Literate Programming offers a mechanism to maintain such information. As originally proposed by Donald Knuth in 1984, the model required N*M processing tools, where N is the number of programming languages handled and M is the number of documentation presentation forms required. The idea has seen a resurgence of activity with the advent of "second generation" (language independent) and "third generation" (WEB based) literate programming tools, both of which eliminate the dependency upon N.
The use of advanced macro processing features also eases the program maintenance task, through appropriate revision control and platform independence mechanisms.
A tool to implement literate programming using XML has been developed (see research/literate/index.xml), and currently supports documentation in HTML and TeX.
In this tool, diverse documentation forms are handled by first translating the literate program to an intermediate form, which is then subsequently translated according to the document layout and presentation rules of the final form. This means that M translators for each of M different presentation forms must be developed.
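The intermediate-form pipeline can be sketched as follows (the part structure and renderers here are illustrative assumptions, not the actual XLP design): the literate source is reduced to a sequence of documentation and code parts, and each presentation form gets its own small back-end translator.

```python
# Hypothetical intermediate form: a list of ("doc" | "code", text) parts.

def tangle(parts):
    """Extract just the code parts, in order, for compilation."""
    return "\n".join(text for kind, text in parts if kind == "code")

def weave_html(parts):
    """One back end: render the intermediate form as HTML."""
    out = []
    for kind, text in parts:
        out.append(f"<pre>{text}</pre>" if kind == "code" else f"<p>{text}</p>")
    return "\n".join(out)

def weave_tex(parts):
    """Another back end: render the same intermediate form as TeX."""
    out = []
    for kind, text in parts:
        if kind == "code":
            out.append("\\begin{verbatim}\n" + text + "\n\\end{verbatim}")
        else:
            out.append(text)
    return "\n".join(out)
```

Adding a new presentation form means writing one more back end against the intermediate form, rather than one per source language.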
The difficulty with the second and third generation tools is that they are language independent, and thus cannot exploit knowledge of the language structure. This means that issues like variable recognition and markup, keyword highlighting, and so on, cannot be addressed within either the tangle or weave stages of processing.
The project is about building translators for each of the N language forms (in practice, only one or two will be required to show the feasibility of the approach). This reduces the original model from an N*M problem to an N+M one, while regaining some of the advantages of the original Knuth model.
The initial implementation of XLP uses default namespaces. This has the difficulty that the coding and documentation markups may overlap, leading to confusion and ambiguity in constructing literate programs.
This project is to implement a range of namespaces within XLP, in order to separate the coding and documentation domains, thus improving the generality of the tool.
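The effect of separate namespaces can be illustrated with a small sketch (the namespace URIs and element names below are hypothetical, not the actual XLP vocabulary): once documentation and code live in different namespaces, each domain can be extracted unambiguously even if local element names were to clash.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace URIs; the real XLP vocabulary may differ.
DOC = "http://example.org/xlp/doc"
CODE = "http://example.org/xlp/code"

literate = f"""
<program xmlns:d="{DOC}" xmlns:c="{CODE}">
  <d:section>Compute a square.</d:section>
  <c:chunk name="square">def square(x): return x * x</c:chunk>
</program>
"""

root = ET.fromstring(literate)
# Namespace-qualified lookups keep the two domains cleanly separated.
doc_parts = [e.text for e in root.iter(f"{{{DOC}}}section")]
code_parts = [e.text for e in root.iter(f"{{{CODE}}}chunk")]
```

With default namespaces only, a processor would have to guess which elements belong to which domain; the qualified form makes the intent explicit in the markup itself.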
Because the literate programming model described above is language-agnostic, it does not know how to cross-reference identifiers used within the program. It is suggested that this could be done by an independent processor, which marks up the text with information that the weave phase of the processing could then use to provide cross-referencing of identifiers. The markup processor could be language-sensitive, allowing it to apply the lexical rules pertaining to the relevant identifiers.
The project is to define the architecture of such a processor, and to implement it.
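As a starting point, the language-sensitive part of such a processor can be sketched with Python's own tokenizer (using Python as the target language is an assumption for illustration; the actual processor would plug in a lexer per language):

```python
import io
import keyword
import tokenize

def identifier_index(source):
    """Map each identifier in Python source to the line numbers where
    it occurs, using the language's own lexical rules, so keywords
    and string contents are never mistaken for identifiers."""
    index = {}
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    for tok in tokens:
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            index.setdefault(tok.string, []).append(tok.start[0])
    return index
```

The resulting index is exactly the information the weave phase would need to emit hyperlinked or cross-referenced identifiers in the woven documentation.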
Some years ago (1998, to be exact), some colleagues and I proposed a project to build a virtual campus. Unfortunately, it did not get funded at the time, and it lapsed. Pity. Monash would have gained an edge on much of the research that has happened since.
See the web page at CORE
The Australian Government has published a document on Research Conduct. Apparently it is important.