The project requires skills in parallel computing, Unix and C, as well as familiarity with existing debuggers such as gdb.
Some more details can be obtained from [http://www.dgs.monash.edu.au/research/guard/].
Typogenetics, which is short for Typographical Genetics, was developed by D.R. Hofstadter to capture some of the concepts of molecular genetics in a typographical system. The system involves manipulating strings, called strands, consisting of four characters A, C, G, and T. These strands are operated on by enzymes which are sequences of basic operations. In turn, the enzymes are created from strands.
So given a strand we can create an enzyme, which can then be applied to the original strand to produce a new generation of strands. These new strands can then be used to create new enzymes, which can in turn be applied to the current generation of strands to produce the next generation, and so on. If at any stage we have two copies of the original strand in one generation, we say that the original strand is self-reproducing. Conversely, if at any stage we are left with no strands, we say the original strand is self-destroying.
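To give a flavour of the mechanics, the sketch below (in C) decodes a strand into a sequence of enzyme operations by reading it two bases at a time. The duplet-to-operation table shown is a cut-down illustration only, not the full system; Hofstadter's complete table (and the use of AA as a punctuation mark between genes) should be consulted for the real rules.

    #include <stdio.h>
    #include <string.h>

    /* Map a duplet of bases to an enzyme operation.  Cut-down illustrative table;
     * unknown duplets are treated as "nop" here. */
    const char *duplet_to_op(const char *d)
    {
        static const struct { const char *duplet, *op; } table[] = {
            { "AC", "cut" }, { "AG", "del" }, { "CA", "mvr" }, { "CC", "mvl" },
            { "CG", "cop" }, { "GA", "ina" }, { "GC", "inc" }, { "TA", "rpy" },
        };
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strncmp(d, table[i].duplet, 2) == 0) return table[i].op;
        return "nop";
    }

    int main(void)
    {
        const char *strand = "CGGATACTAAACCGA";   /* any leftover single base is ignored */
        for (size_t i = 0; i + 1 < strlen(strand); i += 2)
            printf("%s ", duplet_to_op(strand + i));
        printf("\n");
        return 0;
    }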
Interesting questions are:
The aim of this project is to investigate the field of Typogenetics, and to answer some of these questions. In particular it is envisaged that the project will involve the development of a system that allows a user to:
Hunt the Wumpus was an early computer game created by Gregory Yob. Since then it has been used in AI as a testbed environment for intelligent agents. The aim of this project is to use this game as a testbed for developing techniques that learn users' profiles. The reason this game has been chosen is that it is a simplified version of a domain we are planning to investigate in our work on user profiles. This project will involve creating agents that can operate in an existing Wumpus World simulation written in C++, and implementing a system that performs user profiling. These agents will need to be able to
There is a research project to develop a data mining software library and graphical user interface. A mixture of Java and C is being used. Graphical components will be written in Java (Sun Microsystems). Numerical routines will be written in Java or C as appropriate and this may include adapting existing code. There may be opportunities for honours projects for good programmers in this area. The ability to work with others is important.
The project is to write a translator from Java to C. It would then be possible to get the Software Engineering advantages of writing in Java with the speed of C. There would be losses in some areas, e.g. array bounds checking and security. The aim is to retain as much as possible of the structure of the Java source program in the translated C, rather than simply treating C as a `machine code'. There may well be java2c translators in existence but this would not necessarily abort the project; a quick search failed to find any public translators.
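As an illustration of the kind of structure-preserving translation intended, the sketch below shows one plausible (hypothetical) mapping of a tiny Java class to C: fields become a struct, and an instance method becomes a function taking an explicit receiver pointer. An actual translator would also have to deal with inheritance, garbage collection and exceptions.

    /* Java source (shown as a comment):
     *
     *     class Point {
     *         int x, y;
     *         int norm2() { return x * x + y * y; }
     *     }
     */
    struct Point {
        int x, y;
    };

    /* The instance method becomes a function with an explicit receiver pointer. */
    int Point_norm2(struct Point *self)
    {
        return self->x * self->x + self->y * self->y;
    }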
Previous experience and demonstrated ability in interactive computer graphics is a necessary prerequisite for this project. The aim is to provide a 3-D world, distributed over many servers and clients, and to investigate and solve the problems of maintaining consistent views and time-lines for the users. Jamie Cameron's 1995 "indoors" world may provide some inspiration [http://www.cs.monash.edu.au/hons/projects_1995/Jamie.Cameron/index.html]. In 1997 Eugene Ware produced an "outdoors" world programmed in Java and VRML (not available in '95). The aim is to develop this world further.
The aim of the project is to build a graphical environment in which objects such as graphs and networks can be interactively edited and manipulated. Information about the object, such as vertex degrees, cycle lengths, diameters etc. is to be displayed, and this displayed information must be updated as changes are made. The system must enable the user to define their own type of combinatorial object, and to specify what sort of information is to be displayed and maintained about the objects. Object-oriented design will be important. Programming will be in Java.
This project will involve developing a multi-threaded servlet in Java and a client applet with a GUI. The servlet is required to generate and assess multiple choice questions using a questions database. The questionnaire is then forwarded to the client applet in a form determined by servlet settings and user privileges. The applet sends user responses back to the servlet for evaluation. This system allows simultaneous use of the questions server by multiple users whilst maintaining data security and verifying user identity.
Although the project will involve considerable Java programming (around an existing framework) the research will be mainly concerned with electronic assessment methods that seek to address inherent problems associated with multiple choice tests.
The assessment methods should foster the students' cognitive abilities and measure their true state of knowledge. One of the assessment methods for the multiple choice responses will incorporate a scheme which allows students to express their level of confidence in their answers and which discriminates between students' knowledge states. This method is aimed at encouraging honesty.
To enhance the feedback to students and academics, the scoring system not only calculates a mark but also a self-estimation factor and a trust factor. The self-estimation is the overall mark that the student expects to achieve based on the given responses. If the student's confidence levels are appropriate and justified, the self-estimation will closely match the actual overall mark. The trust factor is a measure of the trustworthiness of the given information. A trust factor of 100% indicates that optimal results can be obtained by completely adopting the student's answers. Conversely, a trust factor of 0% indicates that no information about the actual correct answers can be gained from the student.
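Purely by way of illustration, the sketch below (in C) computes an overall mark and a self-estimation from a list of responses with stated confidences. The per-question scoring function is a placeholder, not the scheme to be developed in the project, and the trust factor is omitted.

    /* Hypothetical confidence-based scoring.  Each response carries a stated
     * confidence c in [0,1]; the mark for a question depends on correctness and c,
     * and the self-estimation is the mark the student would expect if their
     * stated confidences were accurate. */
    double question_score(double c, int correct)
    {
        return correct ? c : -(c * c);   /* placeholder: confident errors are punished */
    }

    void marks(const double *conf, const int *correct, int n,
               double *actual_mark, double *self_estimation)
    {
        *actual_mark = *self_estimation = 0.0;
        for (int i = 0; i < n; i++) {
            *actual_mark     += question_score(conf[i], correct[i]);
            *self_estimation += conf[i] * question_score(conf[i], 1)
                              + (1.0 - conf[i]) * question_score(conf[i], 0);
        }
    }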
The application will be Web based, with protection against unauthorised access through server-side and Java-implemented security.
Many of the programming mistakes made by novice programmers result from their inability to correctly translate an algorithm to code. This project aims to produce software which assists them in overcoming such problems by automatically translating their erroneous code back to algorithmic language (with annotations indicating possible errors).
In taking on this project all of the following would be of definite advantage:
Hypercard, Supercard, and Metacard are all hypermedia, stack-based, WYSIWYG, persistent, event-driven, rapid software development systems. Each provides extensive customizability through a proprietary scripting language.
The aim of this project is to create a free implementation of a similar application (LlamaCard), using Perl as the scripting language and various standard Perl libraries to help implement the GUI, so that the entire package is platform-independent.
Experience with one of the above applications and with Perl programming would be highly advantageous.
DEMUN (Distance Education Mark-up Notation) is a tag-based text annotation system for building complex, highly interlinked, interactive WWW-based course materials from simple ASCII text files.
The aim of this project is to build a DEMUN-to-HTML converter that is capable of automatically and incrementally generating:
Implementation would probably be in the Perl language, so experience in Perl would be a significant advantage.
This project builds on some existing work done in C++, in which a different and (hopefully) better approach was tried. The project uses an AI search to piece together data from the Human Genome Project. There is a need to view the results of the search, so a GUI using either Java or tcl/tk will probably also be developed. (No knowledge of biology/biochemistry is required!)
Joyce-Linda is a language that forces the use of parallel processes. The Linda communication model provides a convenient mechanism for many sorts of parallel programs. This project involves changing the Joyce-Linda compiler and runtime support code to run with different packages, including running across a network.
This project involves the design & construction of an animation tool for the production of visual effects utilizing large numbers of tiny objects known as 'particles'. Particles have been used in the past to model clouds, steam, fog, fire, sparks and similar real-world phenomena. In this project, not only are particles subject to global external forces such as gravity and the effects of wind, but the particles may also interact with one another in a manner akin to chemical interactions.
For example, a number of blue particles may hang freely and unchanging in space. When a red particle is introduced to the system, the blue particles are pulled towards it. When they come into contact with the red particle they too become red and are thrown violently away from the other red particles in the vicinity, producing a dazzling blue implosion followed by an explosion of red streaks.
The student should have a thorough understanding of C or C++ and should have completed 3rd year graphics. It is suggested that the student should also participate in the Graphics and Artificial Life course at Honours level (CSC415).
SUGGESTED INTRODUCTORY READING:
Much work has been done at Monash in the areas of unsupervised and supervised learning by Wallace, Dowe and others for uncorrelated data. Some work has also been done at Monash using these techniques for correlated data. This project will head in that direction.
A reasonably strong mathematical background will be required. Programming will almost certainly be in C. This project is closely related to a project done by Russell Edwards in 1997. It has the potential to go in either parallel or tangential directions. There is no need to have done any 3rd year subject on AI. However, if doing this project, you are strongly encouraged to take my 4th Year Hons. 2nd semester subject CSC423 Learning and Prediction, which is the only Hons. subject teaching any amount of MML.
Finite state machines can be used to model grammars or syntax. Some bodies of data can reasonably be assumed to have come from some underlying, but unknown, grammar (or finite state machine). When the data is of great interest to us, we will be interested in inferring the finite state automaton from which the data came. This project will use the Minimum Message Length (MML) principle and will be quite mathematical in nature. It will build upon work done at Monash by Wallace and Georgeff (1983) and more recently by some of their collaborators.
Artificial sample data will initially be generated from a known model, so that we can see how well the program recovers that model. Real sample data to be analysed will come from DNA, proteins and speech patterns. There is no need to have done any 3rd year subject on AI.
A strong mathematical background will be required. Programming will almost certainly be in C. If doing this project, you are strongly encouraged to take my 4th Year Hons. 2nd semester subject CSC423 Learning and Prediction, which is the only Hons. subject teaching any amount of MML.
Attaching to predictions an indication of how certain the predictor is, and rewarding such predictions properly, are important issues in many fields. This project focuses on football tipping because it is topical, accessible and may be useful in teaching. The project is partly software engineering, and partly implementation of ideas concerning prediction and inference.
For the last three years, the Department of Computer Science has run a football tipping competition in which participants must nominate, for each game, not only which team they think will win, but a probability that that team will win. Tips are scored according to a simple formula, and the theory is linked to information theory and gambling theory. This year the competition was extended to allow participants to nominate a mean and standard deviation for the margin of each game. Again, there is a soundly based way to score such tips. The competition is currently run using software written in C++ (with a curses interface) by John Hurst. The software is written as a literate program (nutweb), and managed by a version control system (RCS), currently at www.csse.monash.edu.au/~footy/ .
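One plausible scoring rule of the information-theoretic kind described (the exact formula used in the competition may differ) rewards a probabilistic tip by how much better it is than a "no idea" tip of 0.5: a tip of probability p for a team earns 1 + log2(p) bits if that team wins and 1 + log2(1-p) bits if it loses, so a 0.5 tip scores zero and confident wrong tips are punished. A minimal sketch in C:

    #include <math.h>

    /* Hypothetical information-theoretic tipping score, in bits relative to a
     * 50/50 guess.  p is the probability given to team A, and must lie strictly
     * between 0 and 1. */
    double tip_score(double p, int team_a_won)
    {
        return 1.0 + log2(team_a_won ? p : 1.0 - p);
    }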
The aim of the project is to implement new probabilistic football tipping ideas in software, and to extend the software so that the competition can be run over the World Wide Web.
In more detail, the main tasks of the project are to:
Neural networks (NNs) are a technique in machine learning and "data mining" which are often useful but which have some notoriety for over-fitting the training data and then predicting comparatively poorly, for needing a lot of human tuning and for being "black box" predictors which obscure any semblance of an underlying model. This project is concerned with using Minimum Message Length (MML) to correct these problems for relatively simple neural networks. MML is robust to over-fitting noise, and fits explicitly parameterised models which predict well. The project will entail using MML to model the logistic or sigmoid function in NNs under the guidance of the supervisor, and to use MML to balance the cost of the number of hidden layer nodes and the inter-connection weights with the goodness of fit to the data. Some acquaintance with NNs and a reasonably strong mathematical or statistical background are essential.
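As a rough illustration of the two-part message idea (not the MML coding scheme to be derived in the project), the sketch below costs a single logistic unit as the bits needed to state its weights, at an arbitrary fixed precision, plus the bits needed to encode the 0/1 training targets under the fitted probabilities.

    #include <math.h>

    #define BITS_PER_WEIGHT 8.0   /* hypothetical precision cost per stated weight */

    static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

    /* Sketch of a two-part message length for a one-input logistic unit:
     * part 1 states the two weights, part 2 encodes the 0/1 targets under the model. */
    double message_length_bits(double w0, double w1,
                               const double *x, const int *y, int n)
    {
        double model_bits = 2.0 * BITS_PER_WEIGHT;   /* crude; real MML derives this cost */
        double data_bits = 0.0;
        for (int i = 0; i < n; i++) {
            double p = sigmoid(w0 + w1 * x[i]);
            data_bits += -log2(y[i] ? p : 1.0 - p);
        }
        return model_bits + data_bits;
    }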
According to both the principles of MML and of Kolmogorov complexity, the most likely theory to infer from a body of data is that which gives rise to the shortest two-part compression of the data. However, on occasions, this will not give rise to a simple explicit model of the variable we wish to predict explained as a function of the other, explanatory, variables. One of many cases in point is when the variable of most interest to us is unobserved but known to come from a distribution between 0 and 1; but we do observe a second variable which functionally depends upon it. We have to balance our prior beliefs regarding the likely values of the variable of interest against the values which would have been more likely to cause the observed value of the second variable.
D L Dowe and C S Wallace (1998). Kolmogorov complexity, minimum message length and inverse learning, abstract, page 144, 14th Australian Statistical Conference (ASC-14), Gold Coast, Qld, 6 - 10 July 1998.
Wallace, C.S. and D.L. Dowe (1999). Minimum Message Length and Kolmogorov Complexity, to appear, Computer Journal.
Reading : D L Dowe and A R Hajek (1998). A non-behavioural, computational extension to the Turing Test, pp101-106, Proceedings of the International Conference on Computational Intelligence & Multimedia Applications (ICCIMA'98), Gippsland, Australia, February 1998.
Relevant web site.
Reading : 1) C. S. Wallace, A long-period pseudo-random generator, Tech Rept #89/123, Dept of Computer Science, Monash University, February 1989.
2) Find Random Number Generator by doing the following:
1. Go to here
2. Click on "C Programming" on the left of the screen
3. Click on "Code Snippets"
4. Under the heading "Portable functions and headers", click on "Random number functions"
5. Click on "Rand1.C"
The Efficient Markets Hypothesis (EMH) asserts, even in its weakest and most innocuous forms, that in a market with no insider trading and with everyone having access to the same public knowledge, no trading strategy can be expected to outperform the market average in the long term. This misconception persists at least in part because of the intractability of finding patterns in data that appear random both on the surface and even after some analysis. Dowe and Korb (1996) have argued why markets must almost always be inefficient, and Dowe, Korb and Stillwell (in preparation) are demonstrating with empirical real-world data that practice indeed seems to follow theory. In this project, we instead create artificial, simulated markets with market agents trading with some strategies and price being driven by a combination of some underlying function and the trading strategies of the participant agents. The student will use Minimum Message Length (MML) or related machine learning techniques to discover inefficiencies in such artificial markets and also to discover trading strategies which beat the market average in the long run.
Graphs and networks are useful models of many different kinds of information: electronic circuits, software engineering diagrams, communications networks, the WWW etc. Many applications require such graphs to be laid out in space in some way, perhaps for display as a diagram or for implementing as a circuit in several layers in VLSI. In some such cases, it is important to represent the graph in 3 dimensions, and also that any angles where edges meet or bend be right angles. Such a drawing is called a 3D orthogonal graph drawing.
For an example of such a drawing, see http://www.cs.newcastle.edu.au/~richard/phd/k7-wood.gif. (The graph drawing underlying this picture was found by local PhD student David Wood, and the picture itself was made by Richard Webber (PhD student, University of Newcastle).)
The aim of the project is to develop a tool which allows 3D graph drawings to be manipulated interactively. The tool will use the C++/Java constraint solving toolkit QOCA to ensure that the constraints of orthogonality etc are properly handled. A graphical editor for 3D orthogonal graph drawings will need to be written, probably in VRML. (Prior experience in VRML is not necessary.)
The tool should provide an interesting case study in the application of QOCA, and should be useful in current research on 3-dimensional orthogonal graph drawing. (Indeed, its desirability became apparent in research by David Wood, supervised by Graham Farr.)
Strict Minimum Message Length (SMML) is a criterion (due to Wallace) by which models of data can be assessed. It can be shown to have many desirable properties, and is important in the theory of machine learning. It is, however, very difficult to compute, except in the simplest cases. In the binomial case, we have an exact and efficient algorithm, but the trinomial case is significantly harder, and may well be NP-hard. The aim of this project will be to implement and study some algorithms for cases such as the binomial, trinomial and normal, and thus hopefully shed light on the theory. A strong mathematical background is required.
For many purposes, it is convenient to regard DNA as a long sequence of symbols, where each symbol is one of C, G, A, T. It is currently not possible for scientists to determine the entire sequence for complex life forms. Small fragments of the sequence can be found, but then there is the problem of piecing the fragments together in the correct order to determine the entire sequence.
This project looks at a class of algorithms for this problem based on a divide-and-conquer approach. Algorithms will be implemented and studied experimentally.
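Whatever the overall divide-and-conquer strategy, a basic subroutine is measuring how well the end of one fragment overlaps the start of another. The sketch below (plain C for illustration; the project itself will use C++ and the Leda package) finds the longest exact suffix-prefix overlap; real sequencing data would need an error-tolerant version.

    #include <string.h>

    /* Length of the longest suffix of a that exactly matches a prefix of b. */
    int overlap(const char *a, const char *b)
    {
        int la = (int)strlen(a), lb = (int)strlen(b);
        int max = la < lb ? la : lb;
        for (int k = max; k > 0; k--)
            if (strncmp(a + la - k, b, k) == 0)
                return k;
        return 0;
    }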
Some background in discrete mathematics would help.
Programming will be in C++ and will build on the Leda package for combinatorial computing.
The project is part of work I am doing with Prof. Peter Eades (University of Newcastle).
The sort of games considered here are board games like Chess. A long term aim is to be able to take as input a record of the chess games of (e.g.) Garry Kasparov (World Champion), and infer something (but not everything!) about his chess playing strategy. This may be an ambitious goal, but we propose to move toward it in achievable steps. Initially, a program capable of inferring very simple aspects of strategy would be developed, and tested using records of games played by appropriately simple computer players. We have developed some basic methods for doing the inference, and expect to improve on them. Several types of inference are possible; among these, we intend to apply the principles of Minimum Message Length (MML) inference. This project builds on a 1997 project by Tony Jansen.
Programming will almost certainly be in C. If doing this project, you are strongly encouraged to take the 4th Year Hons. 2nd semester subject CSC423 Learning and Prediction, which is the only Hons. subject teaching any amount of MML. (See http://www.csse.monash.edu.au/~dld/chess.html.)
(It is not clear whether these projects will run. See Graham Farr if interested.)
Image tracking is a computationally expensive process. In recent times there have been a number of attempts to implement image tracking in hardware. Some recent results are very encouraging, leading us to believe that an implementation in FPGA, using VHDL and the Mentor Graphics ECAD software, will be quite successful. The hardware component of this project would involve the acquisition of an off-the-shelf FPGA board to which the memory (to hold the images) has to be interfaced.
SGML (or "Standardised General Markup Language") is a standard for defined document class and structure, and there have been a number of tools to automate the processing of such documents (see for example: http://www.sgmltools.org).
Unfortunately, SGML is not suitable for delivering documents to the web, and a new standard, XML (the "eXtensible Markup Language"), has arisen. XML is a subset of SGML, and is designed specifically to facilitate the delivery of documents on the web. For example, automatic conversion of XML documents to HTML is possible, as is conversion to LaTeX/TeX, groff, rtf or even just plain ASCII text!
These all depend upon two basic tools, modelled fairly closely on compiler technology: a parser that determines whether a document is "well-formed" and "valid"; and a back-end that translates to the required document form. Just as with compiler technology, much effort has been invested in making these two processes table driven, and this project is about investigating these two aspects.
DTD, or "Document Type Definition" files describe the structure of documents, and how they are to be parsed. DSSSL, or "Document Style Semantics and Specification Language" files are about how the documents are rendered into presentation form (TeX, HTML, etc.). In addition, there are other standards such as XLL ("XLink Language") that provide models for document analysis, search and retrieval operations to the web.
The project is to investigate the use of XML as a document paradigm, to define appropriate DTD and DSSSL prototypes, and to write code to produce working models of the tools described above. Real life examples will be drawn from the university environment. A knowledge of HTML (and to a lesser extent, LaTeX, TeX, and rtf) formats would be helpful, although not essential.
The most expensive phase in the software life cycle is program maintenance, during which programs typically get modified so frequently that this phase may account for up to 70% of their total development cost. A major factor in this cost is the need for software engineers to re-engineer some (rarely all) of the program code to either fix bugs, or to develop new functionality. In both of these cases, having access to the design decisions made during initial and subsequent program development can be invaluable in terms of understanding the code.
Literate Programming offers a mechanism to maintain such information. Originally proposed by Donald Knuth in 1984, the idea has seen a resurgence of activity with the advent of "second generation" (language independent) and "third generation" (WEB based) literate programming tools. The use of advanced macro processing features can also ease the program maintenance task, through appropriate revision control and platform independence mechanisms.
This project is about exploring these issues, and developing a state-of-the-art literate programming tool.
This project involves extending and improving BPP. Possible tasks include:
Robots are generally programmed by leading them through the required motion with a teach pendant. Off-line programming, which is less common but available on two of our industrial robots, permits the bulk of the programming to be written without access to robot motion. Until now, viewing the operation of a robot has required presence at the robotic cell; by using image data via the Internet it is possible to observe and control robot tasks from any connected terminal.
Users will require images of the robot work cell for controlling moves and monitoring robot execution, and an interface to the robot server for loading, compiling and controlling the robot. The telerobotic system (Figure 1) shows how a request from an operator to the HTTPD server launches a CGI script that communicates with the robot and image servers to perform the requested move and obtain new pictures. The robot, robot controller and robot server are existing equipment.
It is proposed to handle user interaction using the Common Gateway Interface (CGI): physical devices are controlled by reading input from an HTML form, performing the required operation, and feeding the result back on a returned HTML page. Java offers possibilities for increasing the sophistication of the interface.
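A minimal CGI program of this kind, written here in C and purely illustrative (the command handling and image URL are assumptions), reads the requested command from the environment and returns an HTML page containing the latest work-cell image:

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal CGI sketch: read the robot command from QUERY_STRING (here it is
     * only echoed; a real script would drive the robot and fetch a new image),
     * then return an HTML page to the operator. */
    int main(void)
    {
        const char *query = getenv("QUERY_STRING");
        printf("Content-type: text/html\r\n\r\n");
        printf("<html><body>\n");
        printf("<p>Received command: %s</p>\n", query ? query : "(none)");
        printf("<img src=\"/images/workcell-latest.jpg\">\n");   /* hypothetical image URL */
        printf("</body></html>\n");
        return 0;
    }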
The first phase will be the development of the image server, including a three-camera system connected to it to provide three-dimensional viewing of one robot cell. This work will present images on an HTML page for teleautonomous control using discrete commands.
This work also involves interface controls for trajectory control and motion kinematics, and an off-line programming facility attached to the HTTP server. The server must support multiple observers and off-line programmers while only one user is active at a time.
Currently, when performing analytical flutter clearances for the RAAF, a technique is used which, whilst giving an accurate estimate for the flutter speed, can result in non-physical solutions remote from a flutter point. It would be desirable to implement a technique which gives realistic solutions throughout the range of desired airspeeds for two reasons: i) greater faith in the results in the mind of the customer; and ii) such physically realistic solutions can be used in flight-flutter trials to monitor the performance of the mathematical model well before a potential flutter region is approached.
A method of solution which achieves the above aims does exist, but, for a number of reasons, it is computationally intensive and, therefore, time consuming. One of the main difficulties with this method is that sufficiently small increments in airspeed must be chosen such that the results at previous airspeeds can be used to predict a possible solution from which to start the iterations for the next airspeed. The question is then, what is a sufficiently small increment? In early implementations of the technique, a uniform, very small increment was used for all of the roots to be solved. What is really sufficiently small, for any given mode, however, depends very much upon the behaviour of that mode in the region of interest. In a previous Honours project, a student developed some algorithms to vary the airspeed increments for any given mode as the airspeed is increased. This honours project will build on that work by:
A technique has been developed by which creating the mathematical model is treated as a highly complex, non-linear optimisation process. All of the physical parameters required to define the model can be considered as unknown. Depending upon the model, the number of properties to be determined can vary from a few to hundreds. The best approach which has been found to-date to tackle such a problem is by use of Genetic Algorithms (GAs).
Genetic Algorithms have been used with considerable success at AMRL on the problem of deriving optimal structural dynamic models for aircraft structures. Such a problem involves optimising of the order of 100 parameters (i.e. carrying out an optimisation in 100-dimensional space). Experimental data are collected during the shaking of an aircraft in a ground vibration test (GVT) and then a GA is used to create a model which gives good correlation with these data. Inherent to such a process is the issue of model complexity (i.e. how many parameters can be determined given a set of experimental data); this is also addressed in the model optimisation process, but outside of the GA.
A means of distributing such tasks around a network has been developed at the Oak Ridge National Laboratory in the US. This system is called a "Parallel Virtual Machine" (PVM) (http://www.epm.ornl.gov/pvm/pvm_home.html). The proposed project would involve investigating how such a PVM could be set up for a collection of PCs running Windows NT and Windows 95/98, and hopefully demonstrating a simple parallelised process on two or more such machines. The next step would be to investigate how the GAs described above could best be distributed amongst a range of machines over a network, where the machines will have differing processor speeds and differing amounts of spare processing capacity. A means by which the system can robustly handle machines coming on and off line during processing would also be required.
How much of the above proposal would be achieved over the course of a final year project will clearly be dependent on the problems which are encountered during the course of the project.
This project would involve several extensions to the basic existing hardware and software implementation.
This project would suit a BCSE student who has completed CSC2091/3091 Artificial Intelligence and who can program in C. In addition, a non-engineering student could also do this project, focusing on the software aspects.
Andrew Paplinski will be on sabbatical in 2nd semester so will not be offering a project in 1999.
If we select a pixel at random from a picture, it is highly likely that the pixel will be in the same segment as its nearest neighbouring pixels. The next most likely outcome is that the pixel and its nearest neighbours lie in no more than two segments; that is, the block straddles the boundary between two regions. If a pixel and all its neighbours are in the same segment, then we may use all the pixel values in subsequent processing of the current pixel. If we determine that some neighbouring pixel values are not in the same segment as the current pixel, then we should exclude those pixels from processing of the current pixel.
The aim of this project is to investigate ways of looking at small blocks of pixels and deciding whether the pixels in the block all belong to the same segment or whether they belong to at least two different segments. Minimal Message Length (MML) Inductive Inference gives us a way of choosing between two possible models or explanations for a body of data. In this case, MML techniques will be used to decide whether a block of pixels can be best described by saying that all pixels in the block belong to the same segment or they belong to at least two different segments.
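The sketch below gives a crude version of this comparison in C, assuming Gaussian noise of known spread sigma and a fixed (arbitrary) cost per stated parameter; a proper MML treatment would derive these costs rather than assume them.

    #include <math.h>

    #define PI 3.14159265358979323846
    #define PARAM_BITS 8.0   /* placeholder cost of stating one model parameter */

    /* Bits to encode n pixel values as Gaussian noise of spread sigma about mean
     * (coding precision of the pixel values is ignored in this sketch). */
    static double fit_bits(const int *px, int n, double mean, double sigma)
    {
        double bits = 0.0;
        for (int i = 0; i < n; i++) {
            double z = (px[i] - mean) / sigma;
            bits += 0.5 * z * z / log(2.0) + log2(sigma * sqrt(2.0 * PI));
        }
        return bits;
    }

    /* 1 if the message for "two segments split at threshold t" is shorter than
     * for "one segment".  Assumes n <= 64 pixels in the block. */
    int two_segments_shorter(const int *px, int n, double t, double sigma)
    {
        int lo[64], hi[64], nlo = 0, nhi = 0;
        double sum = 0.0, sum_lo = 0.0, sum_hi = 0.0;
        for (int i = 0; i < n; i++) {
            sum += px[i];
            if (px[i] < t) { lo[nlo++] = px[i]; sum_lo += px[i]; }
            else           { hi[nhi++] = px[i]; sum_hi += px[i]; }
        }
        double one = PARAM_BITS + fit_bits(px, n, sum / n, sigma);
        double two = 3.0 * PARAM_BITS + n   /* two means, a threshold, one side bit per pixel */
                   + fit_bits(lo, nlo, nlo ? sum_lo / nlo : 0.0, sigma)
                   + fit_bits(hi, nhi, nhi ? sum_hi / nhi : 0.0, sigma);
        return two < one;
    }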
The process of classifying blocks of pixels as consisting of only one segment or at least two segments can be termed local segmentation. Local segmentation is of fundamental importance in an image coding technique called Block Truncation Coding. In addition, local segmentation can be used in algorithms for edge detection, segmentation and for noise removal.
Binary images are pictures whose pixels are either black or white. Binary images are often used to represent faxes or pages of documents. The fax Group 3 and Group 4 compression algorithms were designed to compress binary images which consist of pages of text. These algorithms are quick and achieve reasonable results on pages of text, but they perform poorly on other kinds of binary images.
The algorithms in the JBIG standard were designed to adapt to the characteristics of the binary image and perform very well across a wide range of binary images. The JBIG algorithms perform better on images of text than the Group 3 and Group 4 algorithms and can still achieve good compression on other kinds of binary images, e.g. scanned pictures and dithered pictures. The JBIG algorithms are more expensive to implement and they make use of a form of arithmetic coding which has been patented by IBM.
The aim of this project is to develop a way of compressing binary images of text that will deliver as much compression as the Group 3 and Group 4 algorithms, will adapt to the characteristics of the binary image and will deliver better throughput rates than the JBIG algorithm. Ideas to be investigated include: transforming the binary image, use of run-length coding, use of adaptive modeling, Rice coding and chain coding.
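As a starting point, the sketch below run-length codes one row of a binary image into alternating white/black run lengths, in the style used by the Group 3 scheme before the runs are entropy coded (the entropy coding itself is not shown).

    /* Run-length code one row of a binary image (0 = white, 1 = black) into the
     * lengths of alternating runs, starting with a possibly zero-length white run.
     * The caller must supply a runs[] array of at least width + 1 entries. */
    void rle_row(const unsigned char *row, int width, int *runs, int *nruns)
    {
        int colour = 0;   /* rows conventionally start with a white run */
        int len = 0, k = 0;
        for (int x = 0; x < width; x++) {
            if (row[x] == colour) {
                len++;
            } else {
                runs[k++] = len;
                colour = row[x];
                len = 1;
            }
        }
        runs[k++] = len;
        *nruns = k;
    }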
Block Truncation Coding (BTC) is a lossy method for compressing image data. It can also be applied to the lossy compression of sound data. BTC methods are relatively inexpensive to implement and BTC-coded images can be decoded very efficiently but they require relatively high bit rates to achieve a particular level of image quality.
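For reference, the classic BTC scheme (in the style of Delp and Mitchell) keeps, for each small block, its mean, standard deviation and a one-bit-per-pixel map of which pixels lie above the mean; decoding reconstructs two levels chosen to preserve the first two moments. A sketch for a 4x4 block:

    #include <math.h>

    /* Encode a 4x4 block as (mean, sd, 16-bit above-mean map). */
    void btc_encode(const unsigned char block[16],
                    double *mean, double *sd, unsigned short *bitmap)
    {
        double sum = 0.0, sum2 = 0.0;
        for (int i = 0; i < 16; i++) { sum += block[i]; sum2 += (double)block[i] * block[i]; }
        *mean = sum / 16.0;
        double var = sum2 / 16.0 - (*mean) * (*mean);
        *sd = sqrt(var > 0.0 ? var : 0.0);
        *bitmap = 0;
        for (int i = 0; i < 16; i++)
            if (block[i] > *mean) *bitmap |= (unsigned short)(1u << i);
    }

    /* Decode using the two levels that preserve the block mean and variance. */
    void btc_decode(double mean, double sd, unsigned short bitmap, unsigned char out[16])
    {
        int q = 0;                                   /* pixels above the mean */
        for (int i = 0; i < 16; i++) q += (bitmap >> i) & 1;
        if (q == 0 || q == 16) {                     /* flat block: everything at the mean */
            for (int i = 0; i < 16; i++) out[i] = (unsigned char)(mean + 0.5);
            return;
        }
        double lo = mean - sd * sqrt((double)q / (16 - q));
        double hi = mean + sd * sqrt((double)(16 - q) / q);
        for (int i = 0; i < 16; i++) {
            double v = ((bitmap >> i) & 1) ? hi : lo;
            out[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v + 0.5);
        }
    }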
The aim of this project is to investigate the use of prediction and interpolation in BTC coding. In addition, the coding may either use a combination of Rice coding and run length coding or it may use arithmetic coding. The aim is to see how well images can be encoded at the kinds of bit rates that are used in digital television. In video sequence coding, images might be encoded as the difference from a previous image or an image might be coded without referring to any other images. It would be interesting to see how well a BTC-based coding scheme can perform on both kinds of images.
The aim of this project is to use Minimal Message Length (MML) techniques to come up with a good objective criterion for comparing segmentations of the same image. The approach involves constructing two-part messages for each segmentation. The first part of the message allows us to recover the segmentation of the image. Once the segmentation has been recovered, it is used in conjunction with the information in the second part of the message to recover all the information in the original image. The use of the MML criterion suggests that the two-part message with the shortest overall message length is the one with the best segmentation.
The project involves looking at ways of encoding a segmentation and then using the segment information to encode the image. One approach is to associate a segment number with each pixel and to form an image of the segment numbers. This image we can call the segment map. Segment maps can be regarded as being generalised forms of binary images and effective techniques for the lossless coding of binary images may be generalised and applied to segment maps.
This project investigates genetic and other algorithms for the optimisation of fuzzy controllers in terms of membership functions and linguistic rules.
Existing processor architectures are mostly designed and optimised for crisp logic processing. This project seeks to establish and simulate some alternative processor architectures dedicated for fuzzy rule processing. It involves analytical study, VHDL modeling, simulation and FPGA implementation if possible.
Thresholding is one of the most popular approaches to image segmentation. In this approach, all the pixels having a certain range of some image property, say, intensity, are considered to belong to one group. Connected regions in these groups lead to the desired segmentation. In the past, the probabilistic entropy measure of Shannon, defined in the context of Information Theory, has been successfully applied in determining the gray level threshold between the "object" and the "background" regions of an image, assuming separate probability distributions for the object pixels and the background pixels. Image segmentation methods also exist in which non-probabilistic entropic criteria, devised in the fuzzy set-theoretic framework, have been used.
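A representative probabilistic entropy criterion of this kind chooses the threshold that maximises the sum of the Shannon entropies of the normalised object and background histograms. A sketch in C, assuming a 256-level grey-level histogram normalised to sum to 1 (not necessarily the formulation the project will adopt):

    #include <math.h>

    /* Entropic threshold selection: return the grey level t maximising the sum of
     * the entropies of the two distributions obtained by splitting hist at t. */
    int entropy_threshold(const double hist[256])
    {
        int best_t = 0;
        double best = -1.0;
        for (int t = 1; t < 255; t++) {
            double p = 0.0;
            for (int g = 0; g <= t; g++) p += hist[g];
            if (p <= 0.0 || p >= 1.0) continue;
            double hb = 0.0, ho = 0.0;   /* background and object entropies */
            for (int g = 0; g <= t; g++)
                if (hist[g] > 0.0) hb -= (hist[g] / p) * log(hist[g] / p);
            for (int g = t + 1; g < 256; g++)
                if (hist[g] > 0.0) ho -= (hist[g] / (1.0 - p)) * log(hist[g] / (1.0 - p));
            if (hb + ho > best) { best = hb + ho; best_t = t; }
        }
        return best_t;
    }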
An Honours project carried out in the year 1997 involved the study of two probabilistic distance criteria, namely, the Bhattacharyya distance and Kullback-Leibler Divergence function, as image thresholding criteria. The purpose of the proposed project will be to make further investigations in the area of probabilistic distance-based image segmentation methods and their applications for both monochrome and colour images.
The aim of the project will be to make a critical study of a number of existing non-hierarchical cluster analysis methods, such as, c-means, fuzzy c-means, and isodata, with a view to their application in image segmentation. Study of cluster validity will form an essential part of the project. Applications will include both monochrome and colour images.
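For concreteness, the sketch below implements hard c-means on pixel intensities only; fuzzy c-means and isodata, and the use of colour or other features, would extend this. The iteration cap and the limit of 16 clusters are arbitrary.

    #include <math.h>

    /* Hard c-means on pixel intensities.  The caller initialises the c cluster
     * centres (e.g. spread over the intensity range); labels may start arbitrary.
     * Assumes c <= 16. */
    void c_means(const unsigned char *px, int n, double *centre, int c, int *label)
    {
        for (int iter = 0; iter < 100; iter++) {
            int changed = 0;
            double sum[16] = {0}; int cnt[16] = {0};
            for (int i = 0; i < n; i++) {
                int best = 0;
                for (int k = 1; k < c; k++)
                    if (fabs(px[i] - centre[k]) < fabs(px[i] - centre[best])) best = k;
                if (label[i] != best) { label[i] = best; changed = 1; }
                sum[best] += px[i]; cnt[best]++;
            }
            for (int k = 0; k < c; k++)
                if (cnt[k] > 0) centre[k] = sum[k] / cnt[k];
            if (!changed) break;
        }
    }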
Content-based image retrieval is a well-known approach to retrieving images from large collections. The objective of this project is to devise an algorithm for retrieval of images from a large image database based on colour quantisation and regional quantisation.
Texture plays an important role in both human interpretation of visual scenes and computer analysis of images. Textural cues are of particular relevance in two different, but related, image analysis problems, namely, the problems of (i) segmentation and (ii) classification of images. The proposed project will deal with both of these problems. It will involve two phases. In the first phase some existing texture analysis methods will be investigated from the point of view of their theoretical soundness as textural measures as well as their practical applicability. In the second phase attempts will be made to derive a new methodology with particular attention to its computational efficiency.
The objective of document image analysis is to recognize text, graphics, and pictures in printed documents and to extract the intended information from them. There are two broad categories of document image analysis, namely, textual processing and graphical processing. Textual processing includes skew determination (any tilt at which the document may have been scanned), finding columns, paragraphs, text lines and words, and performing optical character recognition. Graphical processing deals with lines and symbols. The scope of the proposed project will be the aspect of text processing and it will concentrate on the development of a system for page layout analysis.
Experience shows that when humans recognise patterns, the recognition is based on only a few important features (or attributes) characterizing the pattern classes. In statistical pattern recognition, by contrast, the patterns are often represented by a large number of numerical features. Although there is no conceptual justification for reducing the number of features to a small number, in practical problem solving this becomes a necessary step due to the well-known 'curse of dimensionality': the complexity of the pattern classifier grows rapidly with the dimensionality of the feature vector. The aim of the proposed project is to develop an interactive feature selection paradigm well-suited to multiclass (rather than 2-class) pattern recognition problems.
The project will comprise four components, namely, (i) theoretical comparison of existing feature selection criteria, (ii) development of software tools for the existing criteria, followed by an experimental investigation of them, (iii) analysis of the above results leading, hopefully, to a feature selection paradigm, and (iv) development of an interactive software tool for the above paradigm. The software developed will be 'interactive' in the sense that, depending on the classification accuracy achieved at a certain stage, the user will have the option of changing the dimensionality value. Options will also be available to supply interactively a range of values of the dimensionality. The software tools will include procedures for displaying the distribution of pattern samples in different feature spaces obtained by different feature selection methods.
a) an investigation of speech coding techniques at low bit rates;
b) an implementation and evaluation of a 16 or an 8 kbps speech codec in the form of either software or hardware.
This project, which is a continuation of a project undertaken in 1998, considers the problem of automatically generating summaries from documents on the WWW. The student will use simple statistical methods to determine "important information" from the articles. This information will be collated into summaries using simple linguistic techniques. This year's project uses a corpus of documents obtained from Telstra, which are more homogeneous than those used in last year's work. This feature is expected to have some impact on the summarization techniques being applied.
A student undertaking this project should have some grounding in statistics.
This project continues similar projects undertaken in previous years. It deals with the development of complex agents which interact with a user. These agents should combine planning and problem-solving capabilities as well as emotion-related features. The focus of the project is to study the interaction between an agent's emotional state and its resources (e.g., planning ability, memory) with the goal of carrying out a dialogue (in limited form) with the user.
A student undertaking this project should have previously taken CSC3309 (Artificial Intelligence).
This project is a continuation of a 1998 project. It consists of designing and implementing a computerized tutor based on an animation process which illustrates the workings of SPIM, a MIPS simulator which is used for teaching computer architecture at first and second year level. The animation, which was implemented in 1998, highlights the different components of the architecture and the data flow for a small set of MIPS instructions. The animator may be accessed at this web site. The computerized tutor will determine parts of the subject in which the student's knowledge is inadequate. This will be done based on traces of the data path clicked by the student. The tutor will then decide on appropriate help actions, e.g., presenting explanations of architecture components or giving hints about how to proceed. Some of the data collection software required to support the tutor was implemented over the summer of 1999.
There is a possibility of some funding with these projects. Students interested should speak to Mark Jessell, and also to Ann Nicholson, the honours coordinator, before nominating these projects in their preference list.
Mark Jessell, Australian Geodynamics Cooperative Research Centre, Dept of Earth Sciences, Monash University, Clayton, VIC 3168, Australia. Email: email@example.com. Tel: (61)(3) 9905 4902. Fax: (61)(3) 9905 4903. Home-Page.