Benefits of competency mapping

Competency maps have many potential benefits for students and teaching staff. Of course, because staff and students share many goals, these benefits are not entirely divisible; some aspects of competency mapping will benefit both staff and students. A partial list of potential uses for competency mapping follows. It is likely that more benefits will be discovered as the technique matures.
Of course, it is quite possible that the structure revealed by analysis of student results does not match the lecturer's idea of the conceptual structure of the course. In this case, the revealed structure may suggest ways in which the course can be improved. For example, if two competencies that should be related (say, C pointers and passing by reference) are not clustered together, this could indicate a need to make the connection more explicit to the students. If the competency map uses all the coursework marks as input, this will not help the students of that year; however, it may well help teaching staff to refine the coursework for the next delivery of the course. It would also be useful to staff teaching follow-on courses, as they would gain a better idea of which topics need revision.
A competency map using only the marks for half of the course can be produced if staff wish to refine the course on the fly, but care must be taken that the data are sufficient: if the only marks on record are the first six prac marks, it is unlikely that any useful conclusions can be drawn. It is not yet known how many data points are needed for competency mapping to be useful, but the number is likely to depend on the amount and complexity of the course material.
These uses assume that competency mapping will elucidate the structure of the course. If, however, the technique does not do this, then there are still potential benefits: logically, we would expect that activities that test strongly related competencies should show correlations in their marks; if this is not the case, there must be some reason. For example, written exam questions about linked lists might not correlate strongly with practical questions about linked lists if success in pracs is more closely related to factors other than subject knowledge. This could be the case if some students find their work environment --- operating system, compiler and editor --- difficult to use. In this case, prac questions will tend to cluster much more strongly with other prac questions, and much less strongly with theory questions. The competency map can show that there is a problem; it is then up to the teaching staff to investigate that problem. Of course, competency mapping over subsequent years of the course will help the staff know when they have ameliorated the problem.
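The expected correlation structure can be illustrated with a small, entirely fabricated example: if two tasks probe the same underlying competency, their marks should correlate strongly across students, while unrelated tasks should not. The data layout (a students-by-tasks matrix) and all numbers below are illustrative assumptions, not the project's data or prescribed method.

```python
import numpy as np

# Fabricated marks matrix: rows are students, columns are assessment
# tasks. Tasks 0-1 probe one hypothetical competency, tasks 2-3
# another; each mark is the latent skill plus marking noise.
rng = np.random.default_rng(0)
n_students = 80
skill_a = rng.normal(size=n_students)
skill_b = rng.normal(size=n_students)

def observe(skill):
    """One task's marks: the underlying skill plus noise."""
    return skill + rng.normal(scale=0.4, size=n_students)

marks = np.column_stack([observe(skill_a), observe(skill_a),
                         observe(skill_b), observe(skill_b)])

# Correlate task marks across students: tasks testing the same
# competency correlate strongly, unrelated tasks do not. This
# correlation structure is what a competency map visualises.
corr = np.corrcoef(marks, rowvar=False)
print(np.round(corr, 2))
```

If the off-diagonal blocks turned out as strong as the diagonal blocks (or vice versa), that would be exactly the kind of mismatch between expected and revealed structure discussed above.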
In order to satisfy Ethics Committee requirements, this project only uses deidentified marks data and no demographic information is available. However, in a university setting, competency mapping can be used to compare demographic subsets of students to verify equity of access. If it is suspected that there is a systematic problem with some students' access to education, for example if there is concern that students of non-English speaking background are finding a particular activity especially difficult because of the complex language used to explain it, then competency mapping can be applied separately to the results from students belonging to that group and the results compared to a competency map derived from the marks of the rest of the student body. In this case, a problem with English would result in a distorted cluster arrangement: written-answer questions and questions with complex requirements would tend to cluster together. The technique may also be used to determine whether female students conceptualise the subject differently to male students. Again, if a problem is found, competency mapping over subsequent years will show staff whether the remedies are working.
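The subgroup comparison might be sketched as follows, again on fabricated data: here a hypothetical shared language factor affects the written-answer tasks of one subgroup only, so in that subgroup's map the written-answer tasks correlate with each other much more strongly than with the practical tasks, while the rest of the cohort shows no such split. The factor loadings and group sizes are invented for illustration.

```python
import numpy as np

# Fabricated illustration of comparing two subgroups' correlation
# structure: four tasks, where tasks 0-1 are written-answer and
# tasks 2-3 are practical.
rng = np.random.default_rng(2)
n = 100

def cohort_marks(language_load):
    """Marks for one subgroup; language_load adds a shared language
    difficulty to the written-answer tasks only (hypothetical)."""
    skill = rng.normal(size=n)
    language = rng.normal(size=n)
    noise = lambda: rng.normal(scale=0.5, size=n)
    return np.column_stack([
        skill + language_load * language + noise(),  # written-answer
        skill + language_load * language + noise(),  # written-answer
        skill + noise(),                             # practical
        skill + noise()])                            # practical

corr_rest = np.corrcoef(cohort_marks(0.0), rowvar=False)
corr_group = np.corrcoef(cohort_marks(1.5), rowvar=False)

# In the affected subgroup the written-answer tasks cluster together
# far more tightly than they do with the practical tasks; in the
# rest of the cohort all four tasks correlate about equally.
print(round(corr_group[0, 1], 2), round(corr_group[0, 2], 2))
print(round(corr_rest[0, 1], 2), round(corr_rest[0, 2], 2))
```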
A constructivist view of the teaching process suggests that students will assimilate new knowledge and gain new skills more readily if they can be made aware of how those new competencies interrelate with knowledge and skills that are already mastered. Of course, lecturers know this; most new topics begin with an explanation of the new material in relation to material already seen. However, this explanation is almost always exclusively verbal. Information about relationships is often best presented in visual form, especially if the relationships are multidimensional: pictures are two-dimensional, but words are one-dimensional, strictly ordered in time. Therefore, having access to a two-dimensional map of the course structure may help students construct their understanding of the course material.
If it is possible to use competency mapping to break the subject down into components that are close to orthogonal, it should also be possible to design assessment on the basis of that breakdown. Once the components are known, assessment tasks can be designed that test them individually, or (since it is virtually impossible to test anything in isolation) as close to it as possible. Thus a test can be delivered to students that is quite small, but gives results that are interpretable in terms of the course's competency map.
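One plausible way to extract near-orthogonal components from marks data is principal component analysis of the task correlation matrix; the text does not prescribe a method, so this is only an illustrative sketch on fabricated data with two latent competencies.

```python
import numpy as np

# Fabricated marks: two latent competencies, four tasks, each task
# loading mainly on one competency (all numbers illustrative).
rng = np.random.default_rng(1)
n = 100
factors = rng.normal(size=(n, 2))
loadings = np.array([[1.0, 0.1],
                     [0.9, 0.0],
                     [0.1, 1.0],
                     [0.0, 0.95]])
marks = factors @ loadings.T + rng.normal(scale=0.3, size=(n, 4))

# PCA via eigendecomposition of the task correlation matrix,
# with eigenvalues sorted into descending order.
corr = np.corrcoef(marks, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Two components should explain most of the variance, mirroring the
# two underlying competencies; assessment tasks could then be aimed
# at one component each.
explained = eigvals / eigvals.sum()
print(np.round(explained, 2))
```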
Because competency mapping measures correlations between task marks across students, it is obviously impossible to generate a competency map based on a single student's data; however, numeric results can be presented alongside the group competency map --- for example, by shading regions that correspond to topics that the student needs to work on. In this way, a student may be able to use her test results to determine her own weaknesses, and then consult the map to see how they relate to the rest of the course: using this map and compass, she may find it easier to navigate through the material.
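A minimal sketch of overlaying individual results on the group map: given a cluster label per task (taken from the group-level map) and one student's marks, compute the mean mark per cluster and flag clusters falling below a pass threshold as the regions to shade. The cluster labels, marks, and threshold here are all invented for illustration.

```python
import numpy as np

# Hypothetical group-level result: each task assigned to a
# competency cluster, plus one student's marks as fractions.
task_cluster = np.array([0, 0, 1, 1, 2])
student_marks = np.array([0.8, 0.7, 0.3, 0.4, 0.9])

def weak_clusters(marks, clusters, threshold=0.5):
    """Return the cluster labels whose mean mark falls below the
    threshold -- the regions of the map to shade for this student."""
    labels = np.unique(clusters)
    means = np.array([marks[clusters == c].mean() for c in labels])
    return labels[means < threshold]

print(weak_clusters(student_marks, task_cluster))  # -> [1]
```

Here cluster 1 averages 0.35, so it alone would be shaded; the student could then see on the map which other topics border that weakness.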
If she still has trouble understanding the material, she may ask a staff member for help. In this case, if the staff member has access to her test results, it would be easier to pinpoint the misconstruction that is at the heart of the problem. Experience shows that determining the problem is almost always harder and more time-consuming than solving it; figuring out what needs to be explained is more difficult than developing an explanation, especially considering that teachers can develop a set of explanations that work and re-use them. This means that the student need not worry as much about coming to consultation, and (because consultation time can be used more effectively) the teaching staff are more likely to be free to help her.