This project aims to develop a tool that supports students in acquiring mathematical competence
in linear algebra. To this end, we give the student suitable feedback during his work on an exercise.
The tool takes the full application of the relevant rules out of the hands of the student. This way he cannot make calculation errors anymore, and he saves a lot of time on tedious calculations. More importantly, we hope that he now realizes that he has to determine which rules to apply, instead of just applying one that seems to fit the current mathematical formula: it is the mathematical competence that matters, not just an understanding of the procedures.
Furthermore, because the student indicates which steps he takes, the system can follow his steps, enabling feedback on his path towards a solution and hints that increase his mathematical competence. Figures 1 and 2 show an example problem, the first step towards a solution, a selection in the second step, and the rules applicable to that selection.
|Figure 1. Working window, where linear algebra formulas are manipulated.||Figure 2. Rule window, to select rules that are applicable to the selection made in the working window.|
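The select-a-rule interaction above can be sketched in code. This is an illustrative sketch only, not the project's actual implementation: expressions are nested tuples, the two example rules and all names are assumptions made up for this example.

```python
# Sketch of rule selection and application: the student picks a rule for the
# selected subexpression; the system checks applicability and performs the
# rewrite, so no calculation errors can occur.
# Expressions are nested tuples, e.g. ('T', ('*', 'A', 'B')) for (A*B)^T.

def transpose_of_product(expr):
    """(X*Y)^T -> Y^T * X^T; returns None if the rule does not match."""
    if expr[0] == 'T' and isinstance(expr[1], tuple) and expr[1][0] == '*':
        _, (_, x, y) = expr
        return ('*', ('T', y), ('T', x))
    return None

def double_transpose(expr):
    """(X^T)^T -> X; returns None if the rule does not match."""
    if expr[0] == 'T' and isinstance(expr[1], tuple) and expr[1][0] == 'T':
        return expr[1][1]
    return None

RULES = {'transpose of product': transpose_of_product,
         'double transpose': double_transpose}

def applicable_rules(selection):
    """The rules the rule window would offer for the current selection."""
    return [name for name, rule in RULES.items() if rule(selection) is not None]

def apply_rule(name, selection):
    """Apply a student-chosen rule; the system does the actual rewriting."""
    result = RULES[name](selection)
    if result is None:
        raise ValueError(f"rule '{name}' is not applicable here")
    return result

selection = ('T', ('*', 'A', 'B'))   # student selects (A*B)^T
print(applicable_rules(selection))   # ['transpose of product']
print(apply_rule('transpose of product', selection))
```

The point of the design is that choosing which rule to apply stays with the student, while the error-prone symbol manipulation is automated.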
The system also supports more theoretical problems (Figure 3).
The current formula layout does not yet seem optimal. These interfacing issues will be explored in more detail in later stages of the project.
Furthermore, we are now in the position to start with the heart of the work: the feedback.
|Figure 3. First few steps of a more abstract problem.|
Pasman, W. (2006). Linear Algebra Experiment.
Technical Report, Delft University of Technology, June 26.
Pasman, W. (2005). Electronic Linear Algebra Trainer:
Towards a System Configuration.
Technical Report, Delft University of Technology.
Pasman, W., & Jansen, F. W. (2006).
E-Learning Tool voor Lineaire Algebra [E-Learning Tool for Linear Algebra, in Dutch].
Technical Report, Delft University of Technology, February.
Running since the second half of 2005 at 0.3 FTE.
This project is funded by Delft University of Technology.
Current massive ad-hoc agent worlds leave the user lost among the services.
In this project we developed a way to help human users find the service they need,
building on existing methods for robust natural language processing and agent technology.
A key part of our approach is a service matcher agent that helps the user with this task. We also proposed an architecture suited for ad-hoc networks. Our architecture mainly consists of a set of detailed negotiation standards (ontologies and FIPA-style protocols). The architecture brings context information into the ad-hoc agent world (see figure below). It enables context-dependent searches in a broad sense: the user's history, location and gaze direction, task relations, and preferred tasks/services are all considered in the search for a matching service.
The figure below shows the interface to the service matcher. Here the user can type his request in natural language. Of course, other natural language input methods, for instance handwriting, could be used instead. The service matcher agent then tries, considering the current context, to find an agent that understands and can handle the request. Because of the ad-hoc nature of the network and agents, our architecture uses separate distributed, lightweight natural language parsers for each agent.
|Small part of an ad-hoc agent world. Purple blocks are agents of various types. Arrows indicate location relations ("In") and task relations ("StepOf").||Interface to the service matcher agent.|
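The idea of combining a natural language query with context information can be sketched as follows. This is a hedged illustration, not the project's actual matcher: the scoring weights, the word-overlap measure, and the service fields are assumptions invented for this example, and a real matcher would use the ontologies and protocols mentioned above.

```python
# Sketch of context-dependent service matching: a plain word-overlap score
# for the natural language query is combined with context bonuses for
# location and usage history (both weights are illustrative assumptions).

def word_overlap(query, keywords):
    """Fraction of a service's keywords that appear in the query."""
    q = set(query.lower().split())
    return len(q & set(keywords)) / max(len(keywords), 1)

def match_score(query, service, context):
    score = word_overlap(query, service['keywords'])
    if service['location'] == context['location']:
        score += 0.3                                          # prefer nearby services
    score += 0.1 * context['history'].count(service['name'])  # prefer familiar services
    return score

def best_service(query, services, context):
    """The service the matcher agent would propose to the user."""
    return max(services, key=lambda s: match_score(query, s, context))

services = [
    {'name': 'printer-hall', 'keywords': ['print', 'document'], 'location': 'hall'},
    {'name': 'printer-lab',  'keywords': ['print', 'document'], 'location': 'lab'},
]
context = {'location': 'lab', 'history': ['printer-lab']}
print(best_service('please print this document', services, context)['name'])  # printer-lab
```

Both printers match the query equally well; the context (the user is in the lab and has used that printer before) breaks the tie.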
Pasman, W., & Lindenberg, J. (2006).
Human-Agent Service Matching using Natural Language Queries:
System Test and Training. Personal and Ubiquitous Computing.
Lindenberg, J., Pasman, W., Kranenborg, K., Stegeman, J., & Neerincx, M. A. (2006).
Improving Service Matching and Selection in Ubiquitous Computing Environments:
A User Study. Personal and Ubiquitous Computing.
|Pasman, W. (2004). Presentation Slides: Organizing Ad Hoc Agents for Human-Agent Service Matching. Mobiquitous 2004 (Boston, MA, August 22-25), 278-287.|
|Pasman, W. (2003). Cactus Testbed. Internal Report, September, Cactus Project, Delft University of Technology.|
Finished. Ran between 2001 and 2005. Two papers on performance tests with human users have been accepted and are awaiting publication.
The project was mainly funded by the National Freeband Knowledge Impulse.
Delft University of Technology and TNO made smaller contributions.
The NISHE project was a short project to realize augmented reality with large virtual models on a PDA. We focused on an architectural scenario, in which architects can show their designs to the customer at the real location.
|PDA showing a parking lot augmented with a new building.||Simplified architecture of the NISHE system. Transparent overlay images are calculated at a remote server.|
We have a video that nicely illustrates the system at work, projecting a new building over the nearby parking lot. Click on the image to start the video.
Finished. The project ran between November 2002 and March 2003.
|Pasman, W., Woodward, C., Hakkarainen, M., Honkamaa, P., & Hyväkkä, J. (2004). Augmented Reality with Large 3D Models on a PDA – Implementation, Performance and Use Experiences. Int Conf on Virtual-Reality Continuum & Its Applications in Industry (VRCAI, Singapore, June 16-18), 344-351.|
|Pasman, W., & Woodward, C. (2003). Implementation of an Augmented Reality System on a PDA. Proc. International Symposium on Mixed and Augmented Reality (ISMAR'03, Oct 7-10, Tokyo, Japan), 276-277.|
|VTT (2003). Augmented Reality on PDA. Project webpage.|
The project was financed by the Multimedia Group at VTT Finland.
In the Ubiquitous Communications project we did research on augmented reality for
wearable systems. We developed a latency-layered rendering architecture
to enable AR on a wearable platform with highly limited
resources such as CPU and memory.
The inner layer had a latency of only 8 milliseconds, including both rendering and display. This was achieved by rendering just ahead of the raster beam.
Higher layers simplified the scene appropriately for the lower layers, fitting their resources. We analysed the available simplification methods that could reduce the required resources, and built a mathematical model to estimate the resource requirements at the lower levels.
|Simplification methods. Clockwise, starting at top left: original complex 3D object, 3D simplified object (red), meshed imposter (cyan), layered depth image (purple) and simple imposter (green).||Mathematical model estimating required resources (here bandwidth requirements). Colors represent different simplification techniques (left figure).|
|We built a "portable" demonstrator that could render a statue on the campus, using these techniques.||Video explaining our approach. Click on image to play. The full video is available at the ubicom website.|
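The flavour of such a resource model can be sketched as follows. This is only an illustration of the idea, not the model from the paper: the technique list matches the simplification methods in the figure above, but all numbers (polygon counts, update rates, error bounds, bytes per polygon) are invented for this example.

```python
# Sketch of a resource-estimation model: each simplification technique trades
# geometric detail against update rate, and we pick the cheapest technique
# (here in bandwidth) whose screen-space error stays within a tolerance.

TECHNIQUES = {
    # name: (polygons sent per update, updates per second, error in pixels)
    'original geometry': (10000, 0.1, 0.0),  # near-static, no approximation error
    'simplified 3D':     (1000,  0.1, 1.0),
    'meshed imposter':   (100,   2.0, 2.0),  # must be refreshed as the view moves
    'simple imposter':   (2,    10.0, 4.0),  # one textured quad, frequent refresh
}

def bandwidth(polys_per_update, updates_per_sec, bytes_per_poly=36):
    """Estimated bytes per second to keep this representation up to date."""
    return polys_per_update * updates_per_sec * bytes_per_poly

def cheapest_within_error(max_error_px):
    """Cheapest technique whose visual error is within the tolerance."""
    ok = {name: bandwidth(p, u)
          for name, (p, u, err) in TECHNIQUES.items() if err <= max_error_px}
    return min(ok, key=ok.get)

print(cheapest_within_error(2.0))  # simplified 3D
print(cheapest_within_error(5.0))  # simple imposter
```

The model makes the trade-off explicit: an imposter is far cheaper per update but must be refreshed more often, so which technique wins depends on the allowed error and the update rate.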
|Pasman, W., & Jansen, F. W. (2003). Comparing simplification and image-based techniques for 3D client-server rendering systems. IEEE Transactions on Visualization and Computer Graphics, 9 (2), 226-240.|
|Pasman, W., & Jansen, F. W. (2001). Realistic low-latency mobile AR rendering. Proc. VAA01 (Trinity College, Dublin, Ireland, 21-22 June), 81-92.|
|Pasman, W., & Jansen, F. W. (2002). Scheduling Level of Detail with Guaranteed Quality and Cost. Proc. Web3D Conference (February 24-28, 2002, Tempe, AZ), 43-51.|
Finished. Ran between 1997 and 2001.
The project was financed by Delft University of Technology, as one of the DIOC interdisciplinary 'spearhead' projects.