Posters ISUSI 2000:

User-related factors in electronic payment systems

Dennis Abrazhevich (IPO/TUE)

Abstract:
The explosive development of electronic commerce in recent years makes the issue of how to pay for goods and services over open networks very pressing. The problem is that conventional ways of paying for goods and services do not work properly over the Internet; among the difficulties are user-unfriendliness, security and reliability. The need for well-performing and user-friendly payment systems that satisfy both consumers and businesses emerges clearly. In order to make a payment system successful, one should look at it from various points of view. Technical realization and implementation is one of the fundamental issues to be solved, but even if there are good technical solutions to the problem, the whole system fails if they are not accepted by end users or vendors. This research aims to clarify which of the aspects listed above are important from the points of view of the interested parties, especially the end user, and how to balance them in order to improve the payment systems of the future. The aim of this project is to obtain and develop scientific, validated design knowledge for consumer-to-business payment systems in an e-commerce environment. The project will investigate the factors that influence user acceptance of payment systems.

ATR CALL: A computer assisted language learning system utilizing speech technology

Reiko Akahane-Yamada (Information Sciences Division, ATR International, formerly of ATR/HIP Research Labs, Kyoto, Japan)

Abstract:
We have developed a computer-based second-language training system, ATR CALL, to help Japanese speakers acquire the ability to perceive and produce American English (AE) speech segments. In this paper, we introduce our pronunciation training program, in which we applied state-of-the-art speech technology. In the first stage of the training, we taught the position and shape of the tongue when producing AE sounds by using 3-D computer-graphics movies of a talking head with a semitransparent image of the cheek. In the next stage, we conducted production training of English words, while giving learners feedback on whether or not they were able to pronounce the phonemes in each word adequately. To do so, we used evaluation scores generated by an HMM-based speech recognition system as feedback, so that the learners could clearly understand the quality of their pronunciation in quantitative terms. This production training proved quite effective: trainees' pronunciation ability improved dramatically from pre-test to post-test. Various approaches to optimizing second-language speech training methods will be discussed in the context of theories of speech perception/production development.

Reactions of the public to energy from biomass

Tasmara van de Beld (TUE, TM, MTI)

Abstract:
As I started work on this project as a PhD student only last Friday, the poster will be about future research. The authors of the poster are Tasmara van de Beld, Anneloes Meijnders and Cees Midden. This poster outlines research, which is just beginning, on the attitudes and reactions of the public towards energy extracted from biomass.

Human Error in Automation: Action Slips and the Influence of Strong Habits

Stephanie van den Berg (TM/TUE)

Abstract:

ToKeN2000: Accessibility and Enhancement of Knowledge in The Netherlands of 2000

D.G. Bouwhuis (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
This research project focuses on the use of information and communication technology, which is becoming a crucial factor in the development of The Netherlands into a knowledge-based economy. Central to the project are (a) the accessibility of knowledge and (b) the manner and degree to which knowledge, contained and stored in modern information and communication systems, can be interactively disclosed and enhanced for its users.

Usability evaluation of interaction components

Willem-Paul Brinkman (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
Architectures for user-system interaction describe how user interfaces can be constructed in a modular way. Elementary interaction components (ICs) form the basis for complex interaction components. These compound ICs are used to create even higher-level ICs, on top of which eventually entire user interfaces are built. Current usability evaluation techniques are still unable to make reliable and valid statements about the usability of the ICs of a user interface. An exploratory experiment was performed to study the variation in behaviour captured in a log file when users were asked to perform a task with a fictitious user interface consisting of ICs of differing usability. The results suggest that the number of messages received by an IC is an indicator of its usability. A second experiment has been set up to validate this and to study whether counting messages instead of keystrokes might increase the power of usability evaluation of individual ICs. A third experiment will examine the effect other ICs in the user interface have on the usability of an IC.
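
The message-counting idea can be illustrated with a small sketch (the log format and component names below are hypothetical, not the actual instrumentation used in the experiment): each user action is logged as a message delivered to an interaction component, and the messages are then tallied per IC.

```python
from collections import Counter

# Hypothetical session log of (component, message) pairs; a single
# keystroke may generate several messages to different ICs.
log = [
    ("spin_button", "key_press"),
    ("spin_button", "value_changed"),
    ("list_box", "selection_changed"),
    ("spin_button", "key_press"),
    ("spin_button", "value_changed"),
]

# Messages received per interaction component -- the proposed usability
# indicator: more messages than the task requires may signal a problem IC.
messages_per_ic = Counter(ic for ic, _ in log)
```

Comparing these counts against the minimum needed for the task would then single out the ICs where users struggle.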

The CREST Expressive Speech Processing Project

Nick Campbell (ATR/SLT Research Labs & ATR/ISD, Kyoto, Japan)

Abstract:
This research is to be jointly carried out at ATR, Kobe University, and NAIST, with the cooperation of Omron for application research and development. The goal of our project is to produce speech technology interfaces that are capable of recognising and generating speech that expresses the state of the speaker, with particular focus on emotional and attitudinal information.

Performing actions in virtual environments.

R. Custers (Department of Human-Technology Interaction, Eindhoven University of Technology)

Abstract:
This poster outlines a new research project which focuses on those elements in virtual environments that give the human observer a feeling of real presence, and on the source of the actions that take place in the environment.

Primary emotions in user-system-interaction

Arnout Fischer (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
Many contemporary guidelines consider user satisfaction an important determinant of the quality of the interaction process. The interaction process consists of a sequence of interaction cycles, which we consider to be information-processing loops. In this project we aim to find a relation between interaction-process variables and satisfaction at the level of a single interaction cycle. Emotions perform the necessary management of human resources. The emotion satisfaction is thought to be determined by the level of efficiency and effectiveness of goal achievement. Satisfaction encourages the continuation of current actions. Subjective effectiveness is achieved when the desired distance to the goal is obtained. Subjective efficiency depends on the subjective time and the subjective amount of resources per interaction cycle. In a first experiment we will change the subjective efficiency by varying the delay time of the system, which alters the subjective duration of the interaction cycle. We will test our model of satisfaction and establish a relation between delay time and satisfaction. In further experiments we will look at other components of the interaction cycle and their combined effect on satisfaction.

Understanding the mechanism for visual perception: Toward a more natural visual display

Mahito Fujii (ATR Human Information Processing Research Laboratories, Kyoto, Japan)

Abstract:
We study the mechanism of visual perception in order to identify and explore the factors involved in the perception of natural images. I will present the results of studies we conducted on various mechanisms involved in visual perception, such as depth perception, color and motion.

MuSeLab, a platform for the study of human interaction with virtual environments

Dik J. Hermes and Martin Boschman (IPO/TUE)

Abstract:
Currently, a project is starting in which we will study human interaction with virtual environments. This project will be based on research carried out in a multisensory laboratory (MuSeLab). In the MuSeLab, virtual objects will be designed and presented to the user in three sensory modalities: the visual, the auditory and the haptic modality. These objects will be manipulated by the user in game-like and desktop-like motor tasks. In this way, we will create virtual environments in which we can study the interaction cycle, consisting of sensory integration, cognition and motor responses.

Auditory perception of the size and speed of rolling balls

Mark Houben

Abstract:
In everyday life, a multitude of sounds provides us with information about sound sources, their locations, and their surroundings. In interacting with systems in their environment, this wealth of information may be of use to the user. In order to create suitable auditory interfaces, we have to better understand how people perceive everyday sounds. We have chosen to study the sounds of rolling balls because the rolling ball can be used as a metaphor for cursor movement resulting from moving a mouse or a trackball. In three experiments the perception of recorded sounds of rolling balls was investigated. In the first experiment, subjects were presented with pairs of sounds of rolling balls of different sizes and equal velocity and had to decide which of the two sounds was created by the larger ball. In the second experiment, the velocity of the rolling balls was varied and subjects had to choose the faster of two balls equal in size. The results showed that subjects are able to identify the size of rolling balls and that most subjects can clearly discriminate between rolling balls with different velocities. However, some subjects had difficulty labelling the velocity correctly, probably due to a lack of feedback about the correctness of their responses. In the third experiment both size and velocity varied. The results showed a clear interaction between the auditory perception of the size and velocity of rolling balls. Investigation of acoustic cues affirmed this interaction effect by showing a conflict in the available cues when both size and velocity are varied.

ReachIn: A 3D force-feedback display technique for realistic interaction with virtual objects.

A.J.M. Houtsma (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
This demo shows how virtual objects can be visually observed in a 3-dimensional space, and can be manipulated in the same space through a hand-held stylus with complete preservation of natural reactive forces of resistance, inertia and stiffness.

Social Agents: A New Perspective on User-System Interactions

Yasuhiro Katagiri (Advanced Telecommunications Research Institute International, Kyoto, Japan)

Abstract:
People are not only rational/intelligent agents; they are also endowed with various socially meaningful qualities: various types of interactional capabilities; outward appearances, age, gender, and social status; personalities and other dispositional characteristics. These social properties have been largely neglected in human-computer interaction research, but we believe they are indispensable in creating computer programs that are capable of communicating smoothly with people, and of being ultimately accepted as autonomous social entities in human everyday life. We are conducting both scientific observation/analysis research and constructive computational modeling research on human communication processes, both in face-to-face settings and in media-rich environments, in order to elucidate what those socially meaningful qualities are and how they work in communication.

Cursor Trajectory Analysis

Hilde Keuning (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
Target acquisition is greatly helped by tactual feedback. To create non-disturbing tactual feedback effects in human-computer interaction, the user's goal has to be predicted, so that the tactual feedback can guide the user toward the target and away from non-targets. We describe an experiment in which we explore cursor movements with an amplitude of 250 pixels, in eight different directions and with three different control devices (a mechanical mouse, an optical mouse and an optical trackball). The aim of this exploration was to find characteristics of the cursor path that could be used to create a prediction algorithm. The focus was on the mean curvature of the paths and its variability. It can be concluded that, on average, cursor paths are rather straight in all eight directions and with all three devices. The variability of the curvature of the paths depends on three factors: (1) direction: the variability is higher in oblique directions than in horizontal and vertical directions; (2) friction of the control device: the variability is higher when a low-friction device is used than when a higher-friction device is used (the optical trackball having lower friction than the two mice); and (3) the user. The amount of variability determines the maximum target resolution at which an algorithm can predict the user's target with a certain probability.
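
The kind of curvature measure involved can be sketched as follows (a hypothetical illustration, not the authors' actual analysis): summarize a sampled cursor path by the perpendicular deviation of each sample from the straight line between start and end point, and take the mean and spread of those deviations.

```python
import math

def path_curvature_stats(points):
    """Mean and standard deviation of the signed perpendicular deviation
    of each cursor sample from the start-end line (a simple curvature
    proxy). Assumes the path has distinct start and end points."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    # Signed distance of sample (x, y) to the line through start and end.
    devs = [((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) / length
            for (x, y) in points]
    mean = sum(devs) / len(devs)
    var = sum((d - mean) ** 2 for d in devs) / len(devs)
    return mean, math.sqrt(var)

# A perfectly straight 250-pixel horizontal movement deviates nowhere;
# a bowed path yields a non-zero spread.
m_straight, sd_straight = path_curvature_stats([(0, 0), (125, 0), (250, 0)])
m_curved, sd_curved = path_curvature_stats([(0, 0), (125, 50), (250, 0)])
```

The spread of such deviations across many movements is what bounds the target resolution a prediction algorithm could achieve.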

'Living Lab - Concept and Research Programme'

P.Markopoulos (IPO/TUE)

Abstract:
The Living Lab is a planned research facility for the EUT, something between a usability laboratory and a normal residence. It will house experimental subjects for periods of one or two weeks. During that time we shall be assessing how the different technologies they use can become part of their daily life. The poster explains the concept of the Living Lab, provides some pointers to the research programme we want to 'house' inside this laboratory, and invites technology developers to use the Living Lab for testing their new products.

Computer Agent-mediated Human to Human Communication Systems

Kenji Mase (ATR/MIC Research Labs, Kyoto, Japan)

Abstract:
In this presentation, we give an overview of the "interface agent for communication support" research activities at ATR MIC. We are developing a context-aware personal agent system based on the various activities of the user and interactions with the ubiquitous environment. The presentation is a showcase of our research topics on this theme. It includes PalmGuide (a personal exhibit guide system on PalmOS), AgentSalon (agent-agent dialogue synthesis), SemanticMap (a community browser), AIDE (Augmented Informative Discussion Environment), Vista-walk (an agent-guided, gesture-based VR walk-through), and so on. In addition, a few research works on multimedia (graphical and musical) instruments will be prepared for presentation.

Recognition of distant-talking speech and acoustic environments using a microphone array and HMMs for integrated human machine interface

Satoshi Nakamura (ATR/SLI Research Labs, Kyoto, Japan)

Abstract:
Hands-free speech recognition is a very important issue for a natural human-machine interface. Distant-talking speech in real environments is distorted by noise and room reverberation. This paper describes the characteristics of room acoustical distortion and its influence on speech recognition accuracy. The paper then outlines prospective solutions based on previous studies and our research efforts; in particular, a microphone-array-based method and a model adaptation method are discussed. The microphone array can reduce the influence of the acoustical distortion by beamforming. The model adaptation method, on the other hand, can estimate the acoustical transfer function and adapt the speech models to the distorted observation signals. Furthermore, this paper also addresses robust speech recognition incorporating audio-visual information.
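
The beamforming idea can be illustrated with a minimal delay-and-sum sketch (hypothetical signals and integer sample delays; not the system described above): each microphone channel is time-shifted to align the wavefront arriving from the target direction and the channels are averaged, so speech from that direction adds coherently while uncorrelated noise is attenuated.

```python
def delay_and_sum(channels, delays):
    """Delay-and-sum beamformer: advance each microphone channel by its
    steering delay (in samples) and average across channels, reinforcing
    the target direction while averaging down uncorrelated noise."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]

# Two channels carrying the same source, arriving one sample later at
# the second microphone; a steering delay of [0, 1] realigns them.
mic1 = [0.0, 1.0, 0.0, -1.0, 0.0]
mic2 = [0.0, 0.0, 1.0, 0.0, -1.0]
aligned = delay_and_sum([mic1, mic2], delays=[0, 1])
```

In a real array the steering delays follow from the microphone geometry and the assumed direction of arrival; here they are simply given.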

Preferred Modalities in Dialogue Systems

Saskia te Riele (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
In recent years, there has been increasing interest in adding other modalities to spoken dialogue systems. This is due, for instance, to (i) the fact that correcting speech recognition errors in the spoken mode is difficult and unpleasant for the user and (ii) the fact that for some tasks speech is not the best modality (e.g., for presenting lists). In this paper, we focus on the question of how system characteristics influence the user's preferred input modality. The literature offers various claims relevant to this question. For instance, Reeves & Nass (1996) offer their Rule of Matched Modality, which essentially states that users will give their input in the modality of the system's output. However, Oviatt (1999) claims that users have a strong preference for error-free input modalities; in particular, she claims that when giving input via one modality leads to problems, users switch to a less error-prone modality. Another relevant issue is the tendency of humans (observed by, e.g., Zipf 1949) to opt for minimal effort. This implies that users will choose the input modality that requires the least effort. More concretely, keyboard input seems to be more efficient than speech input consisting of single words only if the required number of keystrokes is below about 3 (Baber, 1991). In this study, we investigate the trade-off between these three claims. For this purpose, a Wizard-of-Oz experiment was carried out in which four versions of a multi-modal dialogue system were compared. The system gave either spoken or written output, and the simulated recognizer performed either perfectly (no errors) or realistically (20% word error rate). These factors were crossed in a 2x2 design. For each version, 20% of the questions could be answered using at most 3 keystrokes. Subjects could always give their input via speech or via the keyboard. The dialogue system was presented as an automatic examiner to be used in elementary school.
After a short training session in which both input modalities were introduced, the examiner administered 7 exams of 10 questions each (about arithmetic, topography, etc.). All questions primed subjects to give one-word answers. To keep motivation high and error tolerance low, a prize was awarded to the best-performing subject. The results indicate that the principle of least effort dominates the three aforementioned claims: subjects generally opted for spoken input irrespective of the system's output modality, especially when the answer required four or more keystrokes. However, in the realistic ASR condition, a significant number of subjects temporarily switched to keyboard input just to correct recognition errors. No evidence was found for the Rule of Matched Modality in this set-up, probably due to the strong effect of the factors related to the other two claims.

The MATIS project

Saskia te Riele (IPO/TUE)

Abstract:
The MATIS project (Multi-modal Access to Transaction and Information Systems) aims to investigate the beneficial effects of multi-modality in the interface. Within the framework of the MATIS project, a prototype of a multi-modal train-timetable information system was built. To this end, an existing speech-only interface for an information service was extended with visual feedback. A preliminary experiment was conducted to examine the effects of providing visual feedback in terms of the effectiveness and efficiency of the dialogue. In addition, user preferences were obtained as a subjective measure of usability. We found that the visual feedback did not improve effectiveness: the system with visual feedback did not lead to more successful dialogues. However, there appeared to be an effect on efficiency: novice users obtained the requested information, i.e., the travel advice, about 10% faster, even though the time-consuming verification strategies that are necessary in unimodal systems were left intact. This advantage in speed was preserved in subsequent use of the speech-only (uni-modal) service by users who had started with the multi-modal interface. In addition, subjective measures showed that users clearly preferred the multi-modal system over the uni-modal system.

Comparison between physical and auditory parameterizations of speech corpus for the unit selection in text-to-speech synthesis

Minoru Tsuzaki (ATR/SLT Research Labs, Kyoto, Japan)

Abstract:
One of the important factors affecting the quality of speech synthesized by corpus-based text-to-speech (TTS) systems is how to estimate the distance between a target segment and its candidates in the corpus. In such TTS systems, the speech segments closest to the targets are selected and concatenated to produce a sequence of speech sounds. Although the final product of the TTS system is evaluated perceptually, the distance measurement has to be based on the physical parameters of the speech signal, because there is no a priori perceptual distance for each pair of speech segments. This leads to the problem of inconsistency between the physical distance and the perceptual distance. The problem can be attributed to two main causes: (1) inappropriate parameterization and (2) inappropriate scaling. One may judge the physical parameterization inappropriate if changes in one physical parameter result in changes in two (or more) independent perceptual attributes. One may likewise judge the scaling of a physical dimension inappropriate if the mapping from the physical space to the perceptual space appears to be critically nonlinear. To gain better insight into the appropriateness of the parameterization and the scaling, signal analysis based on the standard LPC method is compared with analysis based on models of the peripheral auditory system. The results of these analyses for the CHATR speech database will be presented.
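
The target-candidate distance at issue can be illustrated by a generic weighted Euclidean distance over feature vectors (a hypothetical sketch; the parameterizations compared in the poster, LPC-based versus auditory-model-based, would supply the actual features):

```python
import math

def segment_distance(target, candidate, weights=None):
    """Weighted Euclidean distance between the feature vector of a target
    segment and that of a candidate segment from the corpus."""
    if weights is None:
        weights = [1.0] * len(target)
    return math.sqrt(sum(w * (t - c) ** 2
                         for w, t, c in zip(weights, target, candidate)))

def select_unit(target, candidates):
    """Pick the candidate segment whose features lie closest to the target."""
    return min(candidates, key=lambda c: segment_distance(target, c))

# Toy 2-dimensional feature vectors standing in for real acoustic features.
best = select_unit([1.0, 2.0], [[0.0, 0.0], [1.0, 2.5], [3.0, 3.0]])
```

The poster's question is precisely whether distances computed this way in a physical feature space track the perceptual distances listeners actually hear.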

Spoken Language Interfaces in User-System Interaction

Raymond Veldhuis, Jacques Terken and Esther Klabbers (IPO/TUE)

Abstract:
This poster discusses the role of speech in user-system interaction, possibly in combination with other modalities. An overview of ongoing research projects is presented.

Human sensitivity to visual-haptic asynchrony

Ingrid Vogels (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
Many interfaces consisting of a visual display and a haptic interaction device do not stimulate the two modalities concurrently. Asynchronies between modalities can lengthen the completion time of manipulation tasks and decrease subjective telepresence. We investigated how sensitive human beings are to visual-haptic asynchronies when they touch a virtual wall. In particular, we studied whether action can influence the perception of synchrony. We found that: (1) changing the relation between the action of the user and the visual movement of an object by means of an additional time delay affects neither the sensitivity nor the tolerance to visual-haptic asynchrony; (2) the sensitivity to visual-haptic asynchrony deteriorates when users manipulate the movement of the object themselves; and (3) the tolerance to visual-haptic asynchrony is subject-dependent and can be asymmetric, i.e. the tolerances for 'vision first' and 'haptics first' can differ.

Trust and Control in Automated Information Systems

Peter de Vries (Department of Technology Management, Eindhoven University of Technology)

Abstract:
Governmental demands for environmentally friendly consumer products have encouraged the use of computer-based systems in household appliances. These systems can enhance energy conservation on the product side, yet much energy is still wasted on the user side. By focusing on the interaction between the product and the user, it is possible to encourage responsible conservation behavior using eco-feedback. Eco-feedback is information, presented during the product-user interaction, that prompts the user to adopt energy-saving strategies. However, little is known about how such feedback works and thus how it can best be applied for optimal effect. Theoretical principles support the expectation that product-integrated feedback can create and/or support increased conservation behavior in the user. The primary goal of the project "Interactive Eco-feedback" is to identify the fundamental principles governing human action and performance in response to immediate feedback on the energy consequences of washing choices. The exploration and testing of various forms of feedback embedded in the interface of a washing machine has yielded information that can be generalized to other household appliances. The research presented will focus on the importance of seeing feedback as only one part of an interactive relationship involving goals and behavioral options, and on how that relationship must be tailored to various user groups.

The Pen and Paper Visual Interaction Platform

Dima Aliakseyeu, Sriram Subramanian, Marina Vroubel (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
The Visual Interaction Platform (VIP) is a Natural User Interface (NUI) that builds on human skills of real-world object manipulation and allows unhindered human-human communication in collaborative situations. The existing VIP is being extended into a new 3dVIP with extended functionality to support new kinds of interaction styles. Here we present our research focus in three specific directions: (1) supporting 3D interaction for navigating and browsing multidimensional datasets; (2) developing tools that support the process of early architectural design; and (3) appropriately applying pen input in the early stages of design and studying aspects of the interaction between the pen and visual modalities.

Programming by Voice: a mission impossible?

Wieger Wesselink (IPO, Center for User System Interaction, Eindhoven University of Technology)

Abstract:
Due to the explosive growth of RSI in the last decade, more and more people have to rely on Programming by Voice (PBV) systems to do their programming jobs. Current speech recognition systems are not well prepared for PBV: many additional voice commands are needed to make programming by voice feasible. Contrary to common belief, however, this is not an impossible mission. Promising results have already been achieved that demonstrate that programming by voice is possible. This poster gives an overview of recent developments in PBV and of the recently started PBV projects at IPO.

Information Visualization

Jack van Wijk (Department of Mathematics and Computing Science, Eindhoven University of Technology)

Abstract:
The computer graphics group of the faculty of Mathematics and Computing Science specializes in information visualization: the use of visual methods to convey large amounts of abstract information, such as tree structures and graphs. Two new techniques will be presented. Cushion treemaps are a novel way to visualize large tree structures, such as computer directories. The preset controller is a novel widget for navigating large parameter spaces, on which a patent has been submitted. Two applications have been developed so far: the morphing of geometric shapes and the control of a sound synthesizer. Collaboration is sought on: the use of perceptual knowledge to improve existing visualization methods and to develop new ones; the evaluation of visualization methods; and the application of the new methods.
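
The treemap principle underlying such visualizations can be sketched with the classic slice-and-dice layout (a generic illustration, not the cushion shading itself): every node receives a rectangle whose area is proportional to its size, and the rectangle of a node is split among its children along alternating axes.

```python
def treemap(node, x, y, w, h, horizontal=True, out=None):
    """Slice-and-dice treemap layout: assign each node in a size-annotated
    tree a rectangle (x, y, width, height) with area proportional to its
    size, recursing with the split axis alternating per level."""
    if out is None:
        out = {}
    out[node["name"]] = (x, y, w, h)
    children = node.get("children", [])
    total = sum(c["size"] for c in children)
    offset = 0.0
    for c in children:
        frac = c["size"] / total if total else 0.0
        if horizontal:  # split the parent rectangle along the x-axis
            treemap(c, x + offset * w, y, frac * w, h, False, out)
        else:           # split along the y-axis
            treemap(c, x, y + offset * h, w, frac * h, True, out)
        offset += frac
    return out

# A tiny directory-like tree: 'b' is three times the size of 'a'.
tree = {"name": "root", "size": 4, "children": [
    {"name": "a", "size": 1}, {"name": "b", "size": 3}]}
rects = treemap(tree, 0, 0, 100, 100)
```

Cushion treemaps add shading on top of such a layout to make the nesting structure visible; the rectangle subdivision itself is the part sketched here.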