Welcome to the home page of AMME
AMME is an acronym for:
Automatic Mental Model Evaluator
Why did we develop AMME?
Rauterberg, M. (1995). About a framework for information and information processing of learning systems. In: E. Falkenberg, W. Hesse, A. Olive (eds.), Information System Concepts--Towards a consolidation of views (IFIP Working Group 8.1, pp. 54-69). London: Chapman & Hall.
Lankveld van G., Spronck P., Rauterberg M. (2008). Difficulty scaling through incongruity. In: M. Mateas, C. Darken (eds.), Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference (pp. 228-229).
Lankveld G. van, Spronck P., Herik J. van den, Rauterberg M. (2010). Incongruity-based adaptive game balancing. In: J. van den Herik and P. Spronck (Eds.): Proceedings of ACG 2009 (LNCS 6048, pp. 208–220), Berlin Heidelberg: Springer-Verlag.
What is AMME?
In an overview Ivory and Hearst (2001) compared 132 usability evaluation and modeling methods worldwide; 19 different modeling methods are based on logfile analysis: “AMME is the only surveyed approach that constructs a WIMP simulation model (Petri net) directly from usage data” (Ivory and Hearst 2001, p. 499).
Therefore they conclude, “AMME appears to be the most effective method, since it is based on actual usage” (2001, p. 502).
What can you do with AMME?
Measuring cognitive complexity:
Rauterberg, M. (1992). A method of a quantitative measurement of cognitive complexity. In: G. van der Veer, M. Tauber, S. Bagnara & M. Antalovits (eds.), Human-Computer Interaction: Tasks and Organisation--ECCE'92 (pp. 295-307). Roma: CUD.
Measuring behavioural complexity:
Rauterberg, M. (1993). AMME: an Automatic Mental Model Evaluation to analyze user behaviour traced in a finite, discrete state space. Ergonomics, vol. 36(11), pp. 1369-1380.
Measuring task complexity:
Rauterberg, M. & Fjeld, M. (1998). Task analysis in Human-Computer interaction - supporting action regulation theory by simulation. Zeitschrift für Arbeitswissenschaft, vol. 3/98, pp. 152-161.
Schluep S., M. Fjeld & M. Rauterberg (1998). Discriminating Task Solving Strategies Using Statistical and Analytical Methods. In: T.R.G. Green, L. Bannon, C.P. Warren & J. Buckley (eds.), Proceedings of Cognition and Co-operation ECCE-9 (pp. 121-126). Ireland: University of Limerick.
Measuring perceived complexity:
Rauterberg M. (1994). About the relationship between incongruity, complexity and information: design implications for man-machine systems. In: W. Rauch, F. Strohmeier, H. Hiller, C. Schlögl (eds.), Mehrwert von Information--Professionalisierung der Informationsarbeit (pp. 122-132). Konstanz: Universitätsverlag.
Rauterberg M. (1997). From novice to expert perceptive behaviour. In: P. Seppälä, T. Luopajärvi, C. Nygard, M. Mattila (eds.), Proceedings of 13th Triennial Congress of the International Ergonomics Association--IEA'97 (Vol. 7, pp. 521-523). Amsterdam.
Rauterberg M. (1999). Activity and perception: An action theoretical approach. In: Proceedings of the International Conference "Problems of Action and Observation"--PAO'97 (Amsterdam (Nl), April 1-4, 1997).
Measuring the structural learning process:
Rauterberg, M. & Aeppli, R. (1995). Learning in man-machine systems: the measurement of behavioural and cognitive complexity. In: Proceedings of IEEE International Conference on Systems, Man and Cybernetics--SMC'95 (Vol. 5, IEEE Catalog Number 95CH3576-7, pp. 4685-4690). Piscataway: Institute of Electrical and Electronics Engineers.
Rauterberg, M. & Aeppli, R. (1996). How to measure the learning process in man-machine systems. In: A. Özok & G. Salvendy (eds.), Advances in Applied Ergonomics (pp. 312-315). West Lafayette: USA Publishing.
Rauterberg, M. & Aeppli, R. (1996). How to measure the behavioural and cognitive complexity of learning processes in man-machine systems. In: P. Carlson & F. Makedon (eds.), Proceedings of 'Educational Multimedia and Hypermedia'--ED-MEDIA'96 (pp. 581-586). Charlottesville: AACE.
Rauterberg, M. & Aeppli, R. (1996). Human errors as an invaluable source for experienced decision making. In: A. Mital, H. Krueger, S. Kumar, M. Menozzi & J.E. Fernandez (eds.), Advances in Occupational Ergonomics and Safety I (pp. 131-134). Cincinnati: International Society for Occupational Ergonomics and Safety.
Automatic creation of executable mental models:
Rauterberg, M., Schluep S. & Fjeld, M. (1998). Modelling of cognitive complexity with Petri nets: an action theoretical approach. In: R. Trappl (ed.), Proceedings of Cybernetics and Systems EMCSR'98 (Vol. 2, pp. 842-847). Wien: Austrian Society for Cybernetic Studies.
How do other researchers refer to AMME?
Chen Z., Caldwell-Harris C. (2019). Investigating the declarative-procedural gap for the indirect speech construction in L2 learners.
Baumard P. (2017). Cybersecurity in France.
Pitkänen H. (2017). Exploratory sequential data analysis of user interaction in contemporary BIM applications.
Vedaprakash PG., Prakash PGO., Navaneethakrishnan M. (2016). Analyzing the user navigation pattern from weblogs using data pre-processing technique.
Fischer S., Itoh M., Inagaki T. (2015). Screening prototype features in terms of intuitive use: Design considerations and proof of concept.
Fischer S., Itoh M., Inagaki T. (2015). Prior schemata transfer as an account for assessing the intuitive use of new technology.
Schaffer S., Schleicher R., Möller S. (2015). Modeling input modality choice in mobile graphical and speech interfaces.
Halim Z., Baig AR., Zafar K. (2014). Evolutionary search in the space of rules for creation of new two-player board games.
Pohl M., Scholz F. (2014). How to investigate interaction with information visualisation: an overview of methodologies.
van Lankveld G. (2013). Quantifying individual player differences. Tilburg: TiCC Ph.D. Series 25.
Halim Z., Baig AR., Hasan M. (2012). Evolutionary search for entertainment in computer games.
Halim Z., Baig AR. (2011). Evolutionary algorithms towards generating entertaining games.
Chandra P., Manjunath G. (2010). Measuring the interaction value of widgets.
Damasevicius R., Stuikys V. (2010). Metrics for evaluation of metaprogram complexity.
Halim Z., Baig AR., Fazal-ur-Rehman M. (2010). Evolution of entertainment in computer games.
Durfee A., Bacharach V. (2009). Linking utilization of text mining technologies and academic productivity.
Fischer S., Itoh M., Inagaki T. (2009). A cognitive schema approach to diagnose intuitiveness: An application to onboard computers.
Zhang Y., Li Z., Wu B., Wu S. (2009). A spaceflight operation complexity measure and its experimental validation.
Cowley B., Charles D., Black M., Hickey R. (2008). Toward an understanding of flow in video games.
Hardas M. (2008). An evaluation of the constructive teaching methodology of programming concepts.
Juvina I., Oostendorp H. van (2008). Modeling semantic and structural knowledge in web navigation.
Koca A. et al. (2008). Soft reliability: An interdisciplinary approach with a user–system focus.
Makany T., Kemp J., Dror IE. (2008). Optimising the use of note-taking as an external cognitive aid for increasing learning.
Maruster L., et al (2008). Analysing agricultural users’ patterns of behaviour: The case of OPTIRas™, a decision support system for starch crop selection.
Runge M. (2008). Simulation of cognitive processes for automated usability testing.
Makany T., Engelbrecht PC., Meadmore K., Dudley R., Redhead ES., Dror IE. (2007). Giving the learners control of navigation: cognitive gains and losses.
Maruster L., Faber N. (2007). A process mining approach to analyse user behaviour.
Mosqueira-Rey E., et al. (2007). An evolutionary multiagent system for studying the usability of websites.
Nadeem D. (2007). Cognitive aspects of semantic desktop to support personal information management.
Ritter F, Nerb J., Lehtinen E. (2007). Getting things in order: Collecting and analysing data on learning.
Xing J. (2007). Information complexity in air traffic control displays.
Freeman M., Norris A., Hylander P. (2006). Usability of online grocery systems: a focus on error.
Oxford, R. (2006). Task-based language teaching and learning: an overview.
Reeder R.W., Maxion R.A. (2006). User interface defect detection by
Schlick C.M., Winkelholz C., Motz F., Luczak H. (2006). Self-generated complexity and human-machine interaction.
Dillon KM., Talbot PJ., Daniel Hillis W. (2005). Knowledge visualization: Redesigning the human-computer interface.
Eraslan E. and Kurt M. (2005). A fuzzy multi-criteria analysis approach for assessing the performance of modern manufacturing systems.
Kaber DB. et al (2005). Adaptive automation of human-machine system
Kontogiannis T. (2005). Integration of task networks and cognitive user models using coloured Petri nets and its application to job design for safety and productivity.
Mitchell WJ., Casalegno F. (2005).
Xing J., Manning CA. (2005). Complexity and automation displays of air traffic control: Literature review and analysis.
Herder E., Juvina I. (2004). Discovery of individual user navigation styles.
Lee Y. (2004). Student perceptions of problems' structuredness, complexity, situatedness, and information richness and their effects on problem-solving performance.
Xing J. (2004). Measures of information complexity and the implications for automation design.
Maple C. et al. (2003). A visual formalism for graphical user interfaces based on state transition diagrams.
Mosqueira-Rey E. et al. (2003). An evolutionary multiagent system for studying the usability of websites.
Ritter F. et al (2003). Techniques for modeling human performance in synthetic environments: A supplementary review.
Herder E. (2002). Metrics for the adaptation of site structure.
Weibelzahl, S. (2002). Evaluation of adaptive systems.
Gillan DJ., Cooke NJ. (2001). Using Pathfinder network to analyze procedural knowledge in interactions with advanced technology.
Ivory MY. (2001). An empirical foundation for automated web interface evaluation.
Weibelzahl S., Weber G. (2001). Mental models for the navigation in
Jung RM., Willumeit H. (2000). Objective evaluation of the complexity of usage for car infotainment systems.
Weibelzahl S., Weber G. (2000). Evaluation adaptiver Systeme und
Sifaqui C (1999). Structuring user interfaces with a meta-model of mental models.
Wen CH., Hwang SL. (1999). A graphic modeling and analysis tool for human fault diagnosis tasks.
Booth JF. (1998). The user interface in computer-based selection and assessment: applied and theoretical problematics of evolving technology.
Davis JS (1998). Active help found beneficial in wizard of oz study.
Ellis RD, Jankowski TB, Jasper JE, Tharuvai BS (1998). Listener: A tool for client-side investigation of hypermedia navigation behavior.
Diaz JL. (1997). A patterned process approach to brain, consciousness, and behavior.
Dutke S (1994). Error handling: visualisations in the human-computer interface and explorative learning.
What does AMME look like?
The interaction process between a user and an interactive system can be automatically recorded and analysed with different programs. However, most of these programs do not allow a structural analysis of logfiles. To overcome this limitation, we developed the program AMME. To analyse your data with AMME, you need the following two things:
an interactive system (e.g., a computer program) with an automatic logfile recording facility;
the AMME tool to transform the recorded logfiles into a form that can be analysed further.
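As a purely illustrative example of the first requirement, a logfile recording facility might write one timestamped state transition per line. The semicolon-separated field layout and the state and action names below are invented for illustration; they are not AMME's actual logfile format:

```python
import io

def record_event(log, timestamp, state, action, next_state):
    """Append one observed transition to a logfile stream.

    The layout time;state;action;next_state is a hypothetical
    example, not AMME's actual logfile format.
    """
    log.write(f"{timestamp};{state};{action};{next_state}\n")

# Simulate a short interaction with a database management system.
log = io.StringIO()
record_event(log, 0, "main_menu", "open_search", "search_mask")
record_event(log, 7, "search_mask", "submit_query", "result_list")
record_event(log, 12, "result_list", "go_back", "main_menu")
print(log.getvalue(), end="")
```

The essential point is that each line captures a complete, discrete transition between predeclared system states, which is what makes the later structural analysis possible.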
The tool kit around AMME is partly shareware and consists of the following programs; in addition, we offer links to the commercial products:
Postscript interpreter software
To run the logfile examples for the chosen interactive database management system, you need the system description file (struct.str) as the first input file for AMME. As executable examples for AMME, you can download two example logfiles (log-1.log; log-2.log) and inspect the corresponding protocol files (log-1.pro; log-2.pro). In addition, you will get the automatically generated PostScript files (log-1.ps, log-2.ps), which can be used as a starting point for the Petri net simulation.
You should get the following output files: log-1.pro, log-2.pro, log-1.mkv, log-2.mkv, log-1.ptf, log-2.ptf, and log-1.ps, log-2.ps (do not open the PostScript files; just print or save them).
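To give a feel for the kind of transition-oriented analysis involved (the actual contents of the output files are defined by AMME; the sketch below only illustrates the general idea), a first-order transition-frequency table can be derived from a recorded state sequence:

```python
from collections import Counter

def transition_counts(states):
    """Count first-order transitions (pairs of successive states)
    in an observed state sequence given as a list of state names."""
    return Counter(zip(states, states[1:]))

# Hypothetical state sequence extracted from a logfile.
trace = ["main_menu", "search_mask", "result_list",
         "main_menu", "search_mask"]
counts = transition_counts(trace)
print(counts[("main_menu", "search_mask")])  # -> 2
```

Such frequency tables over states and transitions are the raw material from which net structure and complexity measures can be computed.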
The AMMEreport contains a list of the most important known bugs and a short description of how to fix them. AMME can only be used to analyse logfiles generated by an interactive system that can be described beforehand with a finite list of all observed states and transitions.
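This finite-state prerequisite can be checked mechanically before running an analysis. The sketch below (state and transition names invented for illustration) verifies that a trace only uses predeclared states and transitions, in the spirit of the struct.str system description:

```python
def validate_trace(trace, states, transitions):
    """Check that every state in the trace and every observed
    transition appears in the predeclared finite lists."""
    for cur, nxt in zip(trace, trace[1:]):
        if cur not in states or nxt not in states:
            return False
        if (cur, nxt) not in transitions:
            return False
    return True

# Hypothetical finite description of an interactive system.
states = {"main_menu", "search_mask", "result_list"}
transitions = {("main_menu", "search_mask"),
               ("search_mask", "result_list"),
               ("result_list", "main_menu")}

print(validate_trace(["main_menu", "search_mask", "result_list"],
                     states, transitions))  # -> True
```

A trace containing an undeclared state or transition would fail this check and could not be analysed structurally.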
We fixed one obstructive bug in AMME (version 1.0); see the updated bug list in the AMMEreport for version 1.1.
If you want access to the source code of AMME, please write an email.
COPYRIGHT: Permission to make digital or hard copies of portions of the content of this website for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation of this source.
DISCLAIMER: We are not responsible for the content of any linked site beyond the scope of this website.