G. Annotated Bibliography

 Betts, Bill, David Burlingame, Gerhard Fischer, Jim Foley, Mark Green, David Kasik, Stephen T. Kerr, Dan Olsen, and James Thomas.  Goals and Objectives for User Interface Software.  Computer Graphics, v21, no 2, 1987.

 Bournique, Richard & Siegfried Treu.  Specification and Generation of Variable, Personalized Graphical Interfaces.  International Journal of Man-Machine Studies, 22, 663-684, (1985).

 Some high-level goals of a user interface language are presented.  However, the definitions do not include sufficient information for me to operationalize them.  A BNF-based language for specifying interfaces is then presented, along with a way of implementing it.  The language is used by an experimenter to implement interfaces, which are then used by subjects.  Subjects are tested; they cannot personalize their own interfaces.  A poor summary of a possibly good dissertation.

 Brown, John Seely, Richard R. Burton & Alan G. Bell.  SOPHIE:  A Step Toward Creating a Reactive Learning Environment.  International Journal of Man-Machine Studies, 7, 675-696, (1975).

 Interesting, but not particularly relevant to adaptive user interfaces.  The interface does spelling correction, run-on word correction, ellipsis, and pronoun dereferencing, but there is no adaptation except keeping track of unparsable inputs.

 Card, Stuart, Tom Moran, and Alan Newell.  The Keystroke-level Model for User Performance Time With Interactive Systems.  Communications of the ACM, 23(7), 396-410, (1980).

 Clowes, I., I. Cole, F. Arshad, C. Hopkins, and A. Hockley.  User Modelling Techniques For Interactive Systems in People and Computers:  Designing the Interface (eds. P. Johnson and S. Cook).  Cambridge University Press, New York, 1985.

 A literature survey on user models.  They propose a breakdown of user models based on location of the model in the system.

         1.  None (e.g. autopilot)

         2.  Backend (e.g. heartbeat monitor which flags abnormalities)

         3.  Frontend (e.g. computer aided instruction and interactive front ends)

      This distinction is neither particularly obvious nor particularly useful.  Several systems that use user models are then briefly mentioned:  WUSOR, GUIDON, GRUNDY, UMFE, DEBUGGY.  Several research directions are proposed:

         AI - planning and goal recognition, discourse modelling, belief systems

         CogSci - theories of learning, mental models

 Cohen, Ellis S., Edward T. Smith, and Lee A. Iverson.  Constraint-Based Tiled Windows.  Proceedings: 1st International Conference on Computer Workstations.  IEEE Computer Society, 1985.

 A description of the RTL window system.

 Cohen, Ellis S., A. Michael Berman, Mark R. Biggers, Joseph C. Camaratta, Kevin M. Kelly. Automatic Strategies in the Siemens RTL Tiled Window Manager. Proceedings: 2nd IEEE Conference on Computer Workstations.  IEEE Computer Society, 1987

 Cohen, Paul R. and Edward A. Feigenbaum, (eds.) The Handbook of Artificial Intelligence, vol III.  Los Altos, California: Kaufmann. 1982.

 Croft, W. Bruce.  The Role of Context and Adaptation in User Interfaces.  International Journal of Man-Machine Studies, 21, 283-292, (1984).

 Describes two systems:  POISE, which can be "trained" (by using a graphical programming language), and a document retrieval system which adapts by changing weights on features of an ASN (not explained here).  There is no real connection between the systems.

 Evans, T. G. "A Program for the Solution of Geometric-Analogy Intelligence Test Questions" in Semantic Information Processing, M. Minsky (Ed.), MIT Press, Cambridge, Mass., 1968.

 Foley, James D., Victor L. Wallace, and Peggy Chan.  The Human Factors of Computer Graphics Interaction Techniques.  IEEE Computer Graphics and Applications, v4, no 11, 13-48, November 1984.

 Gaines, B.R.  Axioms for Adaptive Behaviour.  International Journal of Man-Machine Studies, 4, 169-199, (1972).

 How to define "adaptive":

         First, define a "task" as some segmentation of the interaction between the controller and the environment.  For any task, it must be possible to say whether the controller has performed satisfactorily.  For metatheoretic reasons, the set of tasks should be chosen so that the segmentation of an interaction into tasks is unique, and the tasks should give adequate information about the aspects of behavior that are of interest.

         An acceptable interaction is one in which, from some point on, the controller always performs its tasks satisfactorily.

         Adapted - an adapted controller immediately performs acceptably on interactions consisting of repetitions of a single task.

         Potentially adaptive - a controller that will have an acceptable interaction with any one of a set of tasks is potentially adaptive to that set.

         Compatibly adapted - a controller is compatibly adapted to a set of tasks if it is adapted to one task and potentially adaptive to the set.  Note:  in adapting to a new task, the ability to readapt to the previous task, or to any other, may be lost.

         Compatibly adaptive - as above, but it remains potentially adaptive to the entire set.  Note:  not necessarily simultaneously.

         Jointly adapted - given any sequence of tasks from the set, it remains adapted to every member of the set.

         Jointly adaptive - compatibly adaptive to the set (i.e. it can learn the entire set, one task at a time), and it becomes jointly adapted to the entire set during an acceptable interaction with any one task (it learns the entire set from one task).  I don't understand this one:  how can one task train the controller for another?  (See the toy formalization below.)
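
 To keep these definitions straight, I tried encoding them as predicates over a toy controller.  This is my own Python formalization, not Gaines's notation; the Task and Controller classes and the patience bound are all invented for illustration.

        from dataclasses import dataclass
        import copy

        @dataclass(frozen=True)
        class Task:
            name: str
            answer: int        # the one response that counts as satisfactory

        class Controller:
            """Tries responses 0, 1, 2, ... per task until one is accepted, then sticks."""
            def __init__(self):
                self.next_try = {}                     # task name -> response to give
            def respond(self, task):
                return self.next_try.get(task.name, 0)
            def feedback(self, task, ok):
                if not ok:                             # failure: try the next response
                    self.next_try[task.name] = self.next_try.get(task.name, 0) + 1

        def perform(ctrl, task):
            ok = ctrl.respond(task) == task.answer
            ctrl.feedback(task, ok)
            return ok

        def adapted(ctrl, task):
            # Adapted: performs acceptably immediately, on the next presentation.
            return perform(copy.deepcopy(ctrl), task)  # copy so the test doesn't train it

        def potentially_adaptive(ctrl, tasks, patience=100):
            # Potentially adaptive: repetition of any single task eventually succeeds
            # (this controller never forgets, so one success means success forever).
            return all(any(perform(p, t) for _ in range(patience))
                       for t in tasks for p in [copy.deepcopy(ctrl)])

        a, b = Task("A", 3), Task("B", 1)
        c = Controller()
        print(potentially_adaptive(c, [a, b]))         # True
        for _ in range(5):
            perform(c, a)                              # train on task A only
        print(adapted(c, a), adapted(c, b))            # True False: compatibly adapted

 On this reading, a jointly adaptive controller would have to generalize: feedback from one task would have to update next_try for the other tasks too, which is exactly the part I find mysterious.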

 Goldberg, David E. Genetic Algorithms in Search, Optimization, and Machine Learning.  Addison-Wesley, 1988

 A clear explanation of the basic concepts of genetic algorithms and learning classifier systems.  Includes examples which can be generated by hand, as well as interesting exercises for the computer.  Also includes a summary of known applications.  Well worth reading.
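
 For reference, here is the sort of simple GA Goldberg describes, boiled down to the onemax problem (maximize the number of 1-bits in a string).  The problem, population size, and rates are my illustrative choices, not his:

        import random

        def fitness(bits):
            return sum(bits)                     # onemax: count the 1-bits

        def select(pop):
            # fitness-proportionate (roulette-wheel) selection; +1 avoids all-zero weights
            return random.choices(pop, weights=[fitness(b) + 1 for b in pop], k=1)[0]

        def crossover(a, b):
            point = random.randrange(1, len(a))  # simple single-point crossover
            return a[:point] + b[point:]

        def mutate(bits, rate=0.01):
            return [bit ^ (random.random() < rate) for bit in bits]

        pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
        for generation in range(50):
            pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]
        print(max(fitness(b) for b in pop))      # typically close to 20 (all ones)

 Selection, crossover, and mutation are the whole of the simple GA; everything else in the book builds on these three operators.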

 Goldberg, David E.  The Genetic Algorithm Approach: Why, How, and What Next?  Adaptive and Learning Systems, K. S. Narendra, Ed. Plenum, 1986.

 Greenberg, S. and Witten, I. H.  Comparison of Menu Displays for Ordered Lists.  Proceedings of the Canadian Information Processing Society National Conference, Calgary, Alberta, May, 1984.

 Greenberg, Saul & Ian H. Witten.  Adaptive Personalized Interfaces - A Question of Viability.  Behaviour and Information Technology, 4, 1, 31-45, (1985).

 An existence proof that a self-adapting menu system increases performance.  The claim is made that each user experiences an increase, but the statistics are for the average only.  The experiment used an adaptive menu system for a telephone dialer, where adaptation is based on past selection frequency and involves changing the menu hierarchy.  (A sketch of the idea follows.)
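
 The core mechanism, as I understand it, is just frequency-ordered menus.  A sketch of the idea, my reconstruction rather than their code, flattened to a single menu where the paper restructures a hierarchy:

        from collections import Counter

        class AdaptiveMenu:
            def __init__(self, items):
                self.items = list(items)
                self.counts = Counter()
            def display(self):
                # most frequently selected items first; ties keep the original order
                return sorted(self.items, key=lambda item: -self.counts[item])
            def select(self, item):
                self.counts[item] += 1

        menu = AdaptiveMenu(["Smith", "Jones", "Lee", "Patel"])
        for name in ["Lee", "Lee", "Patel", "Lee"]:
            menu.select(name)
        print(menu.display())    # ['Lee', 'Patel', 'Smith', 'Jones']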

 Hancock, P.A., M.H. Chignell & A. Loewenthal.  An Adaptive Human-Machine System.  IEEE 1985 International Conference on Cybernetics and Society, 627-629, 1985.

 Discusses requirements of a system which adapts to a human user (treated as a servomechanism) by measuring Mental Workload (MWL) and changing the task when MWL goes outside allowed bounds.  Briefly discusses relevant differences between human mechanisms and machines.  Shows no examples, nor does it say anything convincing about their ability to make this system work in practice.  Very disappointing.

 Hayes-Roth, Frederick, Donald A. Waterman, and Douglas B. Lenat.  Building Expert Systems.  Reading, Mass.: Addison-Wesley, 1983.

 Holland, John, Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, 1975.

 Holland, John. (1986) Escaping brittleness: The possibilities of general purpose machine learning algorithms applied to parallel rule-based systems.  In R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (eds.) Machine Learning: An artificial intelligence approach, vol 2.  Los Altos, California: Kaufmann.

 Holland, John, Keith Holyoak, Richard Nisbett, and Paul Thagard.  Induction: Processes of Inference, Learning, and Discovery.  MIT Press, 1986.

 A basis for a general theory of induction.  Describes learning classifier systems, but doesn't go into the mathematical analysis (for which, see Holland 1975).  A gold mine of good information and theories.  See separate notes.

 Holland, John and J. Reitman (1978).  Cognitive Systems Based on Adaptive Algorithms.  In D. Waterman and F. Hayes-Roth (eds.), Pattern-directed  Inference Systems.  New York: Academic Press, 1978.

 Innocent, P.R.  Towards Self-adaptive Interface Systems.  International Journal of Man-Machine Studies, 16, 287-299, (1982).

 Presents the idea of a self-adaptive system, which necessitates a "soft facade" (another name for a UIMS?).  Brings up a potential problem:  stability.

 This appears to be a paper with little firm content.  When I tried to pin down precisely what he was telling me, I came up with very little in the first three sections. Section 4 was totally blue-sky.

 Langston, Diane, and Dennis Grantham, eds.  Introducing the Andrew Toolkit, 1988.

 Lenat, D. B. 1976.  AM: An artificial intelligence approach to discovery in mathematics as heuristic search.  (Doctoral dissertation.  Reprinted in R. Davis and D. B. Lenat. 1980.  Knowledge-based systems in artificial intelligence.  New York: McGraw-Hill.)

 Lenat, D. B. 1977.  On automated scientific theory formulation: A case study using the AM program.  In J. E. Hayes, D. Michie, and L. I. Mikulich (Eds.), Machine Intelligence 9.  New York: Halsted Press, 251-286.

 Lenat, D. B.  EURISKO: A program that learns new heuristics and domain concepts.  The nature of heuristics III: Program design and results.  Artificial Intelligence, March 1983.

 Macmillan, Stuart.  Knowledge Acquisition for a Personal Agent.  IEEE 1985 International Conference on Cybernetics and Society, 736-740, 1985.

 Another poor summary of a dissertation.  Not enough detail to figure out what he really did.  Examples were so terse as to be incomprehensible.  He proposed that the Personal Agent be composed of many cooperating experts (Personal Knowledge Systems), but didn't explain how they cooperated, or what they did in cases where two disagreed.  He said he talked about how to trigger changes to the system, but it was so superficial that I didn't get anything out of it.

 Minsky, M. & S. Papert, Perceptrons, MIT Press, Cambridge, Mass., 1969.

 Myers, Brad.  Issues in Window Management Design and Implementation, in Methodology of Window Management, Hopgood, F. R. A. ed., Springer-Verlag, New York, 1986.

 Rhyne, Jim, Roger Ehrich, John Bennett, Tom Hewett, John Sibert, Terry Bleser.  Tools and Methodology for User Interface Development.  Computer Graphics, v21, no 2, 1987.

 Rich, Elaine.  Users Are Individuals:  Individualizing User Models.  International Journal of Man-Machine Studies, 18, 199-214, (1983).

 Types of user models may be characterized along three dimensions:

         1.  Single model canonical user vs. models of individual users

         2.  Specified explicitly vs. inferred

         3.  Long term vs. short term characteristics

 A short description of a help system for Scribe, which models individual users on two dimensions:  knowledge of Scribe, and knowledge of system-related concepts.  Each command is related to concepts, and concepts are rated by level.  An explanation is given in terms of concepts rated at the user's level.  The user's level is determined by the concepts s/he uses to pose questions, and is changed when the next question shows non-comprehension of the previous answer.  (A sketch follows.)
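
 The selection mechanism, as I reconstruct it (the concept names, levels, and wordings below are hypothetical, not Rich's):

        # Pick the explanation whose concepts the user's rated level covers.
        concept_level = {"file": 1, "document-type": 2, "environment": 3, "style-sheet": 4}

        explanations = {
            "@make": [   # alternative wordings of one command, by the concepts they demand
                ({"file"}, "Tells Scribe what kind of document your file is."),
                ({"document-type", "style-sheet"},
                 "Selects the document type, and with it the style sheet applied throughout."),
            ],
        }

        def explain(command, user_level):
            fits = [(needs, text) for needs, text in explanations[command]
                    if all(concept_level[c] <= user_level for c in needs)]
            # prefer the explanation that uses the most of what the user knows
            return max(fits, key=lambda nt: len(nt[0]))[1] if fits else None

        print(explain("@make", 1))    # novice wording
        print(explain("@make", 4))    # expert wording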

 A short description of Grundy, which recommends books based on stereotypes of users, and which changes the values and confidences of facets of those stereotypes when its suggestions are rejected.

 Rich, Elaine.  Artificial Intelligence, New York: McGraw-Hill, 1983.

 In an otherwise good discussion of learning, she makes the statement about neural nets that "If you start from nothing, you will get only a short way away from nothing" and dismisses the field entirely.  She should know better than to make such rash statements.

 Roach, J. & Wilding, M.  Adapting to Individual Users:  The User Trainable Interface.  IEEE 1985 International Conference on Cybernetics and Society, 228-235, (1985).

 The user could stop the program (a simulation of aircraft-carrier air traffic control) at any point and change the interface.  A change then applied at any time the simulation was in the same state (the state variables were the number of planes, the presence of an emergency, and the number of missed approaches).  The user could change the locations of various areas, color, and audio output.  Changes were made by menu choices and locator picks.  Note:  the change menu itself is not trainable.  The interface specification was implemented as Prolog rules.  No validation.  It is not clear how generalizable these techniques are.  (A guess at the mechanism's shape follows.)
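
 The state-keyed generalization could be as simple as a table from simulation state to interface settings.  A guess at its shape (mine, not theirs; they used Prolog rules, and my state encoding and setting names are invented):

        # interface settings keyed by the simulation state in which the user
        # made the change (their state variables, my encoding)
        customizations = {}      # (n_planes, emergency, missed) -> settings dict

        def record_change(n_planes, emergency, missed, settings):
            customizations[(n_planes, emergency, missed)] = settings

        def current_interface(n_planes, emergency, missed, default):
            return customizations.get((n_planes, emergency, missed), default)

        record_change(3, True, 0, {"alert_audio": True})
        print(current_interface(3, True, 0, {}))    # {'alert_audio': True}
        print(current_interface(2, False, 0, {}))   # {} (no customization recorded)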

 Rumelhart, David E. and James L. McClelland (Eds.). Parallel Distributed Processing: Explorations in the Microstructures of Cognition.  Cambridge, Mass: MIT Press. 1986.

 The bible of connectionism, but a very difficult read.  I still have never understood how networks are supposed to be trained.  (My attempt at the smallest possible worked example follows.)
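
 For the record, here is my best understanding of the generalized delta rule (vol. 1, ch. 8), boiled down to a two-layer network learning XOR.  This is a sketch of the standard procedure, not the book's code; the learning rate, epoch count, and initialization are arbitrary choices:

        import math, random

        random.seed(0)
        sig = lambda x: 1 / (1 + math.exp(-x))

        # 2 inputs -> 2 hidden units -> 1 output; each weight list is [w1, w2, bias]
        w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
        w_o = [random.uniform(-1, 1) for _ in range(3)]
        data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
        rate = 0.5

        for epoch in range(20000):
            for (x1, x2), target in data:
                # forward pass
                h = [sig(w[0]*x1 + w[1]*x2 + w[2]) for w in w_h]
                out = sig(w_o[0]*h[0] + w_o[1]*h[1] + w_o[2])
                # backward pass: each unit's error signal is its error times
                # the slope of its sigmoid; hidden errors come through w_o
                d_out = (target - out) * out * (1 - out)
                d_h = [d_out * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
                # every weight moves in proportion to (error signal) * (its input)
                w_o[0] += rate * d_out * h[0]
                w_o[1] += rate * d_out * h[1]
                w_o[2] += rate * d_out
                for i in range(2):
                    w_h[i][0] += rate * d_h[i] * x1
                    w_h[i][1] += rate * d_h[i] * x2
                    w_h[i][2] += rate * d_h[i]

        for (x1, x2), target in data:
            h = [sig(w[0]*x1 + w[1]*x2 + w[2]) for w in w_h]
            print((x1, x2), round(sig(w_o[0]*h[0] + w_o[1]*h[1] + w_o[2]), 2))
        # outputs near 0, 1, 1, 0 when training succeeds; backprop can also
        # stall in a local minimum, a problem the book itself discusses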

 Samuel, Arthur.  Some Studies in Machine Learning Using the Game of Checkers.  IBM Journal of Research and Development, 3, 210-229, (1959).

 Sayers, Dorothy L.  Busman's Honeymoon.  Harmondsworth, England: Penguin, 1937.

 This, like her other Lord Peter Wimsey novels, draws a striking portrait of Mervyn Bunter, the perfect butler.  The perfect gentleman's gentleman, Bunter exhibits many of the fine qualities of the top sergeant he was in the war.  He knows how to accomplish virtually any task asked of him; nothing is too menial, little is beyond his ken.  (See pages 58 and 186.)  Like a true top sergeant, he allows the officers the total responsibility for what to do and why, himself being responsible for the how.  Initiative remains totally with Lord Peter unless it has been implicitly or explicitly delegated.  The significant exceptions are cases where Bunter is possessed of relevant information unknown to his master.  Then, in spite of direct orders to the contrary, he will do what he believes Lord Peter would ask for.

 Bunter is, to me, the prototype of the perfect computer.

 Senay, Hikmet.  A Knowledge-Based Approach to Designing Intelligent Interfaces.  Ph.D. dissertation, Syracuse University, 1987.

 Sleeman, D.  UMFE:  A User Modelling Front-End Subsystem.  International Journal of Man-Machine Studies, 23, 71-88, (1985).

 Types of user models are characterized by the nature and form of the information contained and the type of inference engine needed:

         1.  Scalar - single number describes user.  e.g. KLM

         2.  Ad Hoc - e.g. SOPHIE - what readings user has taken from circuit

         3.  Profile Models - e.g. GRUNDY

         4.  Overlay Models - the user's competence is a subset of the expert's.  The difference from profile models is the use of topics the user intends to acquire.  (What is the real difference?  How does the modeler know intent?)

         5.  Process Models - the specification is executable and does not depend on a specialized inference engine, e.g. BUGGY.  (Doesn't seem like much difference to me.)

 (Critique of Samuel (63) in Waterman (70))

 UMFE has a detailed user model, including inference rules to decide which concepts the user is likely to know.  First, UMFE asks the user whether s/he knows certain concepts, then infers others from explicit inference rules and implicit use of difficulty ratings.  (A sketch of this style of inference follows.)
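
 The style of inference, as I read it (the concepts, difficulty ratings, and rules below are hypothetical, not UMFE's):

        difficulty = {"voltage": 1, "resistance": 1, "ohms-law": 2,
                      "impedance": 3, "thevenin": 4}
        rules = {"thevenin": ["ohms-law"], "impedance": ["resistance"]}  # knows X => knows Y

        def infer(known):
            known = set(known)
            changed = True
            while changed:
                changed = False
                for c in list(known):                      # explicit rules
                    for implied in rules.get(c, []):
                        if implied not in known:
                            known.add(implied); changed = True
                ceiling = max(difficulty[c] for c in known)
                for c, d in difficulty.items():            # implicit difficulty ordering:
                    if d < ceiling and c not in known:     # anything easier than a known
                        known.add(c); changed = True       # concept is assumed known
            return known

        print(sorted(infer({"impedance"})))
        # ['impedance', 'ohms-law', 'resistance', 'voltage']

 The difficulty step is assumption 4 below, which is where the model seems most fragile.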

 When modelling users, you need to address the assumptions made about them.  UMFE assumes:  1.  Users know what they know;  2.  Users' knowledge is stable and context independent;  3.  about internal structure;  4.  concepts can be totally ordered in difficulty, and the user's level can be known;  5.  the main user will understand explanations better if difficult concepts are omitted.

 No experimental validation presented.

 Jerrams-Smith, J.  SUSI - A Smart User-System Interface, in People and Computers:  Designing the Interface (eds. P. Johnson and S. Cook).  Cambridge University Press, New York, 1985.

 Describes the design goals of a marvelous system which will save the world for (or from) Unix.  However, only a very small part has been implemented, and that part has not been experimentally verified.  Many details necessary for understanding what has been implemented are left out.  From what was presented, I would not predict eventual success.

 Totterdell, Peter & Paul Cooper.  Design and Evaluation of the AID Adaptive Front-End to Telecom Gold, in People and Computers:  Designing for Usability (eds. M.D. Harrison and A.F. Monk).  Cambridge University Press, New York, 1986.

 A good attempt, but a negative result.  The design was simplified so much that users couldn't do any work with it, and incorrect inferences interfered with performance.  What can be learned from this?  1.  They hadn't learned how to infer plans from actions.  2.  They hadn't learned how to make effective help messages.  The general message:  designing experiments without a good theory is often futile.

 Trevelyan, Robert and Dermot P. Browne.  A Self-Regulating Adaptive System, in Proceedings of CHI+GI 1987 (Toronto, April 5-9).  ACM, New York, pp 103-107, 1987.

 Waterman, D. A.  Generalization learning techniques for automatizing the learning of heuristics.  Artificial Intelligence, 1, 121-170, 1970.

 Winston, Patrick Henry.  Learning Structural Descriptions from Examples, in The Psychology of Computer Vision, P. H. Winston (Ed.), McGraw-Hill, New York, 1975.

 Winston, Patrick Henry.  Learning and Reasoning by Analogy.  Communications of the ACM, vol 23, no 12, pp 689-703, Dec. 1980.

 Winston, Patrick Henry.  Artificial Intelligence, 2nd edition.  Reading, Mass.: Addison-Wesley, 1984.

 He presents several ways of learning with a teacher.  The only way presented of learning without a teacher is Lenat's, which used interestingness as a value function.  The role of a teacher is to select training examples, both correct examples and near misses.

 He quotes Martin's Law, "You can't learn anything unless you almost know it already," attributed to William A. Martin.

 NeWS Manual, Sun Microsystems.

 Windows and Window Based Tools: Beginner's Guide, Sun Microsystems, 1987.

 

 
