by Long, Landay, Rowe, and Michiels
Summary
Gestures are versatile, powerful, efficient, and convenient, but users often find gestures difficult to remember, and systems often misrecognize them. The authors are developing a tool that assists designers of pen-based UIs in creating and evaluating gesture sets by advising them on how to improve a gesture set. The current work is an investigation into gesture similarity.
Next, related work is presented. The Apple Newton MessagePad popularized pen input; it used single-stroke, iconic gestures to recognize text. The 3Com PalmPilot recognizes special command strokes. Other application areas include music, drawing, air-traffic control, and searching. Regarding perceptual similarity: the logarithm of quantitative metrics was found to correlate with perceived similarity; however, similarity judgments vary between people and across stimuli. Multi-Dimensional Scaling (MDS) reduces the dimensionality of a data set so that its patterns can be expressed in a 2-3 dimensional plot. Several issues arise in using MDS: (1) how to combine data from multiple participants (INDSCAL takes a proximity matrix for each participant as input and accounts for individual differences); (2) how many dimensions to use in the analysis (no more than 1/4 the number of stimuli); (3) how to measure distance (Euclidean distance); (4) how to assign meaning to the axes (by (a) inspecting plots of the stimuli and (b) correlating axes with measurable quantities).
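The paper analyzes participants' judgments with INDSCAL, which handles per-participant proximity matrices. As a rough illustration of the underlying idea only, here is a minimal classical (metric) MDS sketch in NumPy that recovers low-dimensional coordinates from a single pairwise distance matrix; the four toy "gesture" positions are invented for illustration:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed n points into k dimensions from an n x n matrix of
    pairwise distances D, preserving those distances when possible."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]           # take the k largest
    scale = np.sqrt(np.maximum(vals[idx], 0.0))
    return vecs[:, idx] * scale                # n x k coordinates

# toy "gestures" placed at the corners of a unit square
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

X = classical_mds(D, k=2)
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_rec))  # the embedding reproduces the distances
```

INDSCAL differs in that it fits one shared configuration plus per-participant dimension weights, but the output is the same kind of low-dimensional plot the authors inspect.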
Then a gesture similarity experiment is described; two trials were run, with different gesture sets and different subjects:
In trial 1, the gesture set was designed by Long to span a wide range of possible gesture types and to vary in orientation. 21 participants took part; each was asked to select the gesture that seemed most different among triads of 3 gestures drawn from the set, and every combination was shown to each participant exactly once. The goals of the analysis were: (1) to determine which measurable geometric properties influence the perceived similarity of gestures (using plots in which Euclidean distances represent inter-gesture dissimilarities); (2) to produce a model that predicts the perceived similarity of given gestures (by running a regression analysis that produced weights indicating how much each feature contributed to similarity). A model of gesture similarity was derived that correlated 0.74 with the reported gesture similarities.
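For a flavor of the kind of measurable geometric properties involved, here is a small sketch, not the authors' actual feature code, that computes three such features (path length, total absolute angle, and bounding-box aspect) from a single-stroke gesture given as a point sequence; the exact feature definitions here are assumptions:

```python
import numpy as np

def stroke_features(pts):
    """Geometric features of a single-stroke gesture given as an
    (n, 2) array of pen points (definitions loosely Rubine-style)."""
    d = np.diff(pts, axis=0)                      # segment vectors
    length = np.hypot(d[:, 0], d[:, 1]).sum()     # total path length
    turns = np.arctan2(d[1:, 1], d[1:, 0]) - np.arctan2(d[:-1, 1], d[:-1, 0])
    turns = (turns + np.pi) % (2 * np.pi) - np.pi  # wrap each turn to (-pi, pi]
    total_abs_angle = np.abs(turns).sum()          # total absolute turning
    w, h = pts.max(axis=0) - pts.min(axis=0)       # bounding-box extents
    aspect = np.arctan2(h, w)                      # aspect expressed as an angle
    return length, total_abs_angle, aspect

# an "L"-shaped stroke: right 3 units, then up 4 units
L_shape = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])
length, turn, aspect = stroke_features(L_shape)
print(round(float(length), 2), round(float(turn), 2))  # 7.0 and pi/2 = 1.57
```

The regression step then weights per-pair differences in features like these against the dissimilarities derived from the triad judgments.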
In trial 2, three new gesture sets were designed to (1) explore the effect of total absolute angle and aspect, (2) explore length and area, and (3) explore rotation-related features, in order to test the predictive power of the model on new people and new gestures. Participants performed the same task as in trial 1. Again, the aim was to determine which features were used in similarity judgments and to derive a model for predicting similarity. In addition, the similarities predicted by the model derived in trial 1 were compared with the similarities reported by participants in trial 2. The results were not as satisfactory as in trial 1: the newly derived model predicts the reported gesture similarities with a correlation of 0.71.
The correlation between the trial-1 model's predictions and the data from trial 2 was 0.56; the other way around, it was 0.51.
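That fit-on-one-trial, test-on-the-other comparison can be sketched as follows; the feature-difference matrices, the true weights, and the noise level are all invented for illustration, not the paper's data. The point is only that a least-squares model fit to one population predicts its own data well but transfers less well when people weight features differently:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical data: per-pair feature differences and "reported"
# dissimilarities for two trials whose participants weight features differently
X1, X2 = rng.random((80, 3)), rng.random((80, 3))
y1 = X1 @ np.array([2.0, 0.5, 1.0]) + 0.1 * rng.standard_normal(80)
y2 = X2 @ np.array([1.2, 1.5, 0.3]) + 0.1 * rng.standard_normal(80)

def fit(X, y):
    """Least-squares feature weights for predicting dissimilarity."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def corr(a, b):
    """Pearson correlation between predictions and reports."""
    return np.corrcoef(a, b)[0, 1]

w1, w2 = fit(X1, y1), fit(X2, y2)
print(round(corr(X1 @ w1, y1), 2))  # within-trial fit is high
print(round(corr(X2 @ w1, y2), 2))  # cross-trial prediction is lower
```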
The results show that human perception of gesture similarity is complicated; building multiple models from more data might help.
Discussion
The limited predictive power of the models derived from the trials may be due to the design of the gesture sets and the features chosen. Both were determined intuitively by the authors, and so may not match an ideal model for predicting gesture similarity. Another possibility is that no such perfect model exists, one that accurately predicts the perceived similarity of arbitrary gestures for arbitrary people. After all, human perception of similarity is complicated.