An approach for designing and evaluating a plug-in vision-based tabletop touch identification system

Andrew Clayphan, Roberto Martinez-Maldonado, Christopher Ackad, Judy Kay
School of Information Technologies, University of Sydney, NSW 2006, Australia
[email protected], [email protected], {christopher.ackad,judy.kay}@sydney.edu.au

ABSTRACT

Key functionality for interactive tabletops to provide effective collaboration affordances requires touch identification, where each touch is matched to the right user. This can be valuable for providing adaptive functions, personalisation of content, collaborative gestures and the capture of differentiated interaction for real-time or later analysis. While there is increased attention on touch-identification mechanisms, there is currently no developed solution that readily enhances available tabletop hardware with such functionality. This paper proposes a plug-in system that adds touch identification to a conventional tabletop. It also presents an analysis tool and the design of an evaluation suite to inform application designers of the effectiveness of the system in differentiating users. We illustrate its use by evaluating the solution under a number of conditions: scalability (number of users); activity density; and multi-touch gestures. Our contributions are: (1) an off-the-shelf system to add user differentiation and tracking to currently available interactive tabletop hardware; and (2) the foundations for systematic assessment of touch identification accuracy for vision-based systems.


Author Keywords

Evaluation, Design, User differentiation, Interactive tabletops.

ACM Classification Keywords

H.5.2. Information Interfaces and Presentation: User Interfaces – Graphic User Interfaces


INTRODUCTION

Interactive tabletops are systems distinguished from other more widely used devices, such as desktop and laptop computers, by offering a horizontal, relatively large display suitable for group work, through which users can directly interact with digital information, or even tangible objects, while keeping mutual awareness and direct communication [22]. Interactive tabletops are good examples of "disappearing computers" [29], from the perspective of "mental disappearance", because they convey the impression that the computer has been taken out of its usual container in the environment [22]. From the users' point of view, tabletops allow users to interact directly with objects instead of using a keyboard or mouse, and allow users to be aware of others' actions. Users can combine the advantages of the physical setting provided by a traditional around-the-table meeting with the possibilities that a digital environment can offer [27]. In spite of the potential value of interactive tabletops for facilitating collaboration and group work, most tabletop hardware cannot directly distinguish users from multi-touch input, with the notable exception of the DiamondTouch [6]. Streitz et al. [29] envisioned that an intelligent ubiquitous environment should collect contextual data and aggregate it to provide contextually appropriate information in an intuitive and unobtrusive way, so that users can comprehend it easily, for guidance and subsequent actions determined by the users themselves, keeping control of what to do next.

A number of potentially valuable tabletop affordances require touch identification, where each touch is associated with the correct user. This is valuable for providing adaptation of group activities [14], personalisation of content or the user interface [24], collaborative gestures [20, 28], enforced turn-taking [7], analysis of social territoriality [23, 30], and scrutiny of group dynamics [8, 10, 16, 23]. This is different from user identification, which refers to the affordance of the tabletop system to associate each touch with the user who performed it, where the user is recognised or authenticated into the system; for this to be complete, it requires a way to track touches performed by users after they have been identified (authenticated).

There are benefits in tabletop systems with touch identification, meaning that the system can link each touch to the user who performed it. This functionality, however, is not available on most tabletop hardware today. To address this, we have created a solution that can be used with arbitrary tabletop hardware to provide touch identification. In conjunction with creating our touch identification mechanism, we established a systematic procedure for testing the accuracy of such systems.


There has been increased attention on touch-identification mechanisms, using physical devices attached to users' hands, pens, sensors, or vision-based systems. These mostly depend either on obtrusive hardware or on a specific touch technology, and there are no developed solutions that readily enhance available tabletop hardware with such functionality. Moreover, it is surprising that the accuracy of touch-identification systems has not been systematically reported. This makes it hard for a designer to know the interaction they can expect a system to support, the means to adapt systems to other interactive tabletops, and the prospects for deployment in the wild. This paper presents a plug-in system that adds touch identification to a conventional tabletop irrespective of the touch technology the hardware relies upon, based on the following design goals: real-time operation; scalability; support for multiple users; seamlessness; and low cost. An analysis tool is presented along with the design of the evaluation suite, to inform application designers of the effectiveness of the system in differentiating users. We validate the use of our system by evaluating the implementation under a number of conditions, including: (1) scalability (number of users); (2) activity density (amount of touches for a range of tabletop applications); and (3) collaborative gestures (simultaneous multi-user interaction). The contributions of this paper are: an off-the-shelf system that can be used to add user differentiation and tracking to currently available interactive tabletop hardware; and the foundations for systematic assessment of touch identification accuracy for vision-based systems.

Some of limitations of the DiamondTouch have been addressed by Berard [2] with Gaussian models of touch responses to reduce shadowing of touch locations with two or more fingers. These authors evaluated the DiamondTouch with four classes of two-finger gestures likely to cause ‘shadowing’: where two fingers crossing in opposite directions resulted in accuracy levels as low as 15%. With their models, they were able to raise the lowest accuracy by a factor of 4. A second effective and accurate way for interactive tabletop hardware to afford user differentiation is by using small pieces of hardware that people can grab or wear on their hands or arms. One such approach is the use of pen pointers. Some examples include work by Collins [5] and Kharrufa [10]. The first system adapted Mimio pens,1 which are commonly used on vertical whiteboard devices, to build a non-multi input pen-based tabletop using overhead projection. Similarly, the second system adapted a vertical system, called Promethean Activboard,2 which is a pen-based device which allows for multiple and differentiated input.

This paper is organised as follows. The next section summarises previous work on touch and user identification at tabletops. We then explain the design of our plug-in touch identification system, followed by the design of our evaluation suite and how it was used to design tests for a particular tabletop. We report how the evaluation was conducted, present results from our system and then discuss them. We conclude with directions for future work.

BACKGROUND

One aspect of interactive tabletop research that has received increased attention is the possibility of using these devices to build context-aware systems, which can provide richer and adapted experiences. This section explores the current mechanisms available for touch identification and user identification. For each, we show the implications of applying them in authentic environments, their evaluation and their level of obtrusiveness.


Touch Identification

Touch identification refers to the capability of the tabletop system to associate each touch on the interactive surface with the user who performed that touch. This is also called user differentiation – as the system knows there are a number of different users, but does not know the identity (name or user model) associated with each user. Most tabletop solutions in this review fall into this category.

The most common system for touch identification in the community of tabletop research is the DiamondTouch [6]. This can distinguish users by transmitting electrical signals to different locations on the table. When a user touches the interactive surface, these signals are coupled through the user's body to sensors placed on their chairs, identifying the parts of the table each user is touching. This system provides very accurate user differentiation since, ideally, there is little chance of a mistaken touch given that users' bodies are used to establish an electrical circuit. However, in order to obtain accurate user identification, the natural movement of people around the tabletop is restricted, forcing users to remain seated. This mechanism fails if a user gets up off the chair, if users change seats, or if a person performs touches while maintaining physical contact with another (seated) person, whereby the DiamondTouch will register touches under the other person's identity. A touch may be missed if a gesture causes shadowing of fingers (overlap of fingers) or if the user has two fingers that are very close together. Even though this approach has worked well under controlled conditions, the accuracy of this system is not stated.

Some of the limitations of the DiamondTouch have been addressed by Bérard and Laurillau [2], who used Gaussian models of touch responses to reduce shadowing of touch locations with two or more fingers. They evaluated the DiamondTouch with four classes of two-finger gestures likely to cause 'shadowing': two fingers crossing in opposite directions resulted in accuracy levels as low as 15%. With their models, they were able to raise the lowest accuracy by a factor of 4.

A second effective and accurate way for interactive tabletop hardware to afford user differentiation is to use small pieces of hardware that people grab or wear on their hands or arms. One such approach is the use of pen pointers; examples include work by Collins [5] and Kharrufa [10]. The first system adapted Mimio pens (http://www.mimio.com), commonly used on vertical whiteboard devices, to build a single-input pen-based tabletop using overhead projection. Similarly, the second system adapted a vertical system, the Promethean Activboard (http://www.prometheanworld.com), a pen-based device that allows multiple and differentiated input.

Continuing with hardware, IdWristbands [19] used a series of infrared LEDs, placed on a wristband worn by the user, that transmitted a specific blinking pattern in order to associate touches with the corresponding user's arm. Marquardt et al. [13] developed fiduciary-tagged gloves that enabled a tabletop both to distinguish between one person's or multiple people's hands and, more specifically, to identify the parts of a hand touching the surface, including fingertips, knuckles, palms, and the sides or backs of the hand. Medusa [1] was a proximity-aware multi-touch tabletop that used a network of 138 proximity sensors placed on the edges of the tabletop to detect a user's location, arm direction and the particular hand being used, and to associate touches with specific users and hands.

Even though pens, gloves, and wristbands offer great affordances for user differentiation in horizontal tabletop interactive systems, they can impose a number of limitations on the interaction space. Pens have proved effective when the activity performed on the tabletop requires a higher level of input precision, but natural direct-touch interaction is better in terms of efficiency and user preference [4]. Wearable gadgets are not practical in real settings such as a classroom or walk-up-and-use scenarios.





A third approach to providing touch identification consists of using vision systems below or above the interactive surface. Zhang et al. [31] developed an approach for discriminating touches on a vision-based tabletop by inferring finger orientation from features of the shape of the touch, analysed with a machine learning model. They reported accuracy levels of 97.9% and 94.7% in two studies under ideal conditions, where users were neither overlapping their touches nor performing real gestures. They also reported issues with overhead lighting.


Approaches similar to the one presented in this paper use an overhead Kinect camera to detect touch interaction and objects over an interactive tabletop. The dSensingNI framework [12] is capable of tracking users' fingers and hands to provide advanced multi-touch interactions on the horizontal surface or on tangible objects; its focus, however, is not user differentiation. A later, similar approach is Extended Multitouch [21], which enables touch, finger and hand posture detection, and also distinguishes between different users and the hand used. The three previous examples, however, have yet to be used in real settings.

Lastly, a promising approach that is starting to be explored is capacitive sensing to differentiate users on very small handheld devices [9], which uses the unique electrical properties of individuals (without the limitations of the DiamondTouch system).

User identification

Common approaches for user identification have explored the use of passwords [11] or biometric solutions to associate touches with a persona. Once a user is registered and recognised, the system provides access to personal data displayed on the horizontal display, can associate objects with the user's identity, and allows customisation of the appearance, content or functionality of the interface.

Biometric solutions such as HandsDown [25] aimed to perform user identification by analysing a user's hand contours. A recent follow-up approach is MTi [3], an algorithm that identified users by examining a number of features extracted from the touch points of users' fingers placed in a certain order on the interactive surface. This approach is applicable to all kinds of interactive tabletops irrespective of the technology used to sense contact points. These approaches did offer a clear solution for a multi-user setting and for associating future touches with specific users; however, even though they report on the usability of the interface, they do not specify accuracy within authentic scenarios.

From the related work presented, the criteria used to measure accuracy for touch identification vary across a number of different factors. Reported results are thus not comparable, nor are they in a form that enables an application designer to determine which system will serve their needs; it is unclear which forms of interaction are well accommodated by touch identification, and which are not.

In summary, there are several classes of touch-identification mechanisms, each inherently prone to different forms of inaccuracy or tightly coupled to a particular touch-sensing technology. This makes them difficult to use with other already-available tabletop hardware.

Our work goes beyond previous work by exploring a touch identification system that can be inexpensively added to a wide spectrum of tabletop hardware technologies. It also presents an evaluation suite to validate the accuracy of ours and any other vision-based touch identification system. Both the touch identification system and the evaluation suite are freely available for use by designers (http://chai.it.usyd.edu.au/Projects/DataMiningForTabletop).

DESIGN OF OUR TOUCH IDENTIFICATION SYSTEM

Our primary purpose is to provide a simple-to-use touch identification system which can be used with any currently available tabletop system. Our design is based on the following high-level goals:

1. Real-time – The system should be near instantaneous and provide feedback to users being identified.

2. Scalable – The system should handle increasing levels of activity and still be able to identify touches.

3. Support for multiple users – The system should support a minimum of four people working collaboratively.


4. Seamless – The system should work with current tabletop technology without requiring the user to wear physical devices or forcing the user to engage with special icons or tokens on the display. From the user's viewpoint, the technology should be imperceptible. It should also work with different operating systems.



5. Low cost – The system should be inexpensive, so that designers can take advantage of the system on offer.

Our system forms part of Collaid [15], a sensing environment that enhances a tabletop's hardware awareness of users' activity. It uses an off-the-shelf Kinect sensor mounted above the tabletop. The software was designed to integrate with any tabletop device. It is built using the OpenNI (http://www.openni.org/) Kinect drivers and SDK, allowing for cross-platform support. It has been tested on a PQLabs (http://multitouch.com/) touch overlay running Windows/Linux, and on a Samsung SUR40 (Microsoft Surface 2) running Windows. Our system takes a depth image and, once the background has been eliminated, can detect any object, person or body above the surface, as shown in Figure 1.
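To make the depth-processing step concrete, the following is a minimal sketch of how the baseline capture and background elimination could be implemented with NumPy. The tolerance value, function names and array conventions are illustrative assumptions, not the actual Collaid implementation.

import numpy as np

DEPTH_TOLERANCE_MM = 40  # assumed noise margin for the depth sensor

def capture_baseline(frames):
    """Average several empty-scene depth frames into a baseline (calibration)."""
    return np.mean(np.stack(frames), axis=0)

def foreground_mask(depth_frame, baseline):
    """Pixels significantly closer to the sensor than the baseline are
    treated as objects or bodies above the surface."""
    closer = (baseline - depth_frame) > DEPTH_TOLERANCE_MM
    valid = depth_frame > 0  # zero depth means no reading on many sensors
    return closer & valid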


Our touch identification system is first calibrated to record a baseline of the interactive tabletop and its surroundings without users around it. This only has to be completed once, when the Kinect is first mounted. The application can easily be configured for up to 6 users, supports different heights of the depth sensor and can even cope with changes in the inclination of the interactive tabletop. The touch identification software and the tabletop application are linked through a simple handshaking network protocol. Each time the application receives a touch event from the hardware, it sends the touch identification number and the coordinates to the touch identification system. This resolves the user who performed the touch through an algorithm that goes through the captured depth images from the sensor. The result of this algorithm, which includes the touch identification number and the position of the user associated with that touch, is sent back to the application.



Different algorithms could be used to perform the touch identification using the depth information. For this paper we used a weighted, greedy, best-first-like algorithm. Our emphasis is not the complexity of the algorithm but its plug-in architecture, with its potential for use with existing and future systems. Figure 2 shows how this algorithm works for an example pinch gesture. The purpose of the algorithm is to trace a line between the touch and the body or head of the user being identified.

Figure 1. Depth Image from the Sensor.

The following algorithm is performed each time a user touches the interactive surface:

1. A user touches the surface; the regular tabletop application receives the coordinates of the touch and sends them to our touch identification system.

2. Our touch identification system matches these coordinates against the depth image. The algorithm needs to decide where the user's body is, so it chooses the closest hand around the touch to be followed (scan radius = r pixels) (Figure 2–2).

3. A simple implementation of a greedy search algorithm chooses the direction of the path that has the maximum body parts detected. The algorithm considers 8 radially equidistant directions around the touch and each stepping node (Figure 2–3).

4. Steps (2) and (3) are repeated continuously until a minimum distance to a user's head has been reached or an upper bound of repetitions is achieved. Weights are applied to each of the 8 directions so the algorithm does not double back on itself, motivating the search to continue towards a user instead of returning to the user's finger.
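The following is a minimal sketch, in Python, of the weighted greedy walk in steps (3) and (4). The step size, scoring over sampled points, back-tracking penalty and head-distance threshold are illustrative assumptions; the paper does not publish its parameter values or data structures.

import math

# 8 radially equidistant unit directions around a point
DIRECTIONS = [(math.cos(a), math.sin(a))
              for a in (i * math.pi / 4 for i in range(8))]

def body_lookup(mask, fx, fy):
    """1.0 if the (rounded) pixel lies on a detected body, else 0.0."""
    xi, yi = int(round(fx)), int(round(fy))
    if 0 <= yi < len(mask) and 0 <= xi < len(mask[0]):
        return 1.0 if mask[yi][xi] else 0.0
    return 0.0

def trace_to_user(touch, body_mask, heads, step=12, max_steps=60,
                  back_penalty=0.5, head_radius=25):
    """Greedy best-first walk from a touch towards the nearest head.

    body_mask[y][x] is True where the depth image shows a body part;
    heads is a list of (x, y) head positions. Returns a user index,
    or None if no head is reached within the step budget."""
    x, y = touch
    prev_dir = None
    for _ in range(max_steps):
        # stop once we are close enough to some user's head
        for user, (hx, hy) in enumerate(heads):
            if math.hypot(hx - x, hy - y) <= head_radius:
                return user
        best_dir, best_score = None, -1.0
        for d in DIRECTIONS:
            # score a direction by how much body it passes over
            score = sum(
                body_lookup(body_mask, x + d[0] * step * k, y + d[1] * step * k)
                for k in (0.5, 1.0, 1.5))
            # weight against reversing back towards the finger
            if prev_dir and d[0] * prev_dir[0] + d[1] * prev_dir[1] < 0:
                score *= back_penalty
            if score > best_score:
                best_dir, best_score = d, score
        x, y = x + best_dir[0] * step, y + best_dir[1] * step
        prev_dir = best_dir
    return None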

Our touch identification system employs a modular approach, for ease of swapping components and to allow for quick future upgrades. Figure 3 shows the composition of our plug-in system, with a tabletop application, a tabletop display and a depth sensor. Items represented by dashed boxes can be easily interchanged. Additionally, our touch identification system integrates with a number of other systems: a head tracker (a) and a user authentication system (b) (both shown in orange), which can enhance the current capabilities of the system as well as provide new functionality.

The following occurs when the system is used with a tabletop application; the steps refer to those in Figure 3:

1. The touch identification system is calibrated, only once, during installation of the sensor device.

2. The depth sensor sends information to the touch identification system. In parallel, the tabletop display sends information to the tabletop application, which in turn transmits this information, formatted with a touch identifier, via TCP to the touch identification system.

3. The user locations are read into the system, allowing the touch identification algorithm to match a touch location with a user. Our current implementation assumes fixed positions.

4. Results with the user identifier and touch identifier are sent back to the tabletop application.

5. The tabletop application provides feedback to users at the tabletop, such as a uniquely coloured halo corresponding to the user, or an adapted widget.
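The paper does not specify the wire format of the handshaking protocol. The following sketch shows what the round trip in steps 2–4 could look like from the application side, assuming one JSON message per line over TCP; the message fields, host and port are hypothetical.

import json
import socket

def identify_touch(touch_id, x, y, host="localhost", port=9000):
    """Ask the touch identification service who produced a touch.

    Sends the touch identifier and coordinates, and reads back the
    resolved user identifier and position (format assumed here)."""
    with socket.create_connection((host, port)) as conn:
        request = {"touch_id": touch_id, "x": x, "y": y}
        conn.sendall((json.dumps(request) + "\n").encode("utf-8"))
        reply = conn.makefile().readline()
    return json.loads(reply)  # e.g. {"touch_id": ..., "user_id": ..., "position": ...}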

Figure 2. Diagrammatic Overview of the Algorithm – (1) Pinch gesture, (2) Scan radius of pixels of finger, and (3) Step size of trace measured in pixels.


Condition                  Designer questions
Number of users            How accurate is the system for 2, 3, 4, 5 and 6 users?
Close to the user          How accurate when touches are close to the user?
Near other users           How accurate when two or more users' touches are close to each other?
Arms/hands crossing        How accurate if two or more hands or arms are crossing?
Multi-touch                How accurate is the system when the user performs multi-touch gestures?
Random (activity density)  How accurate is the system for touches randomly spread across the tabletop?
Stand/Sit                  How accurate is the system if users stand, or if they are seated?
Overall                    What is the overall accuracy for all of the above?
Individual variability     Does accuracy vary across users?

Table 1. Conditions identified as important to test in a touch identification system.

Figure 3. Modular structure of the touch identification system, and how it fits within a tabletop application, and the potential for other components to plug into it.

Figure 4. Examples of conditions designers may wish to know about when evaluating a touch identification system.


DESIGN OF THE TEST EVALUATION SUITE

In order to test the effectiveness of our identification mechanism, we designed an evaluation suite, with an analysis tool to interpret collected results. This allows us to evaluate our touch identification mechanism across a range of different dimensions, as well as opening the possibility of systematic assessment of touch identification accuracy in other vision-based systems.

A series of goals drove the design of our evaluation suite:

1. Accuracy tests should be based on the needs of designers.

2. There should be a minimal set of touch identification tests that can serve as a foundation for evaluation of accuracy.

3. Tests should be comparable across touch identification systems.

4. The test process should not be expensive in terms of time.

A number of different conditions important for designers were identified, covering the sorts of questions to which designers would like answers; Table 1 shows a summary. The importance of each is now explained.

Number of users: The number of users supported by an identification system is important, as a key role for tabletops is to support small group collaboration. Application designers should know the effect of group size on accuracy. This aspect interacts with all the others, so it is important to test conditions for different numbers of users.

Touches close to the user: This means directly in front of the user, within a hand's length from the edge of the tabletop (Figure 4–a). This is important for cases where personal controls are located near a user. It is also the area of most natural interaction for a user, as indicated by work on territoriality [26]. Additionally, this region may be prone to occlusion by a user's body or head obstructing the line of sight to a sensor (an inherent problem for identification systems mounted above the tabletop).


Touches near other users: This is where two or more users’ touches are in close proximity to one another (Figure 4–b). This is important for activities where users are required to work together on the same interface element, such as those found in collaborative gestures [20, 28].


Arms/hands crossing: There may be situations where hands or arms are likely to cross (Figure 4–c). For in-the-wild deployments, this may happen even if it is not part of the planned design. This case is important as it is likely to be an inherent cause of errors for several mechanisms that rely on simple analysis of hand or arm positions.


Multi-touch: For systems that support multi-touch gestures, a designer may wish to know how these affect accuracy.


For example, touches associated with scaling or rotating a widget are commonly done with 2-3 fingers, and these fingers are close to one another for part of the gesture (Figure 4–d). This may cause errors due to occlusion.

Random (activity density): This is a proxy for authentic activity levels at a tabletop. It is valuable to assess accuracy for touch points that are distributed randomly across the full surface of the tabletop. We propose that this be done with a variable number of touch points, since a designer may create an application where people need to touch multiple points (with users deciding the order in which to do so).

Standing/Seated: Designers may have their users standing or seated around a tabletop. When users stand, errors may be caused by their head or body obscuring the line of sight for above-the-tabletop methods.

Figure 6. Configuration of a “multi-touch” test case. On the left are options to modify different elements of the test case. Test cases can be configured on a computer, or on the interactive tabletop itself.

Overall: This is total accuracy over tests that cover all the above cases. This is a useful starting point for comparing systems, although cases listed above are more valuable if the designer has a set of intended interaction designs in mind.



Individual variability: This is whether accuracy values vary across users. For example, errors may come disproportionately from one participant in testing. They may also be due to classes of errors that are more likely to occur when a user is at a certain location around the tabletop.

When an evaluation was conducted, a group of users sat (or stood) in front of their colour, marked by a rectangle on the tabletop. The system gave a prompt that tests were about to start, and there was a preview window before the system recorded user input. After all points in a test case had been touched, the next test case began.

USER VIEW

Our test cases ran on a 46-inch (rectangular) PQLabs multi-touch display with a 1080p resolution, with the Kinect mounted to the ceiling directly above the tabletop. Figure 5 shows our placement of users around the tabletop.

Our evaluation system ran test cases for users standing and seated, and both of these physical conditions were repeated twice. This was to validate the feasibility of our vision system against the possible impact of posture-related occlusion. For each physical condition, two test cases each were performed for touches close to the user, touches near other users, arms/hands crossing and multi-touch, plus ten for the random condition, corresponding to 1 to 10 touch points per user (activity density). That is 18 test cases per physical condition, or 72 per group size (2 postures × 2 repetitions × 18); across the five group sizes this totals 360 tests. Including setup time, an evaluation trial of 360 tests took less than 75 minutes. While the configuration of the tests evaluated aspects of our touch identification system, the suite is open to accommodate other needs, by choosing from a set of provided test cases or by using our graphical configuration tool to create additional ones.

Figure 5. Placement of users around the tabletop.

For each test case, there was a 3 second preview window before participants could touch their designated touch points. During the preview, the tabletop took on a light grey background (Figure 7a), giving participants time to see what they needed to touch. After the preview window, the system became active, indicated by the background changing to black (Figure 7b). Touches from users were recorded to the database only while a test was active.

We used a graphical tool (Figure 6) to create each test case. The tool produces a file describing the test case, allowing the test case to be easily shared with other designers. The test case file serves as input to the evaluation system. The tool allows the following to be specified by the designer:

• Number of users.

• Number of touches for a user (with interactive editing).

• Maximum reach distance from a user (controlled via a slider).

• Guides to help in building test cases for each condition.

• Tagging of test cases to match a condition.

• A generator to build test cases for the random condition (activity density), allowing the number of touches, users, and minimum distance between touches to be varied (a sketch of such a generator follows the list).
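As an illustration of the last item, the following sketch generates a random-condition test case with a minimum-distance constraint via naive rejection sampling. The file layout, field names and default dimensions are assumptions for illustration; the actual test-case format produced by the tool is not specified in the paper.

import json
import math
import random

def generate_random_test_case(num_users, touches_per_user, min_distance,
                              width=1920, height=1080, margin=50):
    """Scatter touch targets across the surface, enforcing a minimum
    distance between any two targets (assumes feasible parameters)."""
    points = []
    while len(points) < num_users * touches_per_user:
        candidate = (random.uniform(margin, width - margin),
                     random.uniform(margin, height - margin))
        # reject candidates that land too close to an existing target
        if all(math.dist(candidate, p) >= min_distance for p in points):
            points.append(candidate)
    return {
        "condition": "random",
        "users": num_users,
        "touches": [{"user": i % num_users, "x": x, "y": y}
                    for i, (x, y) in enumerate(points)],
    }

# e.g. json.dump(generate_random_test_case(4, 5, 100), open("case.json", "w"))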


When a point on the tabletop was touched, the colour of the point changed from its original colour (Figure 8a) to a default grey (Figure 8b) to signify the point as having been touched. Subsequent touches to the same point were ignored. When all touches were completed for the current test, the system automatically advanced to the next test. When all tests were complete, the system prompted participants to move their seats, then stand, or, if at the end of the standing phase, asked for the number of users in the group to change before starting a new series of tests.




Users  Total touches  Num. wrong  Accuracy
2      32             1           97%
3      48             9           81%
4      64             4           94%
5      80             13          84%
6      96             16          83%

Table 3. Results of condition: Touches near other users.

Figure 7. System Phases – Preview and Active.

Users  Total touches  Num. wrong  Accuracy
2      32             8           75%
3      48             4           92%
4      64             12          81%
5      80             10          88%
6      96             16          83%

Table 4. Results of condition: Arms/hands crossing.


Users  Total touches  Num. wrong  Accuracy
2      96             0           100%
3      144            0           100%
4      192            0           100%
5      240            3           99%
6      285            9           97%

Table 5. Results of condition: Multi-touch gestures.

Figure 8. Before and after a touch point is touched.

RESULTS

We report on the evaluation of our plug-in Kinect-based system. The system was evaluated with group sizes of 2 through 6 users. There were 2 evaluation trials, involving a total of 12 different users (mean age 25, median age 26, 83% male, 17% female). As per the specifications for each trial, users completed test cases both standing and seated, repeated twice. In total, 720 test cases were performed, with 10702 touches evaluated.

Users  Touches per user  Total touches  Num. wrong  Accuracy
2      1                 16             1           94%
2      5                 80             2           98%
2      10                160            4           98%
3      1                 24             2           92%
3      5                 120            10          92%
3      10                240            16          93%
4      1                 32             2           94%
4      5                 160            15          91%
4      10                320            34          89%
5      1                 40             0           100%
5      5                 200            25          88%
5      10                394            49          88%
6      1                 48             11          77%
6      5                 240            35          85%
6      10                476            82          83%

Table 6. Results of condition: Random touch points (activity density) on the tabletop.

Results are reported across the different conditions: touches close to the user (Table 2); touches near other users (Table 3); arms/hands crossing (Table 4); multi-touch gestures (Table 5); random touch points (activity density) (Table 6 – for brevity, a subset of results are shown); users standing/seated (Table 7); overall results – per group of users (Table 8); and individual variability (Table 9).

Users     2    3    4    5    6
Standing  96%  94%  91%  89%  85%
Seated    97%  94%  94%  92%  88%

Table 7. Accuracy results of condition: users standing and seated.

Users  Total touches  Num. wrong  Accuracy
2      32             0           100%
3      48             0           100%
4      64             0           100%
5      80             2           98%
6      96             2           98%

Table 2. Results of condition: Touches close to the user.

Users  Total touches  Num. wrong  Accuracy
2      1072           35          97%
3      1608           100         94%
4      2144           163         92%
5      2672           257         90%
6      3206           435         86%

Table 8. Results: overall accuracy per number of users.


User  Total touches  Num. wrong  Accuracy
1     2668           261         90%
2     2677           225         92%
3     2142           163         92%
4     1608           159         90%
5     1072           121         89%
6     535            61          89%

Table 9. Results: Individual users.

The results presented in the previous tables are only a subset of the results available to designers and users. After each evaluation, the system automatically produces a detailed report, as shown in Figure 9. This report displays the test cases and the different conditions, is searchable, and, when the name of a test case is selected, reveals that particular test case to the designer/user.


Figure 9. A report generated after an evaluation trial has been conducted.

Figure 10. Example of a potential obstruction, with the users on the right and their arms crossing.

DISCUSSION

Our system performed well for touches close to the user: 100% for 2-4 users and 98% for 5-6 users. Our plug-in system was well suited to touches in close proximity to a user, as is often found with widgets such as personal control panels.

For touches near other users, accuracy declined with the number of users present: 97% for 2 users, down to 83% for 6 users. Knowing this degradation, tabletop application developers can take steps to promote higher levels of accuracy, such as locating users at opposite ends of a table.

For arms/hands crossing, the results ranged from 75% to 92%, with an average of 84%. This was partially expected given the nature of our algorithm; however, it should be noted that these test cases forced users to occlude each other, as in Figure 10.

For multi-touch gestures, looking at the issue of possible occlusion due to multiple touches in close proximity to one another and performed by the same hand, accuracy was 100% for 2-4 users and 97% or higher for 5-6 users. For the random touch position (activity density) test cases, accuracy decreased with the number of users, with accuracies of 92% or higher for 2-4 users and 77% or higher for 5-6 users. Even in adverse, unplanned conditions, the system still achieved a high level of accuracy.

When posture is taken into account, there was a marginal accuracy increase of up to 3% for the seated condition over the standing condition for 4 or more users. This was expected: when an individual stands, they can reach any part of the tabletop by extending their arm or, more likely, leaning with their body, with the side effect of obstructing the line of sight to the depth sensor above the tabletop. Results for each individual user showed minimal variability (a maximum difference of 3%) across different user locations around the tabletop, showing that our system was not prone to 'hotspots' or particularly troublesome areas.

Possible sources of error


Important for designers building touch identification systems is an understanding of the potential sources of error. We report on two such possible sources:

1. A closer user: The percentage of errors where user X performed the touch but another user Y was closer to the touch point.


2. Nearby touches: The percentage of errors where the misclassified touch point of user X was in close proximity to another touch point (of either user X or another user), within a specified distance (the size of a hand) and within an interval of 2 seconds around when the touch was misclassified.
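As an illustration, the two (non-exclusive) error sources could be tallied from logged touches roughly as follows. The distance and time thresholds mirror the definitions above, but the data layout and threshold values are a hypothetical sketch, not the analysis tool's actual implementation.

import math

HAND_SIZE = 80   # px; assumed "size of a hand" proximity threshold
WINDOW = 2.0     # seconds around the misclassified touch

def classify_error_sources(wrong_touches, all_touches, user_positions):
    """Tag each misattributed touch with the two possible sources.

    wrong_touches / all_touches: dicts with keys x, y, t (time), user;
    user_positions: {user: (x, y)} fixed seats around the table."""
    closer_user = nearby = 0
    for w in wrong_touches:
        d_expected = math.dist((w["x"], w["y"]), user_positions[w["user"]])
        # 1. some other user was physically closer to the touch point
        if any(math.dist((w["x"], w["y"]), pos) < d_expected
               for u, pos in user_positions.items() if u != w["user"]):
            closer_user += 1
        # 2. another touch landed within a hand's size, within +/- 2 s
        if any(abs(t["t"] - w["t"]) <= WINDOW
               and math.dist((t["x"], t["y"]), (w["x"], w["y"])) <= HAND_SIZE
               for t in all_touches if t is not w):
            nearby += 1
    n = len(wrong_touches) or 1
    return closer_user / n, nearby / n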


The percentage of each, as a proportion of the number of wrong touches, is shown in Table 10. The potential sources are not mutually exclusive of one another. These possible sources of error help establish a true rate of error within the system, after situations commonly known to promote errors are accounted for.


Users  Closer user  Nearby touches
2      9%           9%
3      50%          36%
4      42%          28%
5      42%          40%
6      37%          34%

Table 10. Potential sources of error. Closer user: % of errors where the wrong user was closer to the touch than the expected user. Nearby touches: % of errors where other touches were nearby.


In addition to the extensive testing of our touch identification system on the PQLabs touch display, the system has successfully been tested on a custom-made FTIR-based tabletop and a Samsung SUR40. It has been widely used in in-the-wild scenarios for conducting empirical research [15, 17, 18]. In a parallel study we have explored a technique to add user identification by pairing users with their personal devices placed on the tabletop; however, other techniques can also be applied, such as the password- or biometrics-based approaches presented in the related work section.

CONCLUSIONS AND FUTURE WORK

The work presented in this paper is motivated by the benefits of touch identification for enhancing collaboration awareness, adaptation and personalisation at readily available interactive tabletops. A primary goal of this paper is to present a touch identification system and an evaluation suite; the latter serves to understand the strengths, weaknesses and accuracy of a touch identification solution in real-world use.


We presented a plug-in system that can be attached to pre-existing interactive tabletop hardware to enhance it with affordances for user differentiation (touch identification). We also presented a suite that can help designers and implementers evaluate our system or any other vision-based touch identification tabletop system. This evaluation suite can identify classes of errors that an application designer should be aware of when using a touch identification system.

We reported the evaluation of our vision-based system based on the design of an evaluation suite focused on understanding and validating accuracy across a set of conditions that can challenge these kinds of systems. These conditions broadly include scalability (number of users working simultaneously) and density of touches (number of touches occurring in parallel). More specifically, the evaluation also explored a defined set of gestures that can lead a system to misjudge user differentiation, such as contiguous touches, postures that create occlusion of body parts for an overhead sensor, collaborative gestures, and interaction by users in areas that are close to other users.

Our study opens up a number of possibilities for future exploration. If designers can be aware of the accuracy of the touch identification method they use in their interactive tabletops, then there may be ways to fix issues, design around them, or simply use the device knowing its limitations. The evaluation suite helps pinpoint sources of error in current touch identification solutions (especially relevant for vision-based approaches) and provides a systematic method by which different systems can be compared for a range of interactions at a tabletop. The evaluation suite examined scenarios applicable to everyday tabletop use and did not make assumptions about which interactions are common or not. This differs from the current body of work, which has reported high measures of accuracy at the expense of testing none of the inconvenient cases, such as hand, arm or finger occlusion, which are real possibilities in tabletop applications, or touches performed using different parts of the hand.

We envisage a strong future for our system and evaluation suite as a basis for evaluating other touch identification systems, and for helping build profiles for comparison to allow designers to make more informed choices. This motivates the release of our software tools to the community: the touch identification system and the evaluation suite, with its graphical configuration and analysis tools.

Overall, this work moves towards providing a foundation for a better understanding of touch identification systems for tabletops, in the hope of providing designers with tools that enable richer interfaces and applications. Work in progress for this project includes extending the system to track users moving around the tabletop.

ACKNOWLEDGEMENTS

This work is partially funded by the Smart Services CRC.


REFERENCES

1. Annett, M., Grossman, T., Wigdor, D., and Fitzmaurice, G. Medusa: a proximity-aware multi-touch tabletop. In Proc. of UIST '11, ACM (New York, NY, USA, 2011), 337–346.

2. Bérard, F., and Laurillau, Y. Single user multitouch on the DiamondTouch: from 2 x 1D to 2D. In Proc. of ITS '09, ACM (2009), 1–8.


3. Blažica, B., Vladušič, D., and Mladenić, D. MTi: A method for user identification for multitouch displays. International Journal of Human-Computer Studies 71, 6 (2013), 691–702.

4. Brandl, P., Forlines, C., Wigdor, D., Haller, M., and Shen, C. Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces. In Proc. of AVI '08, ACM (New York, NY, USA, 2008), 154–161.

5. Collins, A., and Kay, J. Collaborative personal information management with shared, interactive tabletops. In Proc. of Personal Information Management 2008 (a CHI 2008 workshop) (2010).


6. Dietz, P., and Leigh, D. DiamondTouch: a multi-user touch technology. In Proc. of UIST ’01, ACM Press (New York, New York, USA, 2001), 219–226.

381

7. Fleck, R., Rogers, Y., Yuill, N., Marshall, P., Carr, A., Rick, J., and Bonnett, V. Actions speak loudly with words: unpacking collaboration around the table. In Proc. of ITS '09, ACM (New York, NY, USA, 2009), 189–196.

8. Harris, A., Rick, J., Bonnett, V., Yuill, N., Fleck, R., Marshall, P., and Rogers, Y. Around the table: Are multiple-touch surfaces better than single-touch for children's collaborative interactions? In Proc. of CSCL '09, International Society of the Learning Sciences (Rhodes, Greece, 2009), 335–344.

9. Harrison, C., Sato, M., and Poupyrev, I. Capacitive fingerprinting: exploring user differentiation by sensing electrical properties of the human body. In Proc. of UIST '12, ACM (New York, NY, USA, 2012), 537–544.

10. Kharrufa, A., Leat, D., and Olivier, P. Digital mysteries: designing for learning at the tabletop. In Proc. of ITS '10, ACM (New York, NY, USA, 2010), 197.

11. Kim, D., Dunphy, P., Briggs, P., Hook, J., Nicholson, J., Nicholson, J., and Olivier, P. Multi-touch authentication on tabletops. In Proc. of CHI '10, ACM (New York, NY, USA, 2010), 1093.

12. Klompmaker, F., Nebe, K., and Fast, A. dSensingNI: a framework for advanced tangible interaction using a depth camera. In Proc. of TEI '12, ACM (New York, NY, USA, 2012), 217–224.

13. Marquardt, N., Kiemer, J., and Greenberg, S. What caused that touch?: expressive interaction with a surface through fiduciary-tagged gloves. In Proc. of ITS '10, ACM (New York, NY, USA, 2010), 139–142.

14. Martín, E., and Haya, P. A. Towards adapting group activities in multitouch tabletops. In Adj. Proc. of UMAP '10 (2010), 28–30.

15. Martinez, R., Collins, A., Kay, J., and Yacef, K. Who did what? Who said that? Collaid: an environment for capturing traces of collaborative learning at the tabletop. In Proc. of ITS '11, ACM (2011), 172–181.

16. Martinez, R., Kay, J., and Yacef, K. Visualisations for longitudinal participation, contribution and progress of a collaborative task at the tabletop. In Proc. of CSCL '11, International Society of the Learning Sciences (2011), 25–32.

17. Martinez-Maldonado, R., Yacef, K., and Kay, J. An automatic approach for mining patterns of collaboration around an interactive tabletop. In Proc. of AIED 2013 (2013), 101–110.

18. Martinez-Maldonado, R., Yacef, K., and Kay, J. Data mining in the classroom: Discovering groups' strategies at a multi-tabletop environment. In Proc. of EDM 2013 (2013), 121–128.

19. Meyer, T., and Schmidt, D. IdWristbands: IR-based user identification on multi-touch surfaces. In Proc. of ITS '10, ACM (2010), 277–278.

20. Morris, M., Cassanego, A., Paepcke, A., Winograd, T., Piper, A., and Huang, A. Mediating group dynamics through tabletop interface design. IEEE Computer Graphics and Applications 26, 5 (Sept. 2006), 65–73.

21. Murugappan, S., Vinayak, Elmqvist, N., and Ramani, K. Extended multitouch: Recovering touch posture and differentiating users using a depth camera. In Proc. of UIST '12, ACM (2012), to appear.

22. Müller-Tomfelde, C., and Fjeld, M. Introduction: A short history of tabletop research, technologies, and products. In Tabletops – Horizontal Interactive Displays, Springer (2010), 1–24.

23. Rick, J., Harris, A., Marshall, P., Fleck, R., Yuill, N., and Rogers, Y. Children designing together on a multi-touch tabletop: an analysis of spatial orientation and user interactions. In Proc. of IDC '09, ACM (2009), 106–114.

24. Ryall, K., Esenther, A., Everitt, K., Forlines, C., Morris, M., Shen, C., Shipman, S., and Vernier, F. iDwidgets: Parameterizing widgets by user identity. In Proc. of INTERACT 2005 (2005), 1124–1128.

25. Schmidt, D., Chong, M. K., and Gellersen, H. HandsDown: hand-contour-based user identification for interactive surfaces. In Proc. of NordiCHI '10, ACM (New York, NY, USA, 2010), 432–441.

26. Scott, S. D., Carpendale, M. S. T., and Inkpen, K. M. Territoriality in collaborative tabletop workspaces. In Proc. of CSCW '04, ACM (New York, NY, USA, 2004), 294.

27. Stewart, J., Bederson, B. B., and Druin, A. Single display groupware: a model for co-present collaboration. In Proc. of CHI '99, ACM (New York, NY, USA, 1999), 286–293.

28. Stock, O., Zancanaro, M., Koren, C., Rocchi, C., Eisikovits, Z., Goren-Bar, D., Tomasini, D., and Weiss, P. T. A co-located interface for narration to support reconciliation in a conflict: initial results from Jewish and Palestinian youth. In Proc. of CHI '08, ACM (New York, NY, USA, 2008), 1583.

29. Streitz, N., and Nixon, P. The disappearing computer. Communications of the ACM 48, 3 (2005), 32–35.

30. Tang, A., Pahud, M., Carpendale, S., and Buxton, B. VisTACO: visualizing tabletop collaboration. In Proc. of ITS '10, ACM (2010), 29–38.

31. Zhang, H., Yang, X.-D., Ens, B., Liang, H.-N., Boulanger, P., and Irani, P. See me, see you: a lightweight method for discriminating user touches on tabletop displays. In Proc. of CHI '12, ACM (New York, NY, USA, 2012), 2327–2336.
