Collaborative User Interfaces Seminar on Post-Desktop User Interfaces

Seminar paper at the Media Computing Group Prof. Dr. Jan Borchers Computer Science Department RWTH Aachen University

Nitesh Goyal Torsten Palm Advisors: Malte Weiß Max Möllers Semester: Winter Semester 2008 Submission date: Nov 20th, 2008


Contents

1 Introduction
  1.1 Issues in Collaboration
  1.2 Consequences for Interface Design
  1.3 Milestones
2 Current Work
  2.1 Linking Single-User Editors for Collaboration
  2.2 Auto-Individualization of Interface Widgets
  2.3 Coordination Policies Beyond Social Protocols
  2.4 Video Manipulation in a Collaborative Task
  2.5 Handheld Projectors
  2.6 Enforcing Collaboration - Sharing the Squid
  2.7 Reviewing Mechanisms
3 Future Work
Bibliography

List of Figures

1.1 The time-place-matrix and some entries
1.2 Two people collaborating on a tabletop (Ryall et al. [2006])
2.1 CoWord's system architecture (Sun et al. [2006])
2.2 The widget on the left is customized by its function, whereas the widget on the right is customized by its appearance (Ryall et al. [2006])
2.3 The setup of the studies (Ranjan et al. [2006])
2.4 Collaborators build up a context display with a focus in it (Cao et al. [2007])
2.5 Here we can see the squid in action (Stern et al. [2008])
2.6 A variant of review visualizations: Ongoing participation level (DiMicco et al. [2006])

List of Tables

2.1 Policies' design space and examples

Chapter 1

Introduction

Nowadays, in many areas of daily life, people can only be successful when they collaborate. Computer systems may support this collaboration, with interfaces linking the system to the humans. Since interactions in a collaborative system are very complex, collaborative interfaces become more complex than single-user interfaces, and additional issues have to be considered in the design process. In the next sections, we will first focus on what collaboration is. Then we will present different issues that are involved in collaborative acts and discuss their influence on collaborative interfaces and their design. Finally, we will show some milestones of collaborative user interfaces.

1.1 Issues in Collaboration

People naturally have physical abilities, knowledge, and time resources that are limited and usually differ between individuals. Thus, humans tend to form groups and to merge their individual capabilities into a powerful unit. When a common goal is tackled by a group, the act is called collaboration. Ellis et al. [1991] define the time-place matrix to categorize collaborative scenarios, which can happen either at the same time or at different times, and can be located either at the same place or at different places.

Figure 1.1: The time-place-matrix and some entries

To gain successful collaboration, certain prerequisites should be fulfilled. First of all, the group members should know the composition of the group. This includes awareness of the group members, their individual capabilities, and the common goal, its current state, and its current progress. This knowledge helps group members to decide further actions. Obviously, a continuous information exchange between the group members is essential to keep this knowledge up to date. As technology evolves, communication is simplified, and spatial distances can be overcome in video conferences or chats. Besides easing communication, another technical contribution is of great importance. In former times, resources like documents could only be at one place at a time. Nowadays, in contrast, documents can be shared via a network, and access is ensured from anywhere immediately. But this flexibility is accompanied by problems: simultaneous access to one resource has to be coordinated, for example.

Since technology provides groups with many comfortable mechanisms that support collaboration, computer-supported cooperative work (CSCW) has become a large research area in which computer science, psychology, and sociology meet. One of the contributions of computer science is research in designing collaborative interfaces, which we will focus on in the next section.

1.2 Consequences for Interface Design

A computer-supported collaborative system consists of a central computer system and different users that are linked by interfaces. Norman [2002] describes human interaction in the seven stages of action as follows. A user has a certain view of a system and a goal to change the system to his advantage. He chooses an action sequence to achieve the goal, performs this sequence, and receives feedback from the system. Finally, he updates his view of the system, compares the changes to his goal, and formulates a new goal. Based on this model, Norman gives advice for interface design to reduce errors in the action cycle. This advice still holds for collaborative interfaces, which involve multiple users. But a couple of additional issues have to be considered in the collaborative case, because the interaction model becomes more complicated.

Other users and their minds can be interpreted as a part of the system, so it is desirable for a user to integrate them into his system model. To do this, each user must be able to get the needed information via the interface, which requires richer communication. In case group members are co-located, they can talk to each other, gesticulate, or observe each other and each other's actions. In case group members are located at different places, communication becomes more complicated. Email exchange may be a solution, but it is inappropriate for time-critical situations. In these cases, a chat or a video conference is more suitable, or it may be sufficient to display the mouse cursors of all users to document the current actions and plans.

Ellis et al. [1991] and Antunes and Guimaraes [1994] give overviews of some aspects that have to be considered. The interface designer has to decide whether he provides one single central interface to all users or one dedicated interface for each user. In the latter case, the question arises whether each user is provided with the same interface or the interfaces are individualized per user. Regarding multi-user actions, a way to resolve conflicts has to be found. This may be realized by strict turn-taking, where only one user is allowed to act at a given time, for example.

Another problem, which arises in case different interfaces are distributed, is related to performance. When a user performs a certain action like scrolling or striking a key, his local interface should be as reactive as a single-user interface, and the action's effect should be shown immediately. This can be achieved by copying the system data locally to each user. Changes to the shared data must then be propagated to all other users, and to the central data from time to time, to keep all the copies consistent. The frequency of these updates is a design decision that also influences performance. If an update is performed after each keystroke, the network traffic may explode. On the contrary, too long update intervals cause great discrepancies between the different copies.

The issues discussed so far have to be considered when designing a collaborative interface. Many approaches to collaborative interfaces have been devised, with different solutions to these problems. In the next section, we will have a brief look at some milestones in collaborative interface design.
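Before moving on, the reactivity/consistency trade-off can be made concrete with a minimal sketch of a local replica that applies edits immediately but batches their propagation. This is our own illustration, not code from any cited system; the class name, callback interface, and interval value are invented for this example.

    import time

    class LocalReplica:
        """Local copy of shared data: edits apply instantly for reactivity,
        but are propagated to the other users only every `interval` seconds."""

        def __init__(self, send, interval=0.5):
            self.data = []           # local copy of the shared document
            self.pending = []        # edits not yet propagated
            self.send = send         # callback that ships edits over the network
            self.interval = interval
            self.last_sync = time.monotonic()

        def edit(self, op):
            self.data.append(op)     # immediate local effect (reactivity)
            self.pending.append(op)  # remember for later propagation
            now = time.monotonic()
            # A short interval keeps the copies consistent but raises traffic;
            # a long interval saves traffic but lets the copies diverge.
            if now - self.last_sync >= self.interval:
                self.send(self.pending)
                self.pending = []
                self.last_sync = now

Choosing the interval is exactly the design decision described above: lowering it improves consistency at the cost of network traffic.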

1.3 Milestones

In 1968, Douglas Engelbart presented the "oN-Line System" (NLS) to support humans in leveraging their work and augmenting their intelligence (Engelbart and English [1968]). NLS connected six desktop computers around a central table via a network. One single user was in control of the system at a time, while the other users could move their mouse pointers to point at positions on the screen without affecting the system. According to "What you see is what I see" (WYSIWIS), each monitor strictly showed the same picture, which is reminiscent of a central chalkboard. Finally, the system could easily document the conference's course. From this starting point, researchers at the Stanford Research Laboratory extended NLS by a couple of new functions related to collaboration. In 1978, NLS was commercialized and renamed AUGMENT. In the middle of the 80s, Engelbart [1984] summarized AUGMENT's contributions to collaboration, which include email exchange, file sharing, and the creation of a central journal database. Since these functionalities are still up to date, Engelbart can be seen as a pioneer of collaborative support. Another conference system, named CoLab, was developed in the middle of the 80s at XEROX (Stefik et al. [1987]). The strict assumptions made in NLS were relaxed to make CoLab more flexible and to support modern needs in conferencing. For example, explicit turn-taking had to be relaxed to avoid limiting performance in a brainstorming process. Moreover, the WYSIWIS paradigm was relaxed as well. On the one hand, each user is interested in the central workspace to have common ground. But on the other hand, each user also needs the possibility to execute mouse movements without propagating them to other users, to make private notes, or to switch between public and private workspaces in case of "silent" work.

Figure 1.2: Two people collaborating on a tabletop (Ryall et al. [2006])

While NLS/AUGMENT and CoLab are laboratory approaches to collaborative systems and interfaces, commercial collaborative tools have evolved, which cannot all be enumerated here. But most of them take up ideas and techniques realized in AUGMENT or CoLab. Apart from these classical desktop scenarios, research in common interface design has produced other interface ideas. Bolt [1980] proposes a multimodal interaction metaphor called "Put-That-There", by which objects on the screen are manipulated via mixed speech and gesture input. A verbal manipulation command ("put") creates or moves an object. It is followed by a pointing gesture or another verbal input to specify parameters like the object to be manipulated or the target position. Collaboration is possible because the initial command and its parameters can be given by different persons. In the last decade, research in human-computer interaction has opened up new interface ideas that make interaction more natural. As screens became larger and touchscreen technology evolved, the idea arose to orient large touchscreens horizontally. These screens are called tabletops. Since a table is considered the centre of shared activities, the table metaphor is very natural. Additionally, a touchscreen provides direct manipulation.


Chapter 2

Current Work

This chapter is dedicated to current research in the area of collaborative user interfaces. We will present new approaches to collaborative interfaces and the different ways in which these approaches deal with the design issues discussed in section 1.2.

2.1 Linking Single-User Editors for Collaboration

In this section, we will present an idea that extends existing single-user editors by mechanisms that allow multiple instances to be linked into a collaborative system. The idea is inspired by different goals. A user is used to his single-user application and its interface, so it makes sense to enable collaborative work in this familiar framework. The resulting system supports concurrent work in real time, which is a feature missing in many collaboration-supporting tools like MS NetMeeting. This includes parallel data access and relaxed WYSIWIS. The approach aims at building the system on top of the application's API, instead of modifying its source, since users are used to "off-the-shelf" applications that do not provide source code. Finally, the transformation is intended to be generic and reusable for other single-user applications. In their work, Xia et al. [2004] apply this idea to the text processing application MS Word, which results in the collaborative system CoWord.

Figure 2.1: CoWord's system architecture (Sun et al. [2006])

The starting point of the implementation is a technique called Operational Transformation (OT), which was introduced by Sun and Ellis [1998]. OT expects the manipulated data to be represented in memory as a sequence. In classical OT, this sequence can be manipulated via the commands insert and delete. OT was extended by another command, update, discussed by Sun et al. [2004]. These commands are called Primitive Operations (PO).

OT's functionality is best shown by an example. We assume a string "abc" at two remote sites R1 and R2, and two concurrent operations O1 = Insert[0, "x"] created at R1 and O2 = Delete[3, "c"] created at R2. Each command is executed locally and transmitted to the remote site. After local execution, we have "xabc" at R1, and "c" is now at position 4. O2 cannot be executed as received and has to be reinterpreted as O2′ = Delete[4, "c"]. OT takes care of this translation to make sure that remotely received commands are executed correctly, although changes have happened in the meantime. At R2 no reinterpretation is needed, and we finally have the string "xab" at both sites. Assuming that the application provides an API to access data and operations, the task is to adapt them to the OT requirements. This is done in an intermediate translation that results in a collection of Adapted Operations (AO). CoWord's system architecture is visualized in figure 2.1.
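The transformation step in this example can be sketched in a few lines of Python. This is a simplified illustration of the OT idea, not the algorithm of Sun and Ellis [1998]; the tuple encoding of operations and the position conventions simply follow the example above.

    def transform(op, done):
        """Reinterpret `op` against an operation `done` that was already
        executed locally (the core idea of OT, reduced to one simple case).
        Operations are (kind, position, text) tuples."""
        kind, pos, text = op
        d_kind, d_pos, d_text = done
        if d_kind == "insert" and d_pos <= pos:
            pos += len(d_text)   # an earlier insert shifts later positions right
        elif d_kind == "delete" and d_pos < pos:
            pos -= len(d_text)   # an earlier delete shifts later positions left
        return (kind, pos, text)

    # The example from the text: O2 = Delete[3, "c"] arrives at R1,
    # where O1 = Insert[0, "x"] has already been executed.
    o1 = ("insert", 0, "x")
    o2 = ("delete", 3, "c")
    print(transform(o2, o1))  # ('delete', 4, 'c'), i.e. O2'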

The functionality of the system is as follows. A user X performs a local command API(X). This command is immediately performed on the local copy and sent to the Collaboration Adaptor (CA). The CA translates API(X) to AO(X) and to PO(X). PO(X) is stored in OT's history buffer and time-stamped. AO(X) is transmitted to the remote sites and time-stamped accordingly. In case a command AO(Y) is received from a remote machine Y, the following happens: AO(Y) arrives at the CA, is translated to PO(Y), and sent to OT. OT reinterprets PO(Y) with respect to the history queue to PO′(Y). PO′(Y) is added to OT's history queue and sent back to the CA. PO′(Y) helps to reinterpret AO(Y) to AO′(Y), which is translated to API′(Y) and finally executed in the application.

The research group checked the adaptability of the approach by extending MS PowerPoint to CoPowerPoint (Sun et al. [2006]). In this process, OT's support for a linear data structure was not sufficient, so OT was extended to allow tree-like data structures, and as a consequence the Generic Collaboration Engine (GCE) was re-implemented. The authors state that the new framework now allows a wider range of applications to be extended, and that this modification only had to be done once and is not part of the usual adaptation process, which just includes modifications in parts of the CA to adapt it to the specific single-user application. According to the authors, the generic nature of the CoWord approach was confirmed. The approach also shows that a strict distinction between collaboration-transparent systems, where multiple users work together in a single-user application, and collaboration-aware systems, which are specifically designed for collaboration, is not necessary: both categories can be merged in one application.

The authors expect CoWord and CoPowerPoint to build a base for future research in the area of extended single-user interfaces. One goal is to formulate standards for an application's API to simplify the application's adaptability. Further, extensions to support flexible notification, fine-grained locking, externalities, and heterogeneity shall be found.
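To close this section, the command flow described above can be summarized in compact form. The class, method, and attribute names below are our own placeholders, not CoWord's actual API, and the AO/PO translations are reduced to identities.

    class CollaborationAdaptor:
        """Sketch of the command flow of section 2.1; names are invented."""

        def __init__(self, app, ot, network):
            self.app = app          # the single-user application, via its API
            self.ot = ot            # generic OT engine with a history buffer
            self.network = network  # transport to the remote replicas

        # In the real system these convert between application-specific
        # adapted operations (AO) and OT's primitive operations (PO);
        # here they are reduced to identities.
        def to_po(self, ao):
            return ao

        def to_ao(self, po):
            return po

        def on_local_command(self, api_cmd):
            ao = api_cmd                         # API(X) -> AO(X)
            self.ot.record(self.to_po(ao))       # PO(X), time-stamped into history
            self.network.broadcast(ao)           # AO(X) goes to the remote sites

        def on_remote_command(self, ao):
            po2 = self.ot.transform(self.to_po(ao))  # PO(Y) -> PO'(Y) vs. history
            self.ot.record(po2)
            self.app.execute(self.to_ao(po2))        # AO'(Y) -> API'(Y), executed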

2.2 Auto-Individualization of Interface Widgets

Interface creation is often supported by toolkits that provide a set of widgets and thereby reduce the need for implementation. As soon as a widget is utilized at runtime, a method is automatically executed, which may be defined by the interface designer beforehand. When we think of multiple users in control of a shared interface in parallel, it may not be sufficient to detect that a widget was utilized; it is also important to know which user initiated the action. A certain button may not be clickable by each user, for example.

Figure 2.2: The widget on the left is customized by its function, whereas the widget on the right is customized by its appearance (Ryall et al. [2006])

As a consequence, widgets may be customized per user and do not necessarily have to be duplicated for each user, which reduces clutter, memory requirements, and processor cycles. Additionally, the implementation of restrictions, permission handling, and preference optimization is simplified. One approach to this idea is iDwidgets, introduced by Ryall et al. [2006]. A user's identity is detected using capacitive coupling via the screen, where a conductive pad placed on the seat completes the circuit when the user touches the screen. Other possibilities include biometrics, face recognition, and RFID. This allows widgets to be individualized per user. A widget can be customized in different ways by customizing its look and feel: the look of a widget depends on its appearance and content, while its feel includes its function and group input.

• Customizing Function: The functionality of a widget depends on the user utilizing it. For example, a single undo button may only undo the actions of the user who clicked it. Privileged access to widgets can also be provided to prevent non-authorized access to documents or system functionalities. User inputs can also be interpreted semantically differently: while searching for a term like "pictures of dad", only user-related results are displayed. Widgets can even behave differently to cater to specific needs, like keeping pace with children of different mental levels who are all collaborating on the same surface.


• Customizing Content: Customizing the content is related to lists of bookmarks or the history of previously accessed pages and documents. Even if just one bookmark-list widget is implemented, it may adapt its content to the user requesting the list. Additionally, customized menus that activate or deactivate certain options, or that reorder the list of menu items by last access, can be implemented according to the specific user in question.

• Customizing Appearance: Users like to customize the appearance of their interfaces to their needs, in order to keep the collaborative interface consistent with their regular single-user desktop interface. A widget's font and color, which make up its properties and aesthetics, can be individualized. Switching the widget's orientation towards the user who touches it might also be of particular interest for improved usability.

• Customizing Group Input: Multiple widgets can be combined into a complex widget that is utilized by a group of users to provide a better collaborative experience. An application might be the cumulative effect of voting among the users to accept an idea. Simultaneous inputs by different users might be considered a possibility to check and authorize tasks that require a certain level of authorization. The users' modes are tracked to prevent a mix-up of user actions and commands as well. A visualization of the audit trail at the widget level is also possible for post-collaboration analysis.
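A minimal sketch of the idea behind these customizations follows: one widget instance whose behavior is resolved per user at touch time. The class and the string user ids are hypothetical; a real tabletop would substitute one of the identification mechanisms (capacitive coupling, RFID, ...) described above.

    class IDButton:
        """Identity-differentiating widget sketch: one instance, but the
        callback and the permissions are resolved per touching user."""

        def __init__(self, default_action):
            self.default_action = default_action
            self.per_user_action = {}  # user id -> custom callback
            self.allowed = None        # None = everyone may touch

        def customize(self, user, action):
            self.per_user_action[user] = action

        def on_touch(self, user):
            if self.allowed is not None and user not in self.allowed:
                return  # privileged widget: ignore non-authorized users
            # e.g. an undo button that only undoes the touching user's actions
            self.per_user_action.get(user, self.default_action)(user)

    # usage sketch with hypothetical user ids
    undo = IDButton(lambda u: print(f"global undo by {u}"))
    undo.customize("alice", lambda u: print(f"undo only {u}'s actions"))
    undo.on_touch("alice")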

In certain situations, the deployment and benefits of iDwidgets may be debatable. For example, on a large interaction table, it is much harder for every collaborator to touch the single instance of a widget, because widgets located further away are difficult to reach. Moreover, it is not clearly defined what happens when several users click the same widget simultaneously, or when a user's identity cannot be discovered. The iDwidgets are a new implementation of old collaborative ideas, but they face certain challenges at the hardware and the software level. The hardware requirements involve possibilities for user detection, identification, and differentiation: capacitive coupling, RFID, biometrics, etc. need to become more cost-effective and perform better for an optimal large-scale implementation.

2.3 Coordination Policies Beyond Social Protocols

As we have seen in chapter 1, in a collaborative system multiple users may act at the same time. These actions have to be coordinated, and in case access to limited resources is needed, concurrency control must be performed. In daily life, co-located people often coordinate with each other automatically by commonly known rules, which are called social protocols. For example, one usually waits for a person to stop talking before talking oneself, and one does not take objects already possessed by another person. Morris et al. [2004] state that when multiple users work together co-located on a shared interface, we cannot necessarily rely on these social protocols. They may be violated by accident, by intention, or just by system errors. This observation is supported by the problems Morris encountered while observing people using different multi-touch table applications. Since it is important that applications react in a predictable way, Morris proposes to include appropriate coordination techniques in the interface to assure deterministic behavior.

Morris categorizes conflicts into three different types. Global conflicts affect the application as a whole, like changes to the global view. Whole-element conflicts are related to a single object, like handling the same document or controlling the same menu. Sub-element conflicts occur when the same item is modified by multiple users and the modifications are conflicting. To find solutions to the different types of conflict, Morris introduces three kinds of conflict resolution policies. The proactive policy allows a conflicting element's owner or the initiator of a global change to affect conflict resolution proactively. The reactive policy instead initiates a conflict resolution as soon as a conflict arises, which is affected by the initiator of an element's access or the "victims" of a change to the global view. In the mixed-initiative policy, the party that solves the problem is not fixed; the decision is made by considering additional information about the parties. The different combinations of conflicts and possible resolution policies result in a 3 x 3 matrix that represents a design space for coordination policies.

                       Global    Whole-Element   Sub-Element
      Proactive        anytime   sharing         override
      Mixed-Initiative rank      rank            rank
      Reactive         voting    duplicate       merge

Table 2.1: Policies' design space and examples

Morris enumerates examples of coordination policies and sorts them into the design space. The rank policy, for example, is a mixed-initiative policy that solves global, whole-element, and sub-element conflicts alike: a conflict is resolved to the advantage of the person with the higher rank. An example of a policy to proactively resolve whole-element conflicts is the sharing policy, where the owner explicitly defines those users that are allowed to access the element. A reactive solution to a global problem is the voting policy, where the users affected by a global change agree to or refuse this change. A simple reactive policy for a whole-element conflict is to duplicate the element. Examples for sub-element conflicts are not explicitly given by the authors, but one can think of a conflicting situation in an SVN project. When a programmer tries to update his source code, but the update conflicts with his own changes, he could override the conflicting elements with his own ones, which would be proactive. Another approach is to merge the versions into a non-conflicting one, which is a reactive approach. And again, the rank policy is a possible solution, where the source code of the programmer with the higher rank is kept; this would be mixed-initiative. Other examples of policies can be looked up in Morris et al. [2004]. The matrix and the exemplary entries of the design space are visualized in table 2.1.

It is very likely that the proposed policies can be reused in different applications, so one item of future work for Morris is to integrate implementations of the different policies into a toolkit. This toolkit would ease the implementation of different kinds of applications, which may also go beyond tabletop applications. Furthermore, the authors expect the design space to be refined and other policies to be found in further exploration of multi-user coordination.
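As an illustration of how two cells of table 2.1 might look in such a toolkit, here is a small sketch of the rank and voting policies. The rank values and function signatures are our assumptions, not Morris et al.'s implementation.

    # Hypothetical ranks for the mixed-initiative rank policy.
    RANKS = {"alice": 2, "bob": 1}

    def rank_policy(contenders):
        """Mixed-initiative: the highest-ranked party wins the conflict."""
        return max(contenders, key=lambda user: RANKS.get(user, 0))

    def voting_policy(votes):
        """Reactive, for global conflicts: the change passes only if the
        affected users approve it by majority."""
        return sum(votes.values()) > len(votes) / 2

    print(rank_policy(["alice", "bob"]))                                # alice
    print(voting_policy({"bob": True, "carol": False, "dave": True}))   # True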

2.4 Video Manipulation in a Collaborative Task

In Ranjan et al. [2006], the authors provide us with "an exploratory analysis of partner action and camera control in a video-mediated collaborative task". The motivation of the work is the optimization of camera control for a collaborative task in which a "helper" observes a "worker" fulfilling a task and gives advice for further actions. Previous work suggests that video transmission is best based on one single video captured by one single camera, but there are different approaches to camera position and movement.

Figure 2.3: The setup of the studies (Ranjan et al. [2006])

In their experiments, the authors chose the following setup. The worker has to build little LEGO figures, where his only support is the comments received from the helper via an audio channel. The worker's hand movements are tracked and mapped to camera movements. The helper has a guideline for building the figures and gives his instructions according to the worker's current state, which is transmitted via video. The video is captured by one single camera in front of the worker. 23 pairs of volunteers were divided into three groups, where the setup for each group differs with respect to camera control. In one setup, the camera position is fixed; in a second scenario, the camera may be moved or zoomed by the helper; and in a third scenario, the camera may be moved or zoomed by a simulated intelligence, i.e., an operator who manipulates the camera in the way he considers best. Additionally, the helper may instruct the operator to move the camera. The time each group needs to fulfill the building task is measured, the number of errors is logged, and the protagonists are asked for self-reported effectiveness.
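The mapping from tracked hand movements to camera movements is not specified in detail here; a plausible minimal version is exponential smoothing of the hand position into a pan target, sketched below. The smoothing factor and the interface are our assumptions, not the study's implementation.

    def camera_target(hand_positions, prev=(0.0, 0.0), alpha=0.2):
        """Smooth a stream of tracked hand positions into a camera pan
        target, so the camera follows the worker's hands without jitter."""
        x, y = prev
        for hx, hy in hand_positions:
            x = alpha * hx + (1 - alpha) * x  # exponential moving average
            y = alpha * hy + (1 - alpha) * y
        return (x, y)

    print(camera_target([(10, 0), (12, 0), (11, 0)]))  # (5.4, 0.0)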

The experimental results support the hypothesis that total time is lowest for the operator-controlled camera. The reduction of the total number of errors in the operator-controlled scenario is also supported by the results. Finally, the subjective performance of the groups is reported as highest for the helper-controlled camera. So it is very likely that an automatically controlled camera supports a collaborative guidance task best. A second contribution of the experiment is the examination of correlations between the worker's movements and the camera movements. Finally, the authors take a closer look at the camera movements and the motivations behind them. They conclude that about half of the movements are intended to establish a focus of attention (follow the hand or zoom in), whereas the other half of the movements contributes to monitoring the task (global overview and progress). This suggests that the video can be adapted to the helper's current needs, where just one single camera is needed and the workload of choosing between multiple pictures is eliminated.

The authors conclude that the experiment suggests the value of an automatically controlled camera, since it reduces the action time, reduces errors, and additionally allows building up common ground between helper and worker with low effort for the two parties. But their findings are based on a small experiment, so they have to be supported by a larger study. Besides, there is still a problem in providing the helper with an optimal view. Due to the fact that the worker adapts his actions to what the helper can see, the camera movement needed to show the helper what he needs to see depends on the worker's behavior, and thus on what the helper sees at the moment.

2.5 Handheld Projectors

In this section, we present work from Cao et al. [2007]. Starting from previous work on a single-user handheld projector by Cao and Balakrishnan [2006], the authors discuss the extension of this device to collaborative realms with multi-user, co-located interaction. They use handheld projectors like a flashlight to look at data spread throughout the environment. Each projection displays the various entities with their respective access modalities. Their system provides multiple possibilities for using the direct manipulation metaphor. For example, the system can be used to conduct file exchange between multiple users, who point at the desired file they want to exchange. This can be extended by the possibility of dropping files into personal folders (called portals here) for further access.

Multiple projectors can be overlaid to "complete the picture" and give a fuller experience. This can be used to expand the display area, creating a larger single projection out of multiple projections without sacrificing resolution. They also use it to combine multiple views into a single view. For example, people can share their calendars by merging them into a single calendar, see the times when they can meet, and make a new entry in the merged calendar. They can then dissociate from each other and carry on with their new calendar copies containing the old details and the new meeting plans.

Figure 2.4: Collaborators build up a context display with a focus in it (Cao et al. [2007])

The authors address an inherent property of projectors - the change in projection size and brightness with the distance from the projection surface - and use it for several applications. One user can use his projector to show a complete, low-detail image of an object. Another user can move his projector through it and show a detailed view in the area illuminated by his portion of the projection. This has been taken further: collaborators can be allowed to see classified details only when they have all projected onto the same particular area of the projection surface. Once done, this "unlocks" the possibilities of appending or correcting. The authors realize snapping between two objects by creating linkages between them. For example, snapping two maps showing addresses results in showing the directions between the two. Users can further dock objects of a similar type together to change their appearance. The system also tracks spatial relationships between users according to the spatial location of the projectors: when a user is viewing private data and another user comes into close range, this data is blurred.

The authors successfully performed preliminary user testing. They indicate that all the participants grasped the system concepts quickly and did not show any difficulty learning the interaction techniques. However, they mention that the participants' experience was affected by the imperfect alignment between projectors as well as the somewhat jittery projection caused by a non-ideal image update rate (25 Hz). They suggest that these problems can be reduced with technical advances. They also mention that users had some reservations about projecting private information in public space, which requires better system design.
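The distance property the authors exploit follows simple projection geometry, sketched below for intuition. The throw ratio is a hypothetical value, not a parameter of Cao et al.'s hardware.

    def projection_width(distance, throw_ratio=1.5):
        """Projected image width grows linearly with the distance to the
        surface (throw ratio = distance / image width)."""
        return distance / throw_ratio

    def relative_brightness(distance, ref_distance=1.0):
        """The same light output spreads over an area that grows with the
        square of the distance, so brightness falls off quadratically."""
        return (ref_distance / distance) ** 2

    print(projection_width(3.0))     # a 2.0 m wide image at 3 m
    print(relative_brightness(3.0))  # ~0.11 of the brightness at 1 m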

2.6 Enforcing Collaboration - Sharing the Squid

Most of the papers that we have discussed so far have been about users who primarily wish to collaborate and who use technology to satisfy this wish. In this section, we present a paper that departs from this notion and works in the opposite direction: Stern et al. [2008] show how technology can provide interfaces that make even unwilling users collaborate. They present a tangible collaborative interface built out of a soft toy in the shape of a squid, which can only perform optimally when the people holding its tentacles collaborate and help the group reach a common goal. According to the authors, this helps improve team spirit and strengthens a multi-player, team-based work culture while people work in a joyful setting. The paper investigates the possibility of building this cohesiveness by sharing audio-visual media in a process of storytelling. The setting is divided into two main components: the sensor-laden squid for collaboration, and the audio-visual game.

The Game: Up to five participants can play this game simultaneously. Each participant holds one tentacle of the squid. The goal is to make a meaningful relationship between two of the three objects shown on the display. Each tentacle end has a different control or sensor and hence a different way to control the overall experience. One user may use a sensor to refresh the contents of the display; another user can use a different sensor, such as a light sensor, to change the relationship between the selected objects, or to select or reorganize the order of the objects.

Figure 2.5: Here we can see the squid in action (Stern et al. [2008])

The Squid: Each tentacle of the toy has a different sensor or controller attached to it and hence affords that only one user at a time gets control over one part of the collaboration scheme. Stern et al. built in various sorts of sensors. An RFID sensor located in the head of the squid keeps track of the players collaborating with the squid individually, because each user first has to authenticate himself by scanning his personal RFID bracelet at the head of the squid. This allows the system to personalize the experience after learning about the user's habits and preferences over time. The sensors located in the tentacles include a potentiometer, a pinch sensor, velcro pads, a bend sensor, and a light-dependent resistor (LDR). The potentiometer is used as a knob that converts analog data to digital values; it can be turned to highlight and select any of the displayed media objects. The bend sensor detects the level of confidence in the decision made regarding the display: the higher the bend, the higher the confidence. The light sensor, based on the LDR, detects whether the light level exceeds a certain value; it can be used to renew the relationships by moving the tentacle closer to a light source such as a lamp. The velcro pads act as switches which, when opened and closed, refresh the display with new media objects. The pinch sensor is similar to the velcro pads.
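Based on the sensor descriptions above, the mapping from tentacle readings to game actions could look like the following sketch. The threshold values, dictionary keys, and event names are our assumptions; the paper does not specify this mapping.

    def squid_events(sensors):
        """Map raw tentacle readings to game actions (illustrative only)."""
        events = []
        if sensors.get("knob_delta", 0) != 0:
            events.append(("select_object", sensors["knob_delta"]))  # potentiometer
        if sensors.get("bend", 0.0) > 0.5:
            events.append(("confidence", sensors["bend"]))           # bend sensor
        if sensors.get("light", 0.0) > 0.8:
            events.append(("renew_relationship", None))              # LDR near a lamp
        if sensors.get("velcro_cycled", False):
            events.append(("refresh_display", None))                 # velcro switch
        return events

    print(squid_events({"knob_delta": 1, "light": 0.9}))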

Since the design requires maneuvers that cannot be accomplished by operating these sensors alone, the participants have to go beyond mere actions on the squid: they must talk with each other and gather the feedback from the screen to be successful. According to the authors, this leads to higher levels of collaboration, and users collaborate in a much more relaxed setting. The authors conducted a preliminary user study with eleven volunteers to observe the utility of their device and to identify possibilities for further improvements. The sessions were short, video-recorded, and complemented by an interview session after the game. A good aspect of the study was the mix of users: some knew each other while others were strangers, which provided good preliminary data. Users gave positive feedback, although it was observed that direct eye contact did not necessarily occur. The authors intend to further develop and refine the squid by arming it with new sensors that offer both digital and analog values or provide haptic feedback.

2.7 Reviewing Mechanisms

The work presented so far is dedicated to supporting collaboration at runtime by new collaborative techniques and interfaces. In this section, however, we focus on reviewing mechanisms for collaboration. DiMicco et al. [2006] suggest visualizing user activities, which were recorded during the collaboration, after the collaboration has ended. They suggest that visualization can help gauge both the level of interaction the group enjoys and the roles individuals play in the group dynamics, in order to locate the strong and the weak players. The authors mark a departure from their previous research, moving from a shared display of real-time group dynamics to a more private report of individual performance for reviewing. The system, which is called Second Messenger, tracks the vocal input of the participants through noise-canceling microphones, and each time a participant talks, this is logged with the corresponding time-stamp. The log data is used to create the following four visualizations:

• Ongoing Participation Level is a histogram whose bars reflect the relative amount of talking done by each user.

• Turn-taking Patterns shows the relative current position of each user in the conversation. Each user is depicted by a differently colored ball; a ball moving up vertically shows an increase in the number of turns taken by the user, and vice versa.

• Overlapped Speech shows the speakers as small circles growing in size with the amount of participation. Each circle is segmented into pie sections according to the amount of overlapping speech by the others. This might indicate interruptions or, quite the contrary, joint laughter.

• Floor Control is a timeline visualization which can be played back to observe who spoke at which moment and, in case of overlapped speech, who gained control over the other.

Figure 2.6: A variant of review visualizations: Ongoing participation level (DiMicco et al. [2006])
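The first visualization is straightforward to derive from the time-stamped speech log; a minimal sketch follows. The log format of (speaker, start, end) tuples is our assumption about how the logged events might be represented.

    from collections import defaultdict

    def participation_levels(log):
        """Ongoing participation level from time-stamped speech events:
        `log` is a list of (speaker, start, end) tuples."""
        talk = defaultdict(float)
        for speaker, start, end in log:
            talk[speaker] += end - start       # total speaking time per user
        total = sum(talk.values()) or 1.0
        return {s: t / total for s, t in talk.items()}  # relative amounts

    print(participation_levels([("ann", 0, 30), ("ben", 30, 40), ("ann", 40, 50)]))
    # {'ann': 0.8, 'ben': 0.2}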


The visualization was tested with eight subjects in the context of two meetings of 20 minutes each. The time spent on understanding the visualizations was close to 20 minutes, which is identical to the time needed to watch the video, but the authors expect the time needed to interpret the visualizations to decrease as familiarity with the system grows. The reports were compared with the original videos. Certain aspects like the level of interruption, turn-taking, and individual trait evaluation proved to be consistent, so the tool works best at revealing extreme behavioral differences. Users were also able to detect, from speaking patterns alone, the side of a debate that each person fell on. Bergstrom and Karahalios [2007] go a step further and provide anonymous voting, whose results are visualized for sets of four collaborators in real time. Positive anonymous voting strengthens the collaboration at that reference point, while negative voting may reflect the sidetracking of the conversation. Kim et al. [2008] go beyond purely voice-based visualization and suggest sociometric badges, which measure body movement like gestures and walking, real-time speech features, proximity data, indoor user localization, and face-to-face interaction using IR. The badge data are then correlated, and the results are finally visualized on mobile phones.


Chapter 3

Future Work

We would like to end this report by enumerating a few possibilities for further research in the realm of collaborative user interfaces.

The research we came across was usually designed and developed for dedicated scenarios. These are controlled environments which make a lot of assumptions about user behavior during collaboration. It might be worthwhile to determine common guidelines for how people really collaborate with each other. These guidelines could provide a generic and reusable framework, which could be consulted after developing any collaborative user interface. This might ascertain the quality of the afforded collaboration before expensive user testing is performed.

Most of the user studies are done with groups of two collaborators. In real life, however, collaboration usually happens among a larger set of individuals, which may complicate the simplified view with just two collaborators. Hence, it might be beneficial to involve a larger set of individuals collaborating with each other. This would be more difficult, expensive, and time-consuming, but it might help to confirm the quality of the user interface even more.

The squid is in the early stages of research, but preliminary studies have been encouraging. It might benefit from more usage scenarios and rigorous user testing, involving different settings like collaboration with strangers and collaboration with team members who know each other well. Instead of the qualitative user testing mentioned above, it might be beneficial to obtain meaningful quantifiable results, if possible.

Reviewing mechanisms using visualizations are also in the early stages of research. Unfortunately, they do not include a visualization of the complete environment; the focus is only on speech. We believe that it might be interesting to observe other aspects of the collaboration to make a more informed decision about group dynamics. These might include user activities like logging the cursor movement on personal screens. For example, higher levels of movement might reflect higher levels of disinterest in the collaboration.

Besides the suggestions mentioned above, some possibilities were presented by the authors of the selected papers as well. These have been summarized at the end of the respective sections.

Bibliography

Pedro Antunes and Nuno Guimaraes. Multiuser interface design in CSCW systems. Technical report, 1994.

Tony Bergstrom and Karrie Karahalios. Conversation votes: enabling anonymous cues. In CHI '07: Extended Abstracts on Human Factors in Computing Systems, pages 2279-2284, New York, NY, USA, 2007. ACM.

Richard A. Bolt. "Put-That-There": Voice and gesture at the graphics interface. In SIGGRAPH '80: Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, pages 262-270, New York, NY, USA, 1980. ACM.

Xiang Cao and Ravin Balakrishnan. Interacting with dynamically defined information spaces using a handheld projector and a pen. In UIST '06: Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, pages 225-234, New York, NY, USA, 2006. ACM.

Xiang Cao, Clifton Forlines, and Ravin Balakrishnan. Multi-user interaction using handheld projectors. In UIST '07: Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, pages 43-52, New York, NY, USA, 2007. ACM.

Joan Morris DiMicco, Katherine J. Hollenbach, and Walter Bender. Using visualizations to review a group's interaction dynamics. In CHI '06: Extended Abstracts on Human Factors in Computing Systems, pages 706-711, New York, NY, USA, 2006. ACM.

Clarence A. Ellis, Simon J. Gibbs, and Gail Rein. Groupware: some issues and experiences. Communications of the ACM, 34(1):39-58, 1991.

Douglas C. Engelbart. Collaboration support provisions in AUGMENT. In OAC '84 Digest: Proceedings of the AFIPS Office Automation Conference, pages 51-58, February 20-22, 1984.

Douglas C. Engelbart and W. English. A research center for augmenting human intellect. In Proceedings of the AFIPS Fall Joint Computer Conference, pages 395-410, 1968.

Taemie Kim, Agnes Chang, Lindsey Holland, and Alex (Sandy) Pentland. Meeting mediator: enhancing group collaboration with sociometric feedback. In CHI '08: Extended Abstracts on Human Factors in Computing Systems, pages 3183-3188, New York, NY, USA, 2008. ACM.

Meredith Ringel Morris, Kathy Ryall, Chia Shen, Clifton Forlines, and Frederic Vernier. Beyond "social protocols": multi-user coordination policies for co-located groupware. In CSCW '04: Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, pages 262-265, New York, NY, USA, 2004. ACM.

Donald A. Norman. The Design of Everyday Things. Basic Books, September 2002.

Abhishek Ranjan, Jeremy P. Birnholtz, and Ravin Balakrishnan. An exploratory analysis of partner action and camera control in a video-mediated collaborative task. In CSCW '06: Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, pages 403-412, New York, NY, USA, 2006. ACM.

Kathy Ryall, Alan Esenther, Clifton Forlines, Chia Shen, Sam Shipman, Meredith Ringel Morris, Katherine Everitt, and Frederic D. Vernier. Identity-differentiating widgets for multiuser interactive surfaces. IEEE Computer Graphics and Applications, 26(5):56-64, 2006.

M. Stefik, D. G. Bobrow, G. Foster, S. Lanning, and D. Tatar. WYSIWIS revised: early experiences with multiuser interfaces. ACM Transactions on Information Systems, 5(2):147-167, 1987.

Rebecca Stern, Aisling Kelliher, Winslow Burleson, and Lisa Tolentino. Sharing the squid: tangible workplace collaboration. In CHI '08: Extended Abstracts on Human Factors in Computing Systems, pages 3369-3374, New York, NY, USA, 2008. ACM.

Chengzheng Sun and Clarence Ellis. Operational transformation in real-time group editors: issues, algorithms, and achievements. In CSCW '98: Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work, pages 59-68, New York, NY, USA, 1998. ACM.

Chengzheng Sun, Steven Xia, David Sun, David Chen, Haifeng Shen, and Wentong Cai. Transparent adaptation of single-user applications for multi-user real-time collaboration. ACM Transactions on Computer-Human Interaction, 13(4):531-582, 2006.

David Sun, Steven Xia, Chengzheng Sun, and David Chen. Operational transformation for collaborative word processing. In CSCW '04: Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, pages 437-446, New York, NY, USA, 2004. ACM.

Steven Xia, David Sun, Chengzheng Sun, David Chen, and Haifeng Shen. Leveraging single-user applications for multi-user collaboration: the CoWord approach. In CSCW '04: Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, pages 162-171, New York, NY, USA, 2004. ACM.

Typeset November 20, 2008

top of the application. This prevents from offering a real user experience beyond data plotting in the cloud. For instance, how to build a single interface to monitor ...