In 2008, a group of HCI researchers from the Tufts University Department of Computer Science and the MIT Media Lab proposed the notion of Reality-Based Interaction (RBI) as a concept that ties together emerging human-computer interaction styles. Alongside this concept, they provided a framework for analyzing and understanding new interfaces and their interaction techniques. Here I’ve used that framework to analyze the reacTable TUI.
Reality-Based Interaction (RBI) Themes
The reacTable’s basic interaction techniques build directly on the user’s knowledge of naïve physics (NP) and of physical space (EAS). To add a controller to the system, the user simply picks up a puck and places it on the surface; the puck can then be moved across the surface or rotated. Doing so produces visual feedback projected onto the physical surface and audio feedback from speakers placed in the surrounding environment (EAS). Because the reacTable surface is round, users can walk around the TUI to change viewpoints, leveraging their ability to move their bodies to different positions in the environment (EAS). And because there are no privileged points of view or points of control, the reacTable also encourages collaboration and competition among multiple users, drawing on their social awareness and skills (SAS).
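The interaction loop described above (place a puck to add a controller, then move or rotate it to shape the output) can be sketched as a toy event model. This is purely illustrative and not the reacTable implementation: the `Puck` and `Surface` classes and the rotation-to-frequency mapping are all assumptions made for the example.

```python
import math

class Puck:
    """A tangible controller placed on the tabletop (illustrative only)."""
    def __init__(self, puck_id, kind):
        self.id = puck_id
        self.kind = kind          # e.g. "oscillator" or "filter" (assumed kinds)
        self.x = 0.0
        self.y = 0.0
        self.angle = 0.0          # orientation in radians

class Surface:
    """Minimal model of the round tabletop: pucks map to synth parameters."""
    def __init__(self):
        self.pucks = {}

    def add(self, puck):
        # Placing a puck on the surface adds a controller to the system.
        self.pucks[puck.id] = puck

    def move(self, puck_id, x, y):
        # Sliding a puck updates its position on the surface.
        p = self.pucks[puck_id]
        p.x, p.y = x, y

    def rotate(self, puck_id, angle):
        # Rotating a puck updates its orientation, wrapped to one full turn.
        self.pucks[puck_id].angle = angle % (2 * math.pi)

    def frequency(self, puck_id, lo=110.0, hi=880.0):
        # Hypothetical mapping: rotation (0..2*pi) scaled linearly onto
        # a frequency range, standing in for the audio feedback.
        frac = self.pucks[puck_id].angle / (2 * math.pi)
        return lo + frac * (hi - lo)

surface = Surface()
surface.add(Puck(1, "oscillator"))
surface.rotate(1, math.pi)             # half a turn
print(round(surface.frequency(1), 1))  # midpoint of 110-880 Hz: 495.0
```

In the real system the pucks are tracked optically via fiducial markers rather than instrumented directly, but the essential point the sketch captures is the same: ordinary physical actions (placing, sliding, turning) are the entire control vocabulary.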
Designed to provide an immersive experience for the user and onlooking audience, the reacTable uses a large tabletop surface together with a camera, a projector, speakers, visual and audio synthesizers, and numerous connection tools. The designers also defined the interaction environment: typically a large room with no other lights or noises to distract users from the reacTable. In making these decisions, the designers traded practicality for realism, incurring costs in space, size, and power consumption.
The reacTable TUI strikes a careful balance between reality and other desirable qualities, including expressive power, efficiency, versatility, ergonomics, and accessibility.