The S-BAN: Insights into the Perception of Shape-Changing Haptic Interfaces via Virtual Pedestrian Navigation

Short Title: Perception of Shape-Changing Haptic Interfaces

ADAM J. SPIERS, Max Planck Institute for Intelligent Systems and Imperial College London, a.spiers@imperial.ac.uk
ERIC YOUNG, Max Planck Institute for Intelligent Systems, yoeric@is.mpg.de
KATHERINE J. KUCHENBECKER, Max Planck Institute for Intelligent Systems, kjk@is.mpg.de

Screen-based pedestrian navigation assistance can be distracting or inaccessible to users. Shape-changing haptic interfaces can overcome these concerns. The S-BAN is a new handheld haptic interface that utilizes a parallel kinematic structure to deliver 2-DOF spatial information over a continuous workspace, with a form factor suited to integration with other travel aids. The ability to pivot, extend and retract its body opens possibilities and questions around spatial data representation. We present a static study to understand user perception of absolute pose and relative motion for two spatial mappings, showing highest sensitivity to relative motions in the cardinal directions. We then present an embodied navigation experiment in virtual reality. User motion efficiency when guided by the S-BAN was statistically equivalent to using a vision-based tool (a smartphone proxy). Although haptic trials were slower than visual trials, participants' heads were more elevated with the S-BAN, allowing greater visual focus on the environment.

CCS CONCEPTS: • Human-centered computing → Human-computer interaction (HCI) → Interaction devices → Haptic devices • Hardware → Emerging technologies → Emerging interfaces • Human-centered computing → Ubiquitous and mobile computing → Empirical studies in ubiquitous and mobile computing

Additional Keywords and Phrases: Haptics, Navigation, Shape-Changing Interfaces

1 INTRODUCTION

Smartphones and GPS technology have revolutionized the way that people travel by vehicle and on foot. As pedestrians in the modern age, we can generally eschew paper maps for a multi-purpose pocket-sized device that can guide us to unfamiliar locations around the world. Though both revolutionary and beneficial, such navigation technology primarily interfaces with users through screens and audio cues, which have limitations. Although smartphone screens are capable of displaying information-rich maps annotated with suggested routes, numerous studies have shown these displays to be highly distracting to drivers and pedestrians [2,18,29,30,49,50]. Such distraction causes dangerous loss of attention that can lead to accidents and hospital admissions [30,52,55]. Furthermore, for individuals with vision impairments, screen-based interfaces are inaccessible. An obvious alternative has been to deliver navigation information through audio, which often requires the use of headphones in busy urban spaces. Unfortunately, such systems can diminish a vision-impaired (VI) user's ability to perceive and appreciate their environment [5,19,51], while also limiting social interactions (a major factor in the abandonment of assistive technologies [3,34,42]).
Furthermore, the obscuring of ambient sounds can have a detrimental effect on navigation and localization, as such sounds can highlight hazards and be used as spatial landmarks [6,19].

Touch is an alternative sensory modality that can be used to communicate navigation information to sighted, vision-impaired and deaf-blind individuals, enabling the potential for developing inclusive navigation aids that are accessible and useful to multiple demographics, rather than specialized assistive technologies for VI persons. Haptic feedback is particularly appealing for pedestrian navigation interfaces given the less critical role of the sense of touch during walking (compared to sight and hearing). Indeed, Spiers and Dollar recently highlighted that the most long-standing VI navigation aids (the guide cane and guide dog) are both haptic interfaces, providing mechano-tactile cues to the user via their grip on a handle or harness [45].

For many decades, researchers have considered the potential of haptic devices as navigation tools, with a focus on using vibration-based stimuli from eccentric rotating mass (ERM) actuators to indicate directions to walk or obstacles to avoid [19,24,35,38,40]. Though ERM vibration is a simple, compact and cost-effective method of delivering haptic cues, it too has limitations [9,48]. Oakley and Park pointed out that the attention-grabbing nature of vibrotactile cues has cemented their success in providing cell-phone alerts for events of high importance, such as an incoming phone call [32]. However, as anyone who has ever disabled their phone's audio and vibration alerts due to an overly active chat group would know, such cues quickly become tiresome if the messages are in fact not of high importance. In the case of pedestrian navigation guidance, information is generally provided frequently over periods of tens of minutes. In these cases, attention-grabbing vibrotactile haptic cues can soon become irritating and distracting, as has been observed in several studies [28,44,48,58]. Vibration also typically cannot convey a direction on its own, requiring the use of multiple discrete actuators that must all touch the skin, and limiting the spatial resolution of the conveyed information.

Obviously, vibration is not the only way that humans perceive touch. Spiers and Dollar previously argued that humans can adeptly perceive shape with their hands and that this haptic modality incurs relatively low cognitive load, given the subtle capabilities of shape perception demonstrated in daily life [45]. These properties make shape change a compelling interface solution for the task of providing navigation cues to users. This hypothesis was confirmed by testing navigation cues from equivalent shape-changing and vibrotactile handheld devices in an embodied navigation study [44]. In that work, the haptic shape-changing device was the Animotus, a segmented cube with dedicated actuators for rotating and extending the upper half of its body relative to the bottom; body rotation was used to communicate direction cues, while body extension communicated distance cues.
Shape-changing interfaces belong to a relatively young yet diverse field of HCI, within which only a small subset of devices possess sufficient force capability to output haptic cues [1]. Many shape-changing interfaces and related research focus only on visual feedback, e.g. [8,20,36,39]. Of the subset with haptic output capability, many devices are desk-based or desk-sized, due to large actuator volumes that prevent portability [4,12,47]. Consequently, portable haptic-output, shape-changing devices (e.g. [15–17,26,45]) are sparse in the literature. The most similar comparisons to such systems are wearable or holdable portable devices that utilize other mechanotactile modalities to provide spatial cues, e.g. skin stretch, indentation, squeezing, dragging, asymmetric torque, the gyroscopic effect and weight shift [10,14,15,33,48,53,54]. Some other novel handheld devices utilize changes in center-of-mass, air-drag or weight distribution to generate passive dynamic haptic sensations, meaning that the user must move the device through space to sense the variation in properties [41,56,57]. These systems have been developed with the intention of making virtual reality (VR) controllers feel more like interactive objects in VR gaming scenarios. Example objects are swords, shields, crossbows and guns that the user holds and moves around. Note that such systems provide non-spatial, egocentric haptic information and so are not suited for navigation applications.

In comparison to many of the above systems, users of shape-changing systems are able to feel the relative change of a system (as it transitions from one shape to another) in addition to the absolute shape of the system, irrespective of motion. The latter is particularly interesting as it enables a system to continue to convey information without applying any active stimulus to the user, which is not the case with vibration-based systems. This feature also means that a shape-changing interface may be re-grasped without loss of information: for example, users of the shape-changing Animotus device were able to release and re-grasp the interface to physically explore set-pieces as part of an immersive theatre experience [27].

We believe that the scarcity of haptic shape-changing interfaces is a result of 1) the relative difficulty of designing and fabricating these mechatronic systems (compared to, for example, outputting vibration via ERM motors) and 2) a lack of data on how such devices are perceived by users (again, compared to the extensive literature on vibration stimuli [23,27]). In this paper, we contribute to the field of haptic shape-changing interfaces and non-visual navigation guidance with a new device whose form factor and output capability outperform previously published designs. Furthermore, we characterize the device's properties via a perceptual study to understand how people interpret dynamic shape cues and a VR navigation study to accurately compare user performance when using shape-changing devices vs. visual modalities, including a smartphone proxy.

1.1 Device Design

The new device (Figs. 1 & 2) is called the S-BAN (Shape-Based Assistance for Navigation). Rather than having dedicated actuators for each degree of freedom (DOF), as in [17,26,43], the S-BAN uniquely uses a parallel kinematic scheme to create a continuous two-dimensional workspace (Fig. 2) that is more analogous to desktop haptic interfaces such as the Pantograph MK-II [7] or the Phantom family of devices [25].
The continuous workspace of desktop devices allows flexibility in haptic rendering applications; the continuous workspace of the S-BAN is similarly intended to allow exploration of various spatial rendering options, two of which are tested here. The S-BAN is open source and easy to 3D-print and assemble (CAD file downloads and assembly instructions may be found at https://hi.is.mpg.de/research_projects/S-BAN and are also attached to this paper as supplemental materials). We therefore hope that others will use (and potentially modify) the platform to explore additional mappings that may be suited to other data representations (for example, navigating data in abstract dimensions or playing video games). Furthermore, unlike previous systems, the S-BAN can render spatial cues behind the user due to its novel tactile notches (Fig. 1).

The S-BAN's parallel kinematic design allows compact, side-by-side actuator placement, leading to a slim and elongated form factor that may be held like a flashlight, a tried and tested ergonomic design suitable for extended periods of use. The flashlight holding posture negates the awkward arm pose necessary for use of the Animotus haptic device, which made some users self-conscious [44] and led to incorrect device grasps and arm fatigue [46].

Figure 1: The S-BAN is a 2-DOF navigation device that can extend and pivot its end effector relative to its handle. The user is able to feel both the change in overall device shape and the relative alignment of notches on the sides of the device.

Figure 2: The S-BAN in a user's hand illustrating several poses (shapes) using the Mid-Point kinematic mode. These poses cover an extension change of ±5 mm and an angle change of ±17 deg.

The design of the S-BAN combines pragmatic physical constraints and a conjecture on shape perception. The physical constraints were centered on implementing the desired 2-DOF end-effector mechanism in a handheld package with sufficient forces to move a user's fingers across the given workspace. The conjectural aspect was that we had to predict how the sensations generated by such a device would be perceived by users, given sparse past literature. Indeed, we consider the S-BAN a prototype that occupies only a small region of the vast and largely unexplored design space of shape-changing haptic interfaces. This space covers factors such as perceptual quality, form factor and tactile aesthetics.

The slim design and comfortable holding posture of the S-BAN are intended to enable future integration of the technology into existing travel aids, such as guide cane handles or smartphone cases (Fig. 3), where it may enhance such systems by providing haptic shape-changing feedback. Though guide canes also provide haptic feedback (by transmitting impacts, forces and vibrations), we do not believe that there will be interference with the shape-changing feedback of the S-BAN, due to the distinction in haptic modalities. Furthermore, past work has shown successful integration of other haptic modalities into guide cane devices without haptic sensation interference [2,13,17,22,55]. Smartphone integration is suggested primarily to avoid having to carry and interact with two separate devices (a smartphone and an S-BAN) but could also facilitate the use of visual cues (for sighted persons) or audio cues (for sighted or VI persons) to reinforce or supplement shape-based guidance.
Figure 3: Conceptual illustrations of future integrations of the S-BAN concept with (left) a guide cane for vision-impaired users and (right) a smartphone case. Both devices use shape to haptically communicate spatial guidance commands without reliance on sight or sound.

1.2 Device Testing

We provide thorough testing of the S-BAN in more detail than previously attempted with a portable shape-changing haptic interface. The typical psychophysical testing approaches used with many haptic interfaces become inapplicable when the interface has more than a single DOF [13]. Furthermore, though some experimental psychology literature exists on the haptic perception of shape by humans [21], these studies have not been extended to dynamic shapes, leading to further questions on how users will interpret shape-changing haptic stimuli. For example, it has not previously been determined whether users are able to perceive dynamic shapes better in an absolute sense (i.e. identifying a pose) or a relative manner (i.e. identifying a change between poses).

1.2.1 Perceptual Study

The above perceptual questions led us to perform absolute and relative static perceptual studies with the S-BAN, as reported in Sections 3 and 4. To showcase the flexibility of the S-BAN's continuous workspace, these tests are both completed for two different kinematic mappings. The results show that shape perception does indeed depend on the employed mapping. They also indicate which mode is most effective for spatial information communication and inform our use of the device for pedestrian guidance.

1.2.2 Navigation Study

Though Spiers and Dollar previously performed embodied guidance experiments to compare against non-visual vibrotactile systems [44], there has yet to be a comparison between shape-changing navigation interfaces and visual navigation solutions, which we achieve in this work (Sections 5 and 6). Visual feedback of spatial data via smartphones is a ubiquitous technology in pedestrian guidance. As such, we wish to test against this gold standard on our journey to creating non-visual guidance technology that will benefit both sighted and VI individuals. By utilizing VR for these studies, we avoid the accuracy issues of GPS + IMU localization systems that adversely affected user experience in earlier outdoor [44] and indoor [46] navigation experiments. The VR setting also permits us to measure user attention to the handheld device and surroundings via headset pose measurements.

In summary, we present the following contributions:

1. The S-BAN, a shape-changing handheld haptic interface that can produce pivoting and extending/retracting sensations across a continuous workspace, including behind the user. Our design enables the exploration of various kinematic representations of spatial information. Two example kinematic representations are presented in this work.

2. A perceptual study that measures how well users can perform absolute device pose (shape) and relative motion (between shapes) estimation tasks. The study evaluates the two example kinematic representations for both tasks and identifies the most favorable mapping. Areas of high/low sensitivity and user opinions of the device and study are also presented.

3. A navigation study conducted in virtual reality in which users are guided to targets with various types of visual and haptic shape-change feedback. User movement efficiency and task completion time provide measures of performance, while head pose informs us of visual attention focus.
2 MATERIALS

Providing shape-changing feedback to a user requires an actuated physical device as well as a logical method for mapping navigation commands into device movements.

2.1 S-BAN Hardware

The goal of the S-BAN is to aid human walking navigation by using touch to communicate movement instructions that will enable the user to reach navigational targets. For outdoor pedestrian-navigation applications, we envision these instructions will be generated by a smartphone application similar to Google Maps. We also propose that the S-BAN can be used to aid the navigation of virtual environments, as we show in an experimental scenario later in this paper. As determined in [25], providing both direction and distance to a navigational waypoint greatly improves the navigation performance of users over either of these components independently. We build upon this prior work with the development of a new 2-DOF device that utilizes a parallel kinematic structure to allow high force generation, a more ergonomic body and a continuous workspace that also communicates backwards motions (which were not possible with the device of [25]).

The S-BAN structure (Fig. 4) centers around two linear servo actuators (Actuonix L12-30-50-06-I) contained in the handle portion of the device. These actuators are grounded via dowel pins in the proximal part of the handle and distally connected together via the end-effector linkage, which in turn is connected to the end-effector portion of the device. As the linear actuators independently extend and retract, the end effector can simultaneously pivot left/right and extend/retract, relative to the handle. This internal linkage movement is then perceived by the user as the overall shape of the device extending or retracting and bending/pivoting to one side or the other.

Figure 4: An exploded view of the S-BAN. Motion is achieved via two linear actuators arranged in a parallel configuration.

Located on either side of the end effector and handle are recessed tactile notches (Fig. 1) that align when the device is at its home position (the center of the workspace). These notches were added to the S-BAN following initial pilot studies, where it was observed that though users could feel changes in device pose, they struggled to identify whether the device was in front of or behind the home position.

The overall elongated shape of the S-BAN is inspired by a hand-held flashlight, a simple physical design that may be held without discomfort for extended periods of time while navigating. We consider this to be an improvement over the Animotus, whose cube shape led to awkward holding poses [45]. Contained within the end effector is an 8×8 LED array (manufactured by Adafruit), which can provide illumination through the 1.5-mm-thick top plate (as shown in Fig. 4). Though we do not use the LED array in the studies presented in this paper, it is intended to allow future comparison of visual vs. haptic cues within the same device in physical (non-VR) navigation applications. Future models of the S-BAN will also be created without the LED array to allow for a more compact end effector; the optimum length will be determined in planned studies.
Also included in the S-BAN handle is a 9-DOF IMU that can be used as a tilt-compensated compass in situations where external orientation measurements are not available (e.g. outdoors, when using GPS). In this paper we use the built-in tracking of the Oculus Quest VR headset to measure the orientation of the S-BAN during the navigation study (Section 5).

The handheld S-BAN measures 190×50×25 mm in its fully extended pose and has a mass of 160 grams. For the current prototype, the supporting electronics (including a Bluetooth module, Arduino Nano and LiPo battery) are contained in a tethered enclosure (110×70×35 mm, 210 g) that either rests on the desk for static studies or is carried in a small shoulder bag for mobile applications. Future plans include more compact custom-built electronics that may be integrated into the main device body. We also plan to investigate the possible inclusion of a small eccentric rotating mass motor, which may be used for providing short alerts for immediate and dangerous hazards in real-world navigation, such as when the user must stop and wait at a road crossing. The ERM motor could also signal when a final destination has been reached, as is common in smartphone or in-vehicle navigation systems. This concept of augmenting the low cognitive demands of shape-change sensations with the alerting nature of vibration was previously proposed by Spiers and Dollar [45].

2.2 Kinematics Control Scheme Selection

The S-BAN uses a planar parallel kinematic configuration, with two actuators connected to a single end effector. A somewhat comparable structure may be seen in the Pantograph MK-II [7], a desk-based haptic interface with two base-mounted rotary actuators that drive a planar linkage that terminates in a single point. In contrast, the S-BAN uses linear actuators, and its end effector is a rigid body that both translates and rotates (Fig. 3). This arrangement means that the S-BAN uses the coordinated motion of its linear actuators to simultaneously change the angle and extension of its end effector relative to its base (the handle).

The exact mapping between spatial information and actuator extension depends on the part of the device body selected as the kinematic control point, the point from which the target angle (θ) and target extension (e) are measured. Given a combination of θ and e as control inputs, we use inverse kinematic calculations to determine the necessary extensions of the left (l_L) and right (l_R) linear actuators to achieve those targets relative to the selected control point.

There are several options for the control point, such as the tip of the end effector or the mid-point between the actuators, each resulting in a unique kinematic scheme. The choice of scheme influences the haptic sensations generated by the device and leads to different kinematic constraints. Given that there are no prior haptic devices like the S-BAN, selecting an appropriate scheme is not obvious. While designing the device and running initial pilot studies, we identified two kinematic options as the ones most likely to be easily interpreted by users, each focusing on different aspects of how the S-BAN can communicate. The first of these, named Mid-Point, uses the mid-point between the notches of the end effector as the control point. In the other scheme, named Leading-Notch, the mid-point between the notches is the control point only for movements with no lateral deviation (i.e. forwards and backwards motions only). When the device turns to the left or right, the left or right notch, respectively (i.e. the leading notch), becomes the control point.
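To make the idea of a control-point mapping concrete, the following is a minimal sketch of the kind of inverse-kinematics computation involved for the Mid-Point scheme. It assumes a simplified planar geometry in which the two actuators are separated laterally by a fixed distance and the control point lies midway between them; the actuator separation, the small-angle model, the sign convention and all names are illustrative assumptions, not the derivation given in Appendix A.

```python
import math

# Illustrative geometry: lateral separation between the two linear actuators (mm).
# This value and the simplified model below are assumptions, not the Appendix A derivation.
ACTUATOR_SEPARATION_MM = 20.0


def mid_point_ik(theta_deg: float, extension_mm: float,
                 d: float = ACTUATOR_SEPARATION_MM) -> tuple[float, float]:
    """Approximate inverse kinematics for the Mid-Point scheme.

    Maps a target angle theta (deg) and extension e (mm) of the control point,
    measured from the home pose, to left/right actuator strokes (l_L, l_R) in mm.
    With the control point midway between the actuators, the mean of the two
    strokes sets the extension and their difference sets the pivot angle.
    Here theta > 0 is taken as a rightward pivot, so the left actuator extends
    further than the right one (sign convention chosen for illustration only).
    """
    theta = math.radians(theta_deg)
    half_diff = 0.5 * d * math.tan(theta)   # differential stroke producing the pivot
    l_left = extension_mm + half_diff       # strokes relative to the home pose
    l_right = extension_mm - half_diff
    return l_left, l_right


# Example: full rightward rotation at the home extension
# (the workspace limits quoted in the text are ±17 deg and ±5 mm).
print(mid_point_ik(17.0, 0.0))   # -> roughly (3.06, -3.06) mm of differential stroke
```

A Leading-Notch variant would differ only in where the control point sits once the device deviates laterally, which changes how the two strokes are shared between extension and pivot for the same (θ, e) command.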
As seen in Fig. 5, these two schemes yield quite different device poses for the same control inputs. While Mid-Point considers the motion of the end effector and overall device shape more generally, the Leading-Notch mode attempts to highlight the tactile sensations from the S-BAN's notches (Fig. 1).

Figure 5: Two kinematic modes are investigated in this work: Mid-Point and Leading-Notch, which are named after the part of the S-BAN end effector used as the control point. The methods are further described in Appendix A.

The inverse kinematic derivation for each method is detailed in Appendix A.

2.3 Workspace Differences

The choice of inverse kinematic scheme influences the range of control input pairs (target angle, θ, and target extension, e) that the device can display. As shown in Fig. 5, the same target angle and target extension typically lead to different final device poses for each scheme. The difference is illustrated in Fig. 6, where we can observe that each inverse kinematic scheme enables the device to reach a different chevron-shaped set of control input pairs, with many control inputs reachable by only one of the schemes.

Figure 6: The reachable workspace of the haptic device (the blue chevron) is influenced by the choice of inverse kinematic scheme; the black dotted lines mark equally sized rectangular regions within the reachable regions of each scheme. The center of each reachable workspace (indicated with a + symbol) corresponds to the home pose, where the angle and extension communicated to the user both equal zero.

To fairly compare the kinematic schemes within the perceptual studies, we define equally sized regions of the reachable workspace from each kinematic scheme. As indicated on Fig. 6, these regions cover a rectangular region of ±5 mm and ±17 deg. The centers of these rectangular regions correspond to the location where the angle and extension perceived by the user should both equal zero for the given kinematic scheme. These are considered as home poses and have been marked on Fig. 6 with + symbols. Note that the home pose has a different vertical offset for the two kinematic schemes. As the home pose is associated with alignment of the tactile notches on the S-BAN, two different handle parts were created, with notches in different locations. These handle parts were swapped depending on the kinematic mode being tested in the perceptual experiments, which will be described in the following section.

3 METHODS A – PERCEPTUAL STUDY

While the perceptual characteristics of common haptic stimuli are well investigated (e.g. [11,27] give detailed accounts of vibrotactile perception), the perception of dynamic (changing) shapes has very rarely been studied. The lack of published investigations in this area stems from both the scarcity and the non-uniformity of systems that can provide haptic stimuli of this type. The most related data comes from the study of human identification of shape when grasping or touching static objects [21,22,31]. Note that the typical psychophysical approaches used to test an isolated haptic stimulus [13] do not apply to the S-BAN due to the coupled and co-dependent nature of its two DOFs. To understand how users perceive the dynamic shape stimulus of the S-BAN, we undertook two static perceptual studies in which participants remained stationary and seated.
These studies were designed to understand:

1. Which kinematic mapping option (Mid-Point or Leading-Notch) provides a more accurate representation of control inputs (target angle and target extension).

2. Whether users are more precise at identifying absolute device pose or relative motion between poses.

3. Opinions on the usability of this shape-changing device (e.g. pleasantness, confusion).

In both experiments, the user sits in front of a computer screen with the S-BAN held in their dominant hand. During training, the S-BAN is visible to the user, while in the actual study a cardboard box covers the user's hand and device. A numeric keypad under the non-dominant hand acts as an input device (Fig. 7).

Figure 7: Arrangement of the perceptual study. Participants held the S-BAN in their dominant hand and entered pose choices via a cursor controlled by a numeric pad. This image shows a training phase. In the actual experiment, the user's dominant hand and S-BAN are covered with an opaque box.

In the absolute experiment we investigate how well participants can identify the static pose of the S-BAN after it has moved from the home pose. In the relative experiment, we investigate how well participants can identify the relative motion made by the S-BAN as it moves between two arbitrary poses.

The effective rectangular region of the S-BAN (Fig. 6) covers ±5 mm extension and ±17 deg rotation from the home pose, where the notches of the device align. The ±5 mm extension workspace refers to navigational targets in front of (+) and behind (–) the user. Navigational targets behind the user are useful in cases when a new route is being provided or when a user walks past their target. Given that the main use case of the S-BAN will be when targets are in front of the user, we have focused the perceptual experiments mostly on this region, as reflected in the vertically asymmetric workspaces. This reduced workspace allows fine sampling of interesting regions without significant increases to experiment time (which can have a detrimental effect on user fatigue and concentration).

3.1 Absolute Pose Perception Experiment

The device workspace (±5 mm and ±17 deg) is divided into 35 discrete poses for the absolute experiment, as illustrated in Fig. 8 (left), where the vertical axis refers to device extension (1.67 mm divisions) and the horizontal axis refers to device rotation (5.67 deg divisions). The number of poses (and therefore the size of the divisions) in both the absolute and relative studies was based on a trade-off between sampling resolution and experiment time. As mentioned above, longer perceptual experiments risk a reduction in user concentration and therefore result validity. This is particularly true as the absolute and relative experiments were completed in the same session, taking an average of 1.5 hours.

Figure 8: User input interfaces for the two perceptual studies showing options of device pose (left) or relative motion between poses (middle). Starting poses for the relative motion experiment are shown on the right. The vertical axis of the poses corresponds to device extension, and the horizontal axis corresponds to rotation.

The device begins each trial in the home pose and then moves to a random pose. The participant uses the numeric pad (labelled with arrows) to move a square cursor to what they believe to be the pose of the device on the chart in Fig. 8 (left).
Note that we implement a linear grid (as opposed to a curved grid) for the chart as a generic representation of two independent variables conveyed by the device. This technique for conducting psychophysical experiments in 2D was previously used in vibrotactile and alternative shape-changing systems [44,45]; it provides a universal approach for studying any 2-DOF haptic interface.

Before the experiment, a training phase presents each pose once to the user, with an additional cursor providing a visual indicator of the correct pose. During the actual experiment, each pose (including the home pose itself) is presented three times, with a different pre-defined random order for each participant. This study design leads to 105 total poses per participant. The training and experiment take approximately 30 minutes combined.

3.2 Relative Motion Perception Experiment

In the relative pose experiment, the workspace is divided into 20 discrete starting poses, as shown in Fig. 8 (right). Here, the vertical axis gives 2.5 mm divisions, and the horizontal axis gives 8.5 deg divisions. The coarser grid resolution (compared to the absolute study) is due to the more involved study method, leading to a higher number of trials and longer study time, as described below.

During each trial, the S-BAN initially moves to one of the starting poses and a ready message is displayed on the computer screen. Once the user presses a button on the numeric pad, the S-BAN moves to another pose that is between zero and two pose steps away in each direction (e.g. 9➝17, 4➝14, 1➝3, 19➝20, 18➝18). The user then presses the numeric pad to select the relative motion that they believe the device completed (from the 25 options displayed in Fig. 8, middle). Note that this reporting approach means that the motions between poses 1➝11, 2➝12 and 10➝20 would all have the same relative motion (2 steps backwards). Between 9 and 20 relative motions are presented for each of the 20 starting poses, since some relative motions cannot be achieved, such as moving upwards or left from starting pose 1. The experiment consists of 266 motions in total. In an initial training phase, each relative motion was demonstrated once, with one additional relative motion to show relative motion equivalence for two starting poses. This procedure led to a total of 26 training poses that were distributed among the 20 starting poses (with some repetition). The combined relative motion training and experiment takes approximately 1 hour.

4 RESULTS A – PERCEPTUAL STUDY

10 participants (6 female, average age 30.4, standard deviation 4.94) took part in the perceptual study. These participants were divided into two equal groups who used either the Mid-Point or Leading-Notch kinematic mapping for both the absolute and relative studies. The order of the studies (absolute or relative first) was alternated between subsequent participants. The average outcomes of the absolute and relative studies are illustrated in Figures 9 and 10, respectively.

Figure 9: Absolute pose perceptual study results showing estimation error for each pose, an interpolated version of the error distribution to highlight spatial error trends, and a directional error quiver plot that shows the average directions and magnitudes of where users believed the stimulus to be located. Mean errors are indicated on each color bar.
Figure 10: Relative Motion Error illustrates user estimation error (left column), an interpolated version of this to highlight spatial error trends (middle column) and a plot of error direction (right column). Starting Pose Error shows the effect of starting pose on the relative motion error (left column), along with interpolated results (right column).

4.1 Absolute Study Quantitative Results

Fig. 9 (left column) displays the user estimation error for each absolute pose (from the grid of discrete poses in Fig. 8, left). The units of the error are the number of steps between the poses, where one step corresponds to 1.67 mm / 5.67 deg. To make the spatial patterns of error shading easier to interpret, bi-cubic interpolation (resolution scale factor 30) was applied to the estimation error matrix (Fig. 9, central column). Finally, the rotation (X) and extension (Y) components of the error have been averaged for each pose to create a quiver plot (Fig. 9, right column), which shows the direction of pose estimation error, or rather, the averaged location of the estimated pose.

The mean error of the Mid-Point (MP) kinematic mode is 1.36 steps (2.27 mm, 7.71 deg), while the mean error for the Leading-Notch (LN) mode is 1.52 steps (2.54 mm, 8.62 deg), indicating higher overall pose estimation accuracy with the MP method. The standard deviations of the absolute errors are 0.50 steps and 0.65 steps for MP and LN, respectively. A paired t-test was performed on the absolute error values for MP and LN; the errors were paired by workspace location across the two methods. The t-test showed that the difference in absolute errors between methods was not significant (t(68) = –1.1292, p = 0.263), where significance is considered as p < 0.05. This result is not wholly surprising given that the mean error values are similar across the two cases. However, we do see some differences in the distribution of errors for each kinematic mode.

We note that the lowest errors for the MP mode occurred along the cardinal directions (the central vertical and horizontal axes), which have been indicated with dotted lines in the interpolated plots. The lowest overall error for the MP mode was at the home pose. For the LN mode, the horizontal axis (where extension = zero steps) is less clearly distinguished from other regions, and users seemed to have more difficulty discerning when the device was at -1, 0 or +1 extension steps, particularly when rotation was ±3 steps. Indeed, the lowest error for the LN mode was at one step in front of the home position. The trend of error direction arrows to point towards a greater extension for LN further demonstrates a general uncertainty regarding device extension estimation for this kinematic mode. Conversely, the directional errors of MP tend to point more laterally towards the center line (rotation = 0 steps), indicating more lateral uncertainty. For both kinematic modes, the greatest overall errors were at negative extension and full left/right rotation, though the errors at these poses were greater for LN than for MP.

4.2 Relative Study Quantitative Results

The results of the relative motion study are presented in Fig. 10, which shows estimation error for each relative motion and starting position. The mean relative motion error for MP is 0.71 steps (1.78 mm, 6.04 deg) and for LN is a bit higher at 0.91 steps (2.28 mm, 7.74 deg), where one step corresponds to 2.5 mm / 8.5 deg. A paired t-test between the elements of the relative error matrices shows that this difference is very close to statistically significant (t(X) = Y, p = 0.0503). The standard deviations of the relative results are 0.359 and 0.349 steps for MP and LN, respectively.
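As a point of reference for the paired comparisons used throughout this section, the sketch below shows roughly how such a location-paired t-test can be reproduced. It is a hedged illustration only: the error arrays, random placeholder values and variable names are assumptions standing in for the per-location mean error matrices produced by the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-location mean estimation errors (in grid steps) for the two
# kinematic modes, averaged over the participants in each group. The two arrays
# must share a shape so that errors can be paired by workspace location.
errors_mp = np.random.default_rng(0).uniform(0.3, 1.5, size=(4, 5))          # placeholder data
errors_ln = errors_mp + np.random.default_rng(1).normal(0.1, 0.2, size=(4, 5))

# Pair the errors location-by-location and run a paired (dependent-samples) t-test.
t_stat, p_value = stats.ttest_rel(errors_mp.ravel(), errors_ln.ravel())
print(f"t({errors_mp.size - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
```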
For the MP kinematic mode, we once again observe that the lowest errors (of relative motion estimation) occur along the two cardinal axes. For LN, however, this trend is limited to only the extension axis, implying uncertainty with rotation estimation. In the directional plots for LN, the errors appear to generally point towards the center, implying underestimation of motion magnitude. The arrows are of smaller magnitude and lower directional consistency for MP, though some symmetry is certainly observable.

The starting pose error results show generally consistent results for the MP mode, with a slight reduction in error when the device becomes fully extended. For the LN mode, the greatest errors occur when the device begins at the home pose, one step behind the home pose, or the distal corners (when extension = 2 steps). The mean starting position error for MP is 0.67 and for LN is higher at 0.86. A paired t-test between the starting pose error matrices shows that this difference is statistically significant (t(X) = Y, p = 5.92×10⁻⁵), indicating that the starting pose affects perception more for LN than MP. The standard deviations of these results are 0.124 and 0.145 steps for MP and LN, respectively.

4.3 Qualitative Results

After each session, participants completed a Likert-scale questionnaire with space for comments. The results of the Likert scale are provided in Table 1.

Table 1: Mean Likert-scale results for the Absolute Pose and Relative Motion perceptual studies. (Scale: 1 = Strongly Disagree, 3 = Neutral, 5 = Strongly Agree.)

Question | # | Absolute MP | Absolute LN | Relative MP | Relative LN | std dev
Using the device was confusing | 1 | 2.25 | 2.25 | 3.00 | 2.80 | 0.33
I found the experiment physically tiring | 2 | 2.20 | 1.80 | 3.40 | 2.40 | 0.59
I found the experiment mentally tiring | 3 | 2.00 | 2.00 | 3.40 | 2.60 | 0.57
Left/right was easy to interpret | 4 | 4.00 | 4.60 | 3.00 | 3.40 | 0.61
Forward/backward was easy to interpret | 5 | 4.10 | 3.20 | 3.40 | 4.60 | 0.56
Combined instructions were easy to interpret | 6 | 3.60 | 3.40 | 2.60 | 3.40 | 0.38
I enjoyed using the device | 7 | 4.00 | 4.60 | 3.90 | 4.40 | 0.29
I found the device annoying | 8 | 1.80 | 1.60 | 2.20 | 1.20 | 0.36
I felt I could trust the device | 9 | 3.80 | 4.40 | 3.20 | 4.20 | 0.46
I felt like the instructions were precise | 10 | 3.70 | 3.80 | 3.40 | 4.00 | 0.22
I would like to try being guided while walking | 11 | 3.80 | 4.80 | 3.40 | 4.20 | 0.52
I feel like it could guide me in an urban situation | 12 | 3.60 | 3.60 | 3.40 | 3.80 | 0.14

Users appeared to find the LN kinematic mode marginally easier to interpret, which is contrary to the quantitative results presented above. For example, in the absolute study, the left/right directions were considered easier to interpret for LN than MP, though Fig. 9 indicates that the opposite was true.

The reader is reminded that individuals completed the study with either the LN or MP mode, so comparative opinions between MP and LN were not possible. The length of the study (which involved the presentation of 371 poses over 90 minutes) made it impractical for each person to test both modes in both the absolute and relative studies. Indeed, the mental and physical fatigue reported by participants is likely to be due to this study length rather than specific device characteristics.
Considering that both modes were implemented on the previously untested S-BAN device, it may be noted that average user opinions are positive, indicating that overall, participants found the S-BAN pleasant to use and would trust it for embodied guidance.

The main comments from users addressed the fact that they could interpret the general region of a pose or movement but struggled with precision. In the absolute pose study, one user commented: "I could tell some information was in the upper left quadrant, but couldn't tell exactly if the device had moved more forward or more to the left." Also, "The [difference between a] slight right and a large forward right is sometimes confusing" and "I could tell the overall direction most of the time but I could not tell how many steps to the left/right/up/down."

Similar comments for the relative study also indicated that general pose was relatively easy to interpret, but the exact extension and rotation were more difficult to pinpoint: "The amount of left / right information was a little difficult to interpret" and "I feel like I had some difficulty discriminating between directily [sic] 1 space L or R vs 1 space L/R combined with 1 space backwards." In addition, some users commented that the relative experiment required greater concentration to avoid accidentally reporting on absolute pose. For example: "I had to try hard to remember the motion rather than answering based on the current configuration. Sometimes I felt that I forgot the motion if I didn't focus hard."

One user summarized their experience of both studies by stating: "I trust the device and feel like it knows precisely where it wants to guide me… however, I'm not sure if I'm precise enough to understand/catch the exact location." Another user concluded that "with more training I can think of using such a device in an urban setting."

4.4 Perceptual Study Final Remarks

The quantitative results have indicated that the Mid-Point kinematic mode enables users to identify both absolute pose and relative motion with a higher level of accuracy than the Leading-Notch mode. Considering the mean error of the two studies for MP, the absolute mean error is 1.36 steps, which equates to 2.27 mm / 7.71 deg, while the relative mean error is 0.71 steps, which equates to 1.78 mm / 6.04 deg. Therefore, we can consider that participants were better overall at perceiving relative motion of the device, though the low absolute error value at the home pose indicates that users should be able to recognize when they have reached a target.

The distinction between kinematic modes is interesting in terms of device perception. The Mid-Point mode uses the overall motion of the end effector as the main communication option, while the Leading-Notch mode focuses more on the tactile cues of the notches. The better performance of the MP mode may suggest that perceiving the overall device shape is somewhat more effective than relying on localized tactile cues for the S-BAN, though clearly both have a role in this system.

Given these final observations, the Mid-Point kinematic mode was selected for S-BAN use in the navigation study described in the following sections.

5 METHODS B – NAVIGATION STUDY

While the perceptual study demonstrated that users are able to sufficiently identify the pose and motion of the S-BAN, a navigation study was implemented to confirm that the device could provide spatial information in an embodied guidance application.
With this study, we also wished to observe how user performance with the haptic device compared to standard visual techniques of navigation, e.g. following an agent or using a handheld tool with visual instructions (which serves as a proxy for a smartphone). We also aimed to test the effectiveness of a combined tool that provided visual and haptic feedback (which may be considered as a virtual prototype of a shape-changing smartphone). In particular, we were interested in seeing how the various conditions affect user visual attention, given that this is a major concern for screen-based interfaces, as discussed in Section 1.

Note that though the virtual reality environment allows presentation of visual navigation information directly in the user's view via a heads-up display, we consider a smartphone proxy more relevant to current navigation trends due to the ubiquitous nature of smartphones and the currently low commercial success of pedestrian AR headset technology [59]. Furthermore, in [37], heads-up displays on Google Glass and smartphones were shown to be equally effective at providing navigation information. Finally, smartphone proxies enable us to evaluate the distraction concerns of smartphone screens that we previously highlighted in Section 1.

5.1 Virtual Reality Hardware

When Spiers et al. conducted outdoor embodied navigation experiments with the Animotus, the authors commented that a high degree of user uncertainty and confusion was caused by inaccurate GPS readings (errors of 2–7 m) and slow update rates [44]. These problems led to erroneous navigation cues since the user was sometimes several meters away from their GPS-reported position. The temporal and spatial variety of GPS localization errors led to inconsistent experiment conditions between users and trials. To avoid such problems, we chose to run our experiments in virtual reality, which also permitted full flexibility of additional experimental factors, such as all visual stimuli seen by users as well as the environment layout and size.

We made use of an Oculus Quest system as our virtual reality interface. The Oculus Quest headset does not require external beacons for localization and instead uses cameras built into the headset for this task. The headset is also fully wireless, which allows unlimited user body rotation while in use. The headset continually detects the 6-DOF pose of two handheld Oculus Touch controllers. The right-hand controller was attached to the top of the S-BAN to enable fast and accurate tracking of the haptic device in the VR environment (Fig. 11, left). Attachment was achieved via modification of the 3D-printed S-BAN handle top part (shown in Fig. 3) to securely interface with the ring of the Oculus Touch controller in a way that enables users to hold the S-BAN without interference. Each Oculus Touch controller has a mass of 170 g. This weight did not seem to increase user discomfort or fatigue when added to the S-BAN (160 g). While the S-BAN was held in the user's right hand, the unmodified left Oculus controller was held in the user's left hand. This controller was used for moving the user's body in the virtual world. To ensure that users could not see the S-BAN through the gap at the bottom of the VR headset, a card gaze shield was attached to the front of the headset, blocking this view (Fig. 11, right).
Figure 11: To enable accurate device tracking for virtual reality, the S-BAN handle was modified to mount an Oculus Quest Touch controller in a way that would not affect user grasp. During the VR study, participants held the S-BAN in their right hand and an unmodified Touch controller in their left hand for controlling their movement in the VR environment.

5.2 Virtual Environment and Navigation Task

The navigation study involves the user being guided along an invisible path via four waypoints to reach a final destination. To make our study engaging for users, we framed this task as a treasure hunt game led by a dog, who could sniff out a buried bone at the final target. We consider the dog character to have connections to guide dogs, who are highly competent at providing guidance assistance to vision-impaired pedestrians. It should be noted that the main task of a real guide dog is to help a VI person avoid local obstacles and hazards, while the owner is responsible for global route determination (i.e. choosing the destination). Our dog character is different from actual guide dogs in that the virtual dog provides navigation to an unknown target in an environment with no obstacles or hazards.

The user's goal in the study is to follow the dog from the starting location along the unseen path via the four waypoints. This task is repeated four times for each guidance condition (Fig. 12).

Figure 12: The VR study setup (left image) and views of the VR environment under the navigation conditions. The visual agent is an animated dog that leads the user to the target (a buried bone). The dog is visible only in the natural vision condition, in which it is also attached to the user's right hand by a leash. The visual and haptic conditions involve the respective navigation device either visually or haptically pointing at the invisible dog. The fourth condition involves the visual and haptic devices working simultaneously, which is graphically the same as the visual device.

The four conditions are:

1. Natural Vision – The dog is visible and connected to the user's right hand via a flexible and extendable leash. The dog acts as an agent that the user follows.

2. Visual Device – The dog is invisible, and a visual device (a black rectangle attached to the user's right hand) displays an arrow that indicates the dog's location.

3. Haptic Device – The dog is invisible, and no device or leash is shown in the user's right hand. The S-BAN is active and provides the user with unseen haptic cues to indicate the dog's location.

4. Haptic and Visual Device – The dog is invisible, and the visual device is displayed in the user's right hand. The arrow and the S-BAN provide the same information.

In conditions 2, 3 and 4, a device is used to communicate the location of the dog. In all cases this information is considered in terms of direction (heading) and distance from the user to the dog. Heading is calculated and displayed relative to the current orientation of the hand-held device. These two parameters are represented haptically by the angle and extension of the S-BAN relative to the home pose. For the visual tool, an arrow is displayed on the surface of the device to represent the same information. For equivalency between conditions, the arrow can rotate and extend to the same degree as the S-BAN, which is ±17 deg and ±5 mm (relative to a starting arrow length of 10 mm).
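As an illustration of this mapping, the sketch below converts the dog's position into a device-relative heading and distance, then into the angle and extension commands described above. The clipping limits come from the workspace figures quoted in the text; the distance-to-extension scaling, the coordinate conventions, the handling of targets (targets behind the user are omitted for brevity) and all function and variable names are assumptions for illustration, not the study's actual implementation.

```python
import math

ANGLE_LIMIT_DEG = 17.0        # workspace limits stated in the text
EXTENSION_LIMIT_MM = 5.0
FULL_SCALE_DISTANCE_M = 20.0  # assumed distance that maps to full extension


def guidance_command(device_xy, device_yaw_deg, target_xy):
    """Convert a target position into a device-relative angle and extension.

    device_xy and target_xy are (x, y) positions in the virtual world (metres),
    and device_yaw_deg is the hand-held device's heading. Returns (angle_deg,
    extension_mm), clipped to the S-BAN workspace, with the heading expressed
    relative to the device's current orientation as described in the text.
    """
    dx = target_xy[0] - device_xy[0]
    dy = target_xy[1] - device_xy[1]
    distance = math.hypot(dx, dy)

    # Bearing to the target, made relative to the device and wrapped to [-180, 180).
    bearing = math.degrees(math.atan2(dx, dy))
    relative = (bearing - device_yaw_deg + 180.0) % 360.0 - 180.0

    angle = max(-ANGLE_LIMIT_DEG, min(ANGLE_LIMIT_DEG, relative))
    extension = EXTENSION_LIMIT_MM * min(distance / FULL_SCALE_DISTANCE_M, 1.0)
    return angle, extension


# Example: dog 5 m ahead and slightly to the right of where the device points.
print(guidance_command((0.0, 0.0), 0.0, (1.0, 5.0)))   # -> roughly (11.3 deg, 1.3 mm)
```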
The output of both the haptic tool and the visual tool updates constantly based on the position and orientation of the user's hand. For each trial, the dog begins the study by barking (providing an audio start cue) and running to the next waypoint, where it waits for the user. Once the user has arrived within a 5 m radius of the active waypoint, the dog sniffs and then runs toward the next waypoint. Once the dog and human find the final waypoint, the dog digs up the buried bone.

The study begins with a general tutorial where users are first made familiar with moving in the virtual environment. Users are able to move their head to look around and change their orientation in the virtual world. Movement of the user's body through the world is achieved with the left controller. Though participant movement was originally planned as typical continuous walking, this was found to induce motion sickness and/or fatigue in some users during pilot studies. Instead, we opted for a teleportation method, where the left controller thumbstick is used to point to a location on the ground (marked by a crosshair) and pulling the trigger button teleports the user to this location (Fig. 12). The maximum distance that a user can teleport is 5 m.

During the tutorial, the user is also shown how the guidance devices respond to changes in dog position relative to the user: the (visible) dog walks away from the user in the forward-left, forward, and forward-right directions, and then the dog walks in a circle around the user while both the haptic and visual devices are activated.

Following the tutorial, the study is arranged into four blocks, each corresponding to one guidance condition, with the order varied and counter-balanced between participants. Each block begins with an initial refresher training on the relevant modality followed by a practice trial, where (for conditions 2–4) the dog can be made temporarily visible as the user traverses the same training path. For conditions 2, 3, and 4, the dog remains invisible for the remaining four trials of each block. For these trials, the same four 100 m target paths are used (Fig. 13). Path order is randomized within each block, and entire paths are randomly mirrored to increase variation. In addition, the user is teleported to a random position and orientation after each trial, with each path also translated and rotated to match. Using fixed paths instead of randomly generated paths ensures that some users or conditions do not experience more straight or spiraling paths than others, which could introduce bias in behavior. Each user completes 16 non-training trials in total. Their body location in the virtual environment and head pose are logged during these trials.

Figure 13: The four sets of target waypoints, which are randomly mirrored in each trial to provide variation within the VR environment. Each resulting path is 100 m long and is shown relative to the user's random starting pose. Each participant completes all four paths once for each navigation condition.

The study takes approximately 90 minutes to complete, with 5-minute breaks enforced between guidance conditions.

6 RESULTS B – NAVIGATION STUDY

The study was completed by 12 participants (7 female, average age 27.8), leading to a total of 192 trials. One additional participant (P3) was unable to complete the study due to a headset battery malfunction, so their data were discarded.
Several metrics were used to analyze the resultant data, as described in this section.

6.1 Motion Efficiency Analysis

Figure 14 gives a sample of walking paths along route B for each guidance condition for participants 2 and 5. One may observe that there are greater variations between the walking paths for P2, with large diversions for the latter waypoints. Movement efficiency (ME) provides a metric for quantifying these diversions from the optimum path [44,46]. It is calculated as optimum path length divided by user path length. Motion efficiency for each user/trial is provided in Appendix B, and these results are summarized in the boxplot of Fig. 15 (left).

Figure 14: Example user motion paths for waypoint set B from participants 2 and 5. Motion paths are shown for all guidance conditions. The paths have been mirrored appropriately to allow comparison and are expressed relative to the user's starting location in each trial.

Figure 15: Boxplots showing (left) movement efficiency and (right) time taken to complete trials. Each trial involved a target path of the same 100 m length.

Repeated measures ANOVA analysis was performed across device conditions (in MATLAB 2018a via the ranova command, where participant/path combinations are predictor variables and the response variable is motion efficiency for each device condition). The repeated measures ANOVA showed significant differences (F(3,108) = 4.1301, p = 0.008). Paired t-tests were used to compare the movement efficiency of participants for the different interface modalities. Due to the repeated nature of testing, a Bonferroni correction was used, which set the alpha value as 0.05/6 = 0.0083. Using this value, no pairwise comparisons produced significant results. This lack of significance implies that no modalities were significantly better or worse than others, when considering their impact on user movement efficiency.

6.2 Trial Time Analysis

The time taken to complete each trial is presented in Fig. 15 (right). Here we can see that navigation with the haptic device takes longer than methods with a visual component. ANOVA analysis indicated significant differences (F(3,108) = 21.569, p = 5.0719×10⁻¹¹). Paired t-tests using a Bonferroni correction indicated significance for all comparisons, with natural vision being fastest.

6.3 Head Motion Analysis

Though we do not have eye-tracking technology within the Oculus Quest VR headset, we can use the pose of the head in space to gain insight into user visual attention. In particular, we are interested in whether users spent most of their time focused on the environment around them, or on a tool in their hands. Fig. 16 presents a scatter plot of all recorded head vertical angles and lateral error angles (relative to the location of the dog) at each point in time, for the four conditions. The mean head pose is highlighted as a white circle on each plot, and a dotted line is shown at -20 deg elevation as a reference. The distributions of the per-trial means and standard deviations of these measurements are also reflected in the boxplots of Fig. 17.

Figure 16: Scatterplot of all user head poses for each of the four conditions. With the haptic interface, users spend less time looking down at the visual device in their right hand, and more time looking around the environment with their head elevated towards the horizon.
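For reference, the head-pose measures plotted in Fig. 16 and summarized in Fig. 17 can be derived from logged headset poses roughly as follows. This is a hedged sketch that assumes the head orientation is available as a forward (gaze) unit vector and that the dog's position is known at each sample; the coordinate conventions and all names are illustrative, not taken from the study's analysis code.

```python
import numpy as np


def head_angle_errors(head_pos, head_forward, target_pos):
    """Lateral and vertical head-angle measures for one logged sample.

    head_pos and target_pos are 3-D positions (x, y, z with z up, metres), and
    head_forward is the unit vector along the user's gaze. Returns the lateral
    error angle between the gaze heading and the direction to the target, and
    the gaze elevation angle relative to the horizon (both in degrees).
    """
    to_target = np.asarray(target_pos, dtype=float) - np.asarray(head_pos, dtype=float)

    # Lateral (horizontal-plane) error: angle between gaze and target headings.
    gaze_yaw = np.degrees(np.arctan2(head_forward[1], head_forward[0]))
    target_yaw = np.degrees(np.arctan2(to_target[1], to_target[0]))
    lateral_error = (gaze_yaw - target_yaw + 180.0) % 360.0 - 180.0

    # Vertical angle: elevation of the gaze vector above (+) or below (-) the horizon.
    horizontal = np.hypot(head_forward[0], head_forward[1])
    vertical_angle = np.degrees(np.arctan2(head_forward[2], horizontal))
    return lateral_error, vertical_angle


# Example: user looking 20 deg downward, dog roughly 10 m away and slightly to the left.
fwd = np.array([np.cos(np.radians(-20.0)), 0.0, np.sin(np.radians(-20.0))])
print(head_angle_errors([0.0, 0.0, 1.7], fwd, [9.5, 1.5, 0.0]))   # -> about (-9.0, -20.0)
```

Per-trial means and standard deviations of these two angles, computed over all samples in a trial, would then yield the quantities compared statistically below.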
Figure 17: Boxplots showing the mean (top row) and standard deviation (bottom row) of the horizontal (left) and vertical (right) head angle error from the target per trial for the four tested conditions. Mean horizontal head angle (top left) is comparable across conditions, though the standard deviation of this angle (bottom left) trends higher for the haptic tool. The mean vertical head angle (top right) is more elevated for the two haptic tool conditions, even compared to natural vision. The standard deviation of the head vertical angle is higher for natural vision than for the three tool-based conditions.

For the visual tool, attention is focused much lower in the environment, in the region of the handheld device. In comparison, participants' head motion is higher and more laterally distributed for the haptic tool, suggesting that their attention is on visual appreciation of the environment, though this trend could also be due to travelling in sub-optimal directions to reach targets. Interestingly, the average head poses for natural vision and the visual + haptic tool are similar.

For the purposes of statistical analysis, we calculated the average horizontal and vertical head angle for each trial. Repeated-measures ANOVA indicated that the differences in mean horizontal angle (F(3,30) = 1.736, p = 0.181) were not significant across conditions, while the differences in mean vertical angle (F(3,30) = 2.883, p = 0.052) trended toward significance.

Given the potential for symmetry in head pose, we considered the standard deviation of each trial to also be a valuable metric, as it illustrates how head pose variance may differ between conditions. Therefore, repeated-measures ANOVA was also conducted on the per-trial standard deviation of horizontal (F(3,30) = 2.333, p = 0.094) and vertical (F(3,30) = 8.216, p = 3.87×10⁻⁴) head pose. In this case the differences in vertical head pose across conditions were highly significant.

Independent (unpaired) t-tests were completed for post-hoc comparisons of vertical gaze. As in the motion efficiency analysis, the Bonferroni correction set the alpha value at 0.0083. For mean head pose, no horizontal comparisons were significant, but vertical comparisons showed significance between Natural Vision and the Haptic Tool (t(22) = -4.752, p = 9.612×10⁻⁵) and between the Visual Tool and the Haptic Tool (t(22) = -5.2108, p = 3.167×10⁻⁵).

T-tests of standard deviation showed no significant differences for the horizontal comparisons. For the vertical comparisons, the pairwise differences between Natural Vision and Visual Tool (t(22) = -1.138, p = 2.615×10⁻⁴), Natural Vision and Haptic Tool (t(22) = 6.7998, p = 7.8428×10⁻⁷), and Natural Vision and Visual + Haptic Tool (t(22) = 4.342, p = 2.4079×10⁻⁶) were all significant. A complete table of t-tests between all conditions is given in the appendix (Section 9.3).

6.4 Survey Results

A Likert-scale survey was presented to each participant during the breaks between blocks of trials, where each block was associated with a guidance condition. The results of this survey are presented in Table 2. Questions that referred to device use were omitted for the natural vision condition, as there was no device.

Table 2: Mean Likert-scale results for the navigation study. A response of 1 corresponds to Strongly Disagree while 5 corresponds to Strongly Agree.
Question                                               #   Natural  Visual  Haptic  Visual +  Std
                                                           Vision   Tool    Tool    Haptic    Dev
I understood what I was supposed to do                 1   4.83     4.92    4.50    4.83      0.16
Using the device was confusing                         2   -        1.42    1.50    1.33      0.07
I found the experiment easy                            3   4.75     4.75    4.25    4.33      0.23
I found the experiment physically tiring               4   1.33     1.67    1.92    1.42      0.23
I found the experiment mentally tiring                 5   1.17     1.58    2.00    1.42      0.30
I found the experiment boring                          6   2.08     1.83    1.42    1.58      0.25
Left/right was easy to interpret                       7   -        4.75    4.58    4.75      0.08
Forward/backward was easy to interpret                 8   -        3.83    3.50    3.75      0.14
Combined instructions were easy to interpret           9   -        4.17    3.75    4.00      0.17
I enjoyed using the device                            10   -        4.25    4.33    4.33      0.04
I found the device annoying                           11   -        1.58    1.50    1.58      0.04
I felt I could trust the device                       12   -        4.33    4.50    4.33      0.08
I felt like the instructions were precise             13   -        4.25    4.08    4.42      0.14
I would like to try being guided while walking        14   -        3.75    4.00    4.17      0.17
I feel like it could guide me in an urban situation   15   -        3.92    3.67    3.83      0.10

Once again, the opinions of the different modalities did not vary greatly for most of the questions, with the mean responses for every condition falling on the same side of agreement or disagreement for each statement. The standard deviation column, computed across the condition means for each question, highlights the spread of results. The highest standard deviations, and therefore the largest differences between conditions, came from questions 3-6. One finding is that users found the experiment easiest with natural vision and the visual tool. Though users disagreed that any of the modalities were physically or mentally tiring, this disagreement was weakest for the haptic tool. The experiment was considered least boring with the haptic device, and users enjoyed the haptic and haptic + vision tools marginally more than the visual tool. Users also stated they had marginally more trust in the haptic device.

Users found the left/right commands easier to interpret than the forward/backward commands for all conditions. The forward/backward commands were considered most difficult with the haptic tool. A preference was given to being guided by the visual + haptic tool while walking, with the least preference given to the visual tool, possibly due to the noted visual distraction from the environment. Contrary to this theory, the visual tool was rated most suitable for guidance in an urban situation, with the haptic device rated least suitable.

As in the perceptual study, participants were invited to leave comments on their experience of completing the study with the different modalities. Participants had few comments on the natural vision modality. One participant commented that they found this modality very easy: "I could predict the destination for the dog & teleport almost simultaneously. The only minor challenge was when the dog was not in my field of view, I had to look around."

For the visual tool, comments reflected the observations of head direction and attention from the previous section, e.g., "I found that I spent the whole trial just watching the arrow. I only looked up to see my surroundings once or twice. I missed out on the nice VR world." Another user described anxiety from being focused on the visual device: "Visual cues are a better solution to be guided for a short time but they require me to always look at the device. This makes me stressful and difficult to be relaxed. Finally, I do not like that I had to look down on the arrows constantly.
I never looked for my surroundings and only focused on the arrow."

The haptic tool provided more mixed commentary. On the positive side we have the following statements: "Depending solely on haptics made me less confused, compared to depending both" and "It was easier to find the invisible dog with this device than just looking for the dog visually." Two users questioned the usefulness of the extension DOF, which relates to the distance from the user to the dog: "I found forward / backwards guide to be very hard to interpret" and "I found the fwd/bwd instructions confusing. I would have preferred having only the left/right information." These comments run counter to the usefulness of distance representation reported in [45]. This difference could be due to individual preference, the method of user movement (teleportation vs. standard walking), or the S-BAN being more difficult to interpret than the Animotus device that was used in [45].

One interesting user comment related to the presentation of information in the study: "I'd prefer a less precise device that doesn't need me to think as I walk since I mostly care whether I should turn into the street on the left or right instead of knowing the exact angle." Here the user appears to be referring to the guidance given by Google Maps or in-car GPS systems, where instructions indicate which turns to make on urban streets. In the open environment of our virtual world, such geographical constraints do not exist and so cannot be used for navigation. It is an interesting avenue of future investigation to consider how well the S-BAN would work in a scenario of discrete turn instructions rather than continuous position-correction updates.

The final condition, the combined visual and haptic tool, elicited a variety of opinions, with some echoes of the previous conditions. These included comments about visual attention: "I noticed the green space and where I was teleporting much less since I was looking at my right hand all the time." and "The arrow threw me off so I mainly just used the haptic device." There were further comments on the benefits of the forward/backward motion: "Only with this experiment I really understood the meaning of the backwards / forwards direction ... I think that left/right direction can do the job alone." and "Forward backward movement of the device was noticed but not used to determine the dog's location." One subject decomposed the distance and rotation stimulus into the two available modalities: "The combination of both methods was pretty easy to understand. Even though, for left/right movement, I focused more on the arrow than the haptics. For forwards/backwards, it was the other way around."

7 DISCUSSION

The two presented studies have evaluated the S-BAN, a handheld device that utilizes a shape-changing body to communicate spatial guidance commands. Haptic shape-changing interfaces are currently rare in the HCI literature, but we consider this technology to have great unexplored potential, with notable benefits over alternative modalities. This new shape-changing device has an ergonomic form factor that fits comfortably in a wide range of user hands. Its novel parallel actuation and continuous workspace enable it to represent a direction vector through a variety of kinematic mappings to the extension and rotation degrees of freedom. Neither of these features was present in previous 2-DOF shape-changing interfaces such as the Animotus [45,46].
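To make this mapping concrete, the sketch below shows one plausible way of converting the user-to-waypoint vector into the angle and extension commands that a kinematic mode then realizes. The clamping limits are the ±17 deg and ±5 mm workspace bounds reported for the S-BAN, but the bearing scaling and the distance at which the extension saturates are assumptions for illustration; the exact mapping used in the studies is not reproduced here.

```python
import math

MAX_ANGLE_DEG = 17.0     # rotation workspace reported for the S-BAN
MAX_EXTENSION_MM = 5.0   # extension workspace (+ ahead of the user, - behind)
DISTANCE_SCALE_M = 20.0  # assumed distance at which the extension saturates

def guidance_command(user_pos, user_yaw, target_pos):
    """Convert the vector to the active waypoint into (angle, extension) inputs
    for the S-BAN's inverse kinematics. Sign conventions and scaling are
    illustrative assumptions, not the mapping used in the user studies."""
    dx, dy = target_pos[0] - user_pos[0], target_pos[1] - user_pos[1]
    bearing = math.atan2(dy, dx) - user_yaw
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    distance = math.hypot(dx, dy)

    # Scale the bearing into the +/-17 deg rotation workspace.
    angle_deg = MAX_ANGLE_DEG * (bearing / math.pi)

    # Encode distance in the extension DOF; the sign flips when the waypoint
    # lies behind the user.
    magnitude = min(distance / DISTANCE_SCALE_M, 1.0)
    sign = 1.0 if abs(bearing) <= math.pi / 2 else -1.0
    extension_mm = MAX_EXTENSION_MM * magnitude * sign
    return angle_deg, extension_mm
```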
The results of the perceptual experiments showed that user sensitivity to the S-BAN's shape is non-uniform over the device workspace. Different kinematic modes led to different regions of high and low absolute-device-pose and relative-device-motion errors. Device users found isolated extensions and rotations of the S-BAN easier to interpret than combined poses, suggesting that device-design improvements are still required to achieve more uniform salience across all poses of the workspace. There seemed to be greater spatial differences between the two kinematic approaches in the relative motion experiments than in the absolute pose experiment. Additionally, the lowest average error was observed in the relative motion experiment with the mid-point kinematic mode. These findings imply that designers of applications of shape-changing technologies must consider whether they want users to infer information from device shape or from change in device shape. An example absolute-pose use case is when we expect a user to pause in their navigation (e.g., to talk to someone or wait for a gap in traffic). Here the device could statically hold a pose, so that the next guidance command is ready when navigation resumes. For relative-motion communication scenarios, we may be using the device to present rapidly updating guidance information, such as helping a VI user stay on a meandering footpath.

Conducting virtual navigation experiments permitted us to study the S-BAN in embodied navigation but with increased control over experimental parameters (such as feedback modality) and less localization noise than with GPS-based outdoor experiments [44]. It is worth noting, however, that the selected virtual reality environment may not be a perfect substitute for embodied real-world navigation experiments, due to the artificial method of moving through the world using joystick controls rather than walking. Nevertheless, the study has provided valuable quantitative data that compare shape-changing haptic feedback to other navigation methods under identical conditions. Though the results (Fig. 15) showed that the shape-changing haptic feedback led to lower motion efficiency and longer trial times, this was in comparison to visual modalities (natural vision or the smartphone proxy), which were already familiar to the participants, all of whom were sighted. The benefit of the haptic feedback was demonstrated in the analysis of user head pose, which illustrated that more attention was spent on the environment than on the navigation device. This difference could be connected to safety in future studies by including hazards that the user must avoid while traveling to the target. The combination of visual and haptic tools provides an interesting use case of a smartphone enhanced by shape-changing cues, as described in Section 1.1 and Figure 3.

The results of both studies show that the S-BAN has promise as a non-visual navigation tool but is likely to require further optimization, increased familiarization time, and, most importantly, a comparison to other haptic feedback modalities (including vibrotactile) in future navigation studies.
8 CONCLUSION

This work has presented and validated the S-BAN, a new shape-changing handheld haptic interface intended for representing spatial data with low attentional demands compared to screen-based navigation tools and more common haptic interfaces. The S-BAN's parallel kinematic structure provides it with a continuous shape-changing workspace in a compact form. The continuous workspace provides flexibility to represent data through various kinematic schemes, two of which have been proposed and detailed in this work.

Little prior data exist on how humans perceive dynamic shapes via touch, particularly from devices like the S-BAN. This knowledge gap led us to carry out a perceptual study, which revealed that participants were better at perceiving relative motion of the shape-changing interface than absolute pose. Furthermore, user sensitivity was highest for poses and motions along the cardinal directions from the home pose. The kinematic method that focused on the overall shape of the S-BAN produced higher accuracy than the mode that focused on the tactile feeling of the notch features. User opinions highlighted that while the general pose and motion of the device were easily perceived, specifics were harder to determine.

To test the ability of the S-BAN to provide spatial guidance in an embodied scenario, and to compare this to visual guidance cues, we undertook a navigation study in a VR environment, which permitted greater reliability and flexibility than the outdoor studies of similar prior work [44]. Differences in user motion efficiency were not significant between any pair of conditions, implying equivalence between using the haptic device and a visual tool. However, significant differences were observed in the time taken to complete the trials between all four conditions (with the S-BAN leading to the slowest trials). This slower pace would perhaps improve as users become more familiar with the device. Finally, we observed significant differences in user head elevation during the navigation study, with users of the haptic device keeping their heads more elevated. As user comments reflected, the haptic device enabled them to look at the world around them, rather than staring at a tool in their hand. We expect that such head posture would result in higher safety for sighted users and greater appreciation of the ambient environment.

The findings of this paper are beneficial for understanding the perception of shape-changing haptic systems and their potential for use in spatial navigation. Indeed, the results are encouraging and lead to the next logical steps of moving from the highly controlled VR environment to more realistic outdoor trials, subject to additional localization noise and environmental distractions and hazards. We plan to begin these trials with sighted participants, to allow comparison to visual guidance cues. We also aim to increase training times and improve the training method to see if these modifications lead to faster trial completion with the haptic device. The strong reduction in visual focus on the handheld tool that we observed with haptic feedback suggests that the S-BAN is well suited to vision-impaired users; this hypothesis will be explored in future work. Finally, we believe that the perceptual methods presented here will allow better evaluation of shape-changing or multi-dimensional haptic technologies in the future.
We hope that our open-sourcing of the S-BAN hardware and code may lead to further improvements in the S-BAN's form and kinematic modes.

REFERENCES

[1] Jason Alexander, Anne Roudaut, Jürgen Steimle, Kasper Hornbæk, Miguel Bruns Alonso, Sean Follmer, and Timothy Merritt. 2018. Grand Challenges in Shape-Changing Interface Research. Conf. Hum. Factors Comput. Syst. - Proc. 2018-April, (2018), 1–14. DOI:https://doi.org/10.1145/3173574.3173873
[2] Erica N. Barin, Cory M. McLaughlin, Mina W. Farag, Aaron R. Jensen, Jeffrey S. Upperman, and Helen Arbogast. 2018. Heads Up, Phones Down: A Pedestrian Safety Intervention on Distracted Crosswalk Behavior. J. Community Health 43, 4 (2018), 810–815. DOI:https://doi.org/10.1007/s10900-018-0488-y
[3] Elaine A Biddiss and Tom T Chau. 2007. Upper Limb Prosthesis Use and Abandonment: A Survey of the Last 25 Years. Prosthet. Orthot. Int. 31, 3 (September 2007), 236–57. DOI:https://doi.org/10.1080/03093640600994581
[4] Alberto Boem, Yuuki Enzaki, Hiroaki Yano, and Hiroo Iwata. 2019. Human perception of a haptic shape-changing interface with variable rigidity and size. 26th IEEE Conf. Virtual Real. 3D User Interfaces, VR 2019 - Proc. (2019), 858–859. DOI:https://doi.org/10.1109/VR.2019.8798214
[5] Johann Borenstein and Iwan Ulrich. 1997. The Guidecane - A Computerized Travel Aid. In Robotics and Automation.
[6] N A Bradley and M D Dunlop. 2005. An Experimental Investigation into Wayfinding Directions for Visually Impaired People. Pers. Ubiquitous Comput. 9, 6 (2005), 395–403. DOI:https://doi.org/10.1007/s00779-005-0350-y
[7] Gianni Campion, Qi Wang, and Vincent Hayward. 2005. The Pantograph Mk-II: A Haptic Instrument. 2005 IEEE/RSJ Int. Conf. Intell. Robot. Syst. IROS (2005), 723–728. DOI:https://doi.org/10.1109/IROS.2005.1545066
[8] Cesar Flores Cano and Anne Roudaut. 2019. MorphBenches: Using mixed reality experimentation platforms to study dynamic affordances in shape-changing devices. Int. J. Hum. Comput. Stud. 132, July (2019), 1–11. DOI:https://doi.org/10.1016/j.ijhcs.2019.07.006
[9] Seungmoon Choi and Katherine J. Kuchenbecker. 2013. Vibrotactile Display: Perception, Technology, and Applications. Proceedings of the IEEE 101, 2093–2104. DOI:https://doi.org/10.1109/JPROC.2012.2221071
[10] Jean-Philippe Choiniere and Clement Gosselin. 2016. Development and Experimental Validation of a Haptic Compass based on Asymmetric Torque Stimuli. Trans. Haptics 10, 1 (2016), 29–39. DOI:https://doi.org/10.1109/TOH.2016.2580144
[11] Roger W Cholewiak and Amy A Collins. 2003. Vibrotactile Localization on the Arm: Effects of Place, Space, and Age. Percept. Psychophys. 65, 7 (October 2003), 1058–77. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/14674633
[12] Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and H Ishii. 2013. inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation. Uist (2013), 417–426. DOI:https://doi.org/10.1145/2501988.2502032
[13] George A Gescheider. 1985. Psychophysics: Method, Theory and Application. Lawrence Erlbaum Associates, Hillsdale, NJ, 37–60.
[14] Brian Gleeson, Scott Horschel, and William Provancher. 2010. Design of a Fingertip-Mounted Tactile Display with Tangential Skin Displacement Feedback. IEEE Trans. Haptics 3, 4 (October 2010), 297–301. DOI:https://doi.org/10.1109/TOH.2010.8
[15] F Hemmert, S Hamann, and M Löwe. 2010. Take me by the hand: haptic compasses in mobile devices through shape change and weight shift. Proc. 6th Nord. Conf. Human-Computer Interact. (2010). Retrieved January 25, 2015 from http://dl.acm.org/citation.cfm?id=1869001
[16] Fabian Hemmert, Susann Hamann, Josefine Zeipelt, and Gesche Joost. 2010. Shape-Changing Mobiles: Tapering in Two-Dimensional Deformational Displays in Mobile Phones. CHI 2010 (2010), 3075–3079.
[17] Sungjune Jang, Lawrence H. Kim, Kesler Tanner, Hiroshi Ishii, and Sean Follmer. 2016. Haptic Edge Display for Mobile Tactile Interaction. Conf. Hum. Factors Comput. Syst. - Proc. (2016), 3706–3716. DOI:https://doi.org/10.1145/2858036.2858264
[18] Tomoki Kamiyama, Mitsuhiko Karashima, and Hiromi Nishiguchi. 2019. Proposal of New Map Application for Distracted Walking When Using Smartphone Map Application. In Advances in Intelligent Systems and Computing. DOI:https://doi.org/10.1007/978-3-319-96089-0_36
[19] Silke M Kärcher, Sandra Fenzlaff, Daniela Hartmann, Saskia K Nagel, and Peter König. 2012. Sensory Augmentation for the Blind. Front. Hum. Neurosci. 6, March (January 2012), 37. DOI:https://doi.org/10.3389/fnhum.2012.00037
[20] Sa Reum Kim, Dae Young Lee, Je Sung Koh, and Kyu Jin Cho. 2016. Fast, compact, and lightweight shape-shifting system composed of distributed self-folding origami modules. 2016-June, (2016), 4969–4974. DOI:https://doi.org/10.1109/ICRA.2016.7487704
[21] R L Klatzky and S J Lederman. 1995. Identifying Objects from a Haptic Glance. Percept. Psychophys. 57, 8 (1995), 1111–1123. DOI:https://doi.org/10.3758/BF03208368
[22] R L Klatzky, J M Loomis, S J Lederman, H Wake, and N Fujita. 1993. Haptic Identification of Objects and their Depictions. Percept. Psychophys. 54, 2 (1993), 170–178. DOI:https://doi.org/10.3758/BF03211752
[23] SJ Lederman. 2009. Haptic Perception: A Tutorial. Atten. Percept. Psychophys. 71, 7 (2009), 1439–1459. DOI:https://doi.org/10.3758/APP
[24] Jose V. Salazar Luces, Kanako Ishida, and Yasuhisa Hirata. 2019. Human Position Guidance Using Vibrotactile Feedback Stimulation Based on Phantom-Sensation. 2019 IEEE Int. Conf. Cyborg Bionic Syst. CBS 2019 (2019), 235–240. DOI:https://doi.org/10.1109/CBS46900.2019.9114479
[25] Thomas H Massie and J K Salisbury. 1994. The PHANTOM Haptic Interface: A Device for Probing Virtual. Proc. ASME Winter Annu. Meet. Symp. Haptic Interfaces Virtual Environ. Teleoperator Syst. (1994), 1–6.
[26] John C Mcclelland, Johann Felipe Gonzalez Avila, Robert J Teather, Pablo Figueroa, and Audrey Girouard. 2019. Adaptic: A Shape Changing Prop with Haptic Retargeting. July (2019), 2–3.
[27] Miyuki Morioka and Michael J. Griffin. 2005. Thresholds for the Perception of Hand-Transmitted Vibration: Dependence on Contact Area and Contact Location. Somatosens. Mot. Res. 22, 4 (2005), 281–297. DOI:https://doi.org/10.1080/08990220500420400
[28] Saskia K Nagel, Christine Carl, Tobias Kringe, Robert Märtin, and Peter König. 2005. Beyond Sensory Substitution - Learning the Sixth Sense. J. Neural Eng. 2, 4 (December 2005), R13-26. DOI:https://doi.org/10.1088/1741-2560/2/4/R02
[29] Jack Nasar, Peter Hecht, and Richard Wener. 2008. Mobile Telephones, Distracted Attention, and Pedestrian Safety. Accid. Anal. Prev. 40, 1 (January 2008), 69–75. DOI:https://doi.org/10.1016/j.aap.2007.04.005
[30] Jack L Nasar and Derek Troyer. 2013. Pedestrian Injuries due to Mobile Phone use in Public Places. Accid. Anal. Prev. 57, (August 2013), 91–5. DOI:https://doi.org/10.1016/j.aap.2013.03.021
[31] J Farley Norman, Hideko F Norman, Anna Marie Clayton, Joann Lianekhammy, and Gina Zielke. 2004. The Visual and Haptic Perception of Natural Object Shape. Percept. Psychophys. 66, 2 (2004), 342–351. DOI:https://doi.org/10.3758/BF03194883
[32] Ian Oakley and Junseok Park. 2008. Did You Feel Something? Distracter Tasks and the Recognition of Vibrotactile Cues. Interact. Comput. 20, 3 (May 2008), 354–363. DOI:https://doi.org/10.1016/j.intcom.2007.11.003
[33] Claudio Pacchierotti and Domenico Prattichizzo. 2015. Displaying Sensed Tactile Cues with a Fingertip Haptic Device. (2015).
[34] T Louise-Bender Pape, J Kim, and B Weiner. 2002. The Shaping of Individual Meanings Assigned to Assistive Technology: a Review of Personal Factors. Disabil. Rehabil. 24, (2002), 5–20.
[35] Martin Pielot, Benjamin Poppinga, Wilko Heuten, and Susanne Boll. 2011. A tactile compass for eyes-free pedestrian navigation. Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics) 6947 LNCS, PART 2 (2011), 640–656. DOI:https://doi.org/10.1007/978-3-642-23771-3_47
[36] Majken K Rasmussen, Esben W Pedersen, Marianne G Petersen, and Kasper Hornbæk. 2012. Shape-Changing Interfaces: A Review of the Design Space and Open Research Questions. Proc. SIGCHI Conf. Hum. Factors Comput. Syst. (2012), 735–744.
[37] Simon Robinson, Matt Jones, Parisa Eslambolchilar, Roderick Murray-Smith, and Mads Lindborg. 2010. I did it my way: Moving away from the tyranny of turn-by-turn pedestrian navigation. ACM Int. Conf. Proceeding Ser. October (2010), 341–344. DOI:https://doi.org/10.1145/1851600.1851660
[38] Anne Roudaut, Rebecca Reed, Tianbo Hao, and Sriram Subramanian. 2014. Changibles: Analyzing and Designing Shape Changing Constructive Assembly. (2014), 2593–2596.
[39] Shuyong Shao. 1985. On Mobility Aids for the Blind. DOI:https://doi.org/10.1177/1071181378022001143
[40] Jotaro Shigeyama, Takuji Narumi, Takeru Hashimoto, Tomohiro Tanikawa, Shigeo Yoshida, and Michitaka Hirose. 2019. Transcalibur: A Weight Shifting Virtual Reality Controller for 2D Shape Rendering based on Computational Perception Model. Conf. Hum. Factors Comput. Syst. - Proc. (2019), 1–11. DOI:https://doi.org/10.1145/3290605.3300241
[41] Kristen Shinohara and Jacob O. Wobbrock. 2011. In the Shadow of Misperception. Proc. 2011 Annu. Conf. Hum. factors Comput. Syst. - CHI '11 (2011), 705. DOI:https://doi.org/10.1145/1978942.1979044
[42] Adam Spiers, Aaron Dollar, Janet Van Der Linden, and Maria Oshodi. 2015. First Validation of the Haptic Sandwich: A Shape Changing Handheld Haptic Navigation Aid. In International Conference on Advanced Robotics (ICAR), 1–9. DOI:https://doi.org/10.1109/ICAR.2015.7251447
[43] Adam J. Spiers and Aaron M. Dollar. 2016. Outdoor Pedestrian Navigation Assistance with a Shape-Changing Haptic Interface and Comparison with a Vibrotactile Device. In IEEE Haptics Symposium, HAPTICS, 34–40. DOI:https://doi.org/10.1109/HAPTICS.2016.7463152
[44] Adam J. Spiers and Aaron M. Dollar. 2017. Design and Evaluation of Shape-Changing Haptic Interfaces for Pedestrian Navigation Assistance. IEEE Trans. Haptics 10, 1 (2017), 17–28. DOI:https://doi.org/10.1109/TOH.2016.2582481
[45] Adam J. Spiers, Janet Van Der Linden, Sarah Wiseman, and Maria Oshodi. 2018. Testing a Shape-Changing Haptic Navigation Device with Vision-Impaired and Sighted Audiences in an Immersive Theater Setting. IEEE Trans. Human-Machine Syst. (2018). DOI:https://doi.org/10.1109/THMS.2018.2868466
[46] Andrew A Stanley, Adam M Genecov, and Allison M Okamura. 2015. Controllable Surface Haptics via Particle Jamming and Pneumatics. IEEE Trans. Haptics 8, 1 (2015), 13. DOI:https://doi.org/10.1109/TOH.2015.2391093
[47] Andrew A Stanley and Katherine J Kuchenbecker. 2012. Evaluation of Tactile Feedback Methods for Wrist Rotation Guidance. IEEE Trans. Haptics 5, 3 (2012), 240–251. DOI:https://doi.org/10.1109/TOH.2012.33
[48] David L Strayer, Frank A Drews, and Dennis J Crouch. 2006. A Comparison of the Cell Phone Driver and the Drunk Driver. Hum. Factors 48, 2 (2006), 381–391. DOI:https://doi.org/10.1518/001872006777724471
[49] Tami Toroyan. 2011. Mobile Phone Use: A Growing Problem of Driver Distraction. Technology (2011), 54p. DOI:https://doi.org/10.1146/annurev.ps.56.121004.100003
[50] Ramiro Velázquez. 2010. Wearable Assistive Devices for the Blind. Wearable Auton. Biomed. Devices Syst. Smart Environ. (2010), 331–349.
[51] Richard Wagner, Jan-Hendrik Gosemann, Ina Sorge, Jochen Hubertus, Martin Lacher, and Steffi Mayer. 2019. Smartphone-Related Accidents in Children and Adolescents: A Novel Mechanism of Injury. Pediatr. Emerg. Care (2019). DOI:https://doi.org/10.1097/PEC.0000000000001781
[52] Julie M. Walker, Heather Culbertson, Michael Raitor, and Allison M. Okamura. 2018. Haptic Orientation Guidance Using Two Parallel Double-Gimbal Control Moment Gyroscopes. IEEE Trans. Haptics 11, 2 (2018), 267–278. DOI:https://doi.org/10.1109/TOH.2017.2713380
[53] Julie M Walker and Allison M Okamura. 2020. Continuous Closed-Loop 4-Degree-of-Freedom Holdable Haptic Guidance. 5, 4 (2020), 6853–6860.
[54] Natasa Zatezalo, Mete Erdogan, and Robert Green. 2018. Road Traffic Injuries and Fatalities Among Drivers Distracted by Mobile Devices. J. Emergencies, Trauma Shock (2018). DOI:https://doi.org/10.4103/JETS.JETS_24_18
[55] Andre Zenner and Antonio Kruger. 2017. Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality. IEEE Trans. Vis. Comput. Graph. 23, 4 (2017), 1285–1294. DOI:https://doi.org/10.1109/TVCG.2017.2656978
[56] Andre Zenner and Antonio Kruger. 2019. Drag:on - A Virtual Reality Controller Providing Haptic Feedback Based on Drag and Weight Shift. CHI '19 (2019). DOI:https://doi.org/10.1093/nq/s3-VII.180.466g
[57] Ying Zheng and John B. Morrell. 2012. Haptic Actuator Design Parameters that Influence Affect and Attention. 2012 IEEE Haptics Symp. (March 2012), 463–470. DOI:https://doi.org/10.1109/HAPTIC.2012.6183832

9 APPENDIX

9.1 Device Kinematics

Here we detail the formulation of the kinematic modes used for S-BAN control, via the notation presented in Fig. 18.

Figure 18: Parallel kinematic structure of the S-BAN's actuation mechanism, with the annotation used for inverse kinematics calculations.

9.1.1 Mid-Point Inverse Kinematics

For Mid-Point kinematics, the target angle and target extension relate to the mid-point between the tactile notches of the end effector. This control point is marked as MP in Fig. 18. First, we calculate the distance and angle between the left actuator's tip (P_L) and MP in the frame of the end effector. These are constants. (1)

T is the vertical (y) distance between the target extension of the mid-point and the left actuator tip (P_L). T allows us to calculate the left target actuator extension. (2)

The end point of the right actuator (P_R) will always be on an arc of radius L_EB from the end-point of Actuator L. This constraint enables determination of the right actuator's extension using the y component of P_R. Note that this formulation neglects the slight rotation of Actuator R about its base for simplicity. (3)
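As an illustration of the Mid-Point formulation, the sketch below follows the same steps as Equations (1)-(3): treat the distance and angle from P_L to MP as constants, place P_L below the target height of MP, and then read the right actuator extension from the y component of P_R while neglecting the slight rotation of Actuator R. The numerical dimensions and sign conventions are assumptions (the real values come from the S-BAN's CAD model), so this is a sketch of the structure of the calculation rather than the device firmware.

```python
import math

# Assumed geometry; illustrative values only, not the S-BAN's real dimensions.
L_EB = 20.0               # Eq. (3): distance between P_L and P_R on the end effector [mm]
R_MP = 10.0               # Eq. (1): constant distance from P_L to the mid-point MP [mm]
PHI = math.radians(20.0)  # Eq. (1): constant angle from P_L to MP in the end-effector frame

def midpoint_ik(target_angle_deg, target_extension_mm):
    """Sketch of the Mid-Point inverse kinematics: returns assumed left and right
    linear-actuator extensions (relative to the home pose) that place the
    mid-point MP at the target extension with the end effector at the target angle."""
    theta = math.radians(target_angle_deg)

    # Change in the height of MP above P_L caused by rotating the end effector,
    # with MP at polar coordinates (R_MP, PHI) relative to P_L.
    t = R_MP * (math.sin(PHI + theta) - math.sin(PHI))

    # Eq. (2): actuator L places P_L the distance t below the target height of MP.
    eps_left = target_extension_mm - t

    # Eq. (3): P_R lies on an arc of radius L_EB about P_L; neglecting the slight
    # rotation of actuator R about its base, its extension follows from the
    # y component of P_R (sign convention assumed).
    eps_right = eps_left - L_EB * math.sin(theta)
    return eps_left, eps_right
```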
9.1.2 Leading-Notch Inverse Kinematics

The Leading-Notch kinematic mode switches the control point between three points (NL – Notch Left, MP – the Mid-Point, and NR – Notch Right) depending on the region of the workspace being explored, as labeled in Fig. 18. If the target angle is zero, then the Mid-Point kinematic control from the previous section is used. If the target angle is negative, then the left notch (NL) is the control point, whereas if it is positive, then the right notch (NR) is the control point.

As in the Mid-Point case, we begin by calculating the distance and angle from P_L to the control point to determine the left actuator extension. For the left notch target, the following applies. (6)

The right-notch condition leads to the following equations, where a line connects P_L to NR at the angle ω. (7)

In both cases, the right actuator extension is determined using the equations in (3).

9.2 Movement Efficiency Results

The movement efficiency of each user and trial in the VR navigation experiment (Sections 5 and 6) is illustrated in Fig. 19.

Figure 19: Movement efficiency for each trial by the 12 participants in the VR study.

9.3 Gaze T-Test Comparisons

Table 3 details the t-test comparisons related to user head motion, as discussed in Section 6.3 and illustrated in Fig. 17.

Table 3: T-test comparisons of user head pose for different device conditions. Significant (p < 0.0083 after Bonferroni correction) values have been shaded.

T-Test Comparison                        Vertical Mean  Horizontal Mean  Vertical STD  Horizontal STD
Natural Vision / Visual Tool             0.042          0.171            2.62E-04      0.308
Natural Vision / Haptic Tool             9.61E-05       0.267            7.84E-07      0.015
Natural Vision / Visual + Haptic Tool    0.590          0.603            2.41E-06      0.080
Visual Tool / Haptic Tool                3.17E-05       0.760            0.198         0.124
Visual Tool / Visual + Haptic Tool       0.052          0.262            0.389         0.562
Haptic Tool / Visual + Haptic Tool       0.024          0.362            0.484         0.215
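For completeness, comparisons like those in Table 3 (and the repeated-measures analyses of Section 6) can be reproduced from the per-trial metrics with standard statistics packages. The sketch below assumes a hypothetical long-format file trials.csv with columns participant, condition, and metric; it uses paired t-tests throughout for brevity, whereas the paper reports unpaired tests for the head-pose comparisons, and averaging the four paths per condition is a simplification of the model used in MATLAB.

```python
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

ALPHA = 0.05 / 6  # Bonferroni-corrected threshold for the six pairwise comparisons

df = pd.read_csv("trials.csv")  # hypothetical log: participant, condition, metric

# Repeated-measures ANOVA across the four guidance conditions,
# averaging each participant's four paths per condition.
anova = AnovaRM(df, depvar="metric", subject="participant",
                within=["condition"], aggregate_func="mean").fit()
print(anova)

# Bonferroni-corrected pairwise comparisons on the per-participant means.
means = df.pivot_table(index="participant", columns="condition", values="metric")
for a, b in combinations(means.columns, 2):
    t, p = stats.ttest_rel(means[a], means[b])
    print(f"{a} vs {b}: t = {t:.3f}, p = {p:.4g}, significant = {p < ALPHA}")
```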

The S-BAN: Insights into the Perception of Shape-Changing Haptic Interfaces via Virtual Pedestrian Navigation Short Title: Perception of Shape-Changing Haptic Interfaces ADAM J SPIERS Max Planck Institute for Intelligent Systems and Imperial College London, a.spiers@imperial.ac.uk ERIC YOUNG Max Planck Institute for Intelligent Systems, yoeric@is.mpg.de KATHERINE J KUCHENBECKER Max Planck Institute for Intelligent Systems, kjk@is.mpg.de Screen-based pedestrian navigation assistance can be distracting or inaccessible to users. Shape-changing haptic interfaces can overcome these concerns. The S-BAN is a new handheld haptic interface that utilizes a parallel kinematic structure to deliver 2-DOF spatial information over a continuous workspace, with a form factor suited to integration with other travel aids. The ability to pivot, extend and retract its body opens possibilities and questions around spatial data representation. We present a static study to understand user perception of absolute pose and relative motion for two spatial mappings, showing highest sensitivity to relative motions in the cardinal directions. We then present an embodied navigation experiment in virtual reality. User motion efficiency when guided by the S-BAN was statistically equivalent to using a vision-based tool (a smartphone proxy). Although haptic trials were slower than visual trials, participants heads were more elevated with the S -BAN, allowing greater visual focus on the environment. CCS CONCEPTS • Human-centered computing → Human computer interaction (HCI) → Interaction devices → Haptic devices • Hardware → Emerging technologies → Emerging interfaces • Human-centered computing → Ubiquitous and mobile computing → Empirical studies in ubiquitous and mobile computing Additional Keywords and Phrases: Haptics, Navigation, Shape-Changing Interfaces 1 INTRODUCTION Smartphones and GPS technology have revolutionized the way that people travel by vehicle and on foot. As pedestrians in the modern age, we can generally eschew paper maps for a multi-purpose pocket-sized device that can guide us to unfamiliar locations around the world. Though both revolutionary and beneficial, such navigation technology primarily interfaces with users through screens and audio cues, which have limitations. Although smartphone screens are capable of displaying information-rich maps annotated with suggested routes, numerous studies have shown these displays to be highly distracting to drivers and pedestrians [2,18,29,30,49,50]. Such distraction causes dangerous loss of attention that can lead to accidents and hospital admissions [30,52,55]. Furthermore, for individuals with vision impairments, screen-based interfaces are inaccessible. An obvious alternative has been to deliver navigation information through audio, which often requires the use of headphones in busy urban spaces. Unfortunately, such systems can diminish a vision-impaired (VI) users ability to perceive and appreciate their environment [5,19,51], while also limiting social interactions (a major factor in the abandonment of assistive technologies [3,34,42]). Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). 
© 2022 Copyright held by the owner/author(s). 1073-0516/2022/1-ART1 $15.00 http://dx.doi.org/10.1145/3555046 ACM Trans. Comput.-Hum. Interact. Furthermore, the obscuring of ambient sounds can have a detrimental effect on navigation and localization, as such sounds can highlight hazards and be used as spatial landmarks [6,19]. Touch is an alternative sensory modality that can be used to communicate navigation information to sighted, vision-impaired and deaf-blind individuals, enabling the potential for developing inclusive navigation aids that are accessible and useful to multiple demographics, rather than specialized assistive technologies for VI persons. Haptic feedback is particularly appealing for pedestrian navigation interfaces given the less critical role of the sense of touch during walking (compared to sight and hearing). Indeed, Spiers and Dollar recently highlighted that the most long-standing VI navigation aids (the guide cane and guide dog) are both haptic interfaces, providing mechano-tactile cues to the user via their grip on a handle or harness [45]. For many decades, researchers have considered the potential of haptic devices as navigation tools, with a focus on using vibration-based stimuli from eccentric rotating mass (ERM) actuators to indicate directions to walk or obstacles to avoid [19,24,35,38,40]. Though ERM vibration is a simple, compact and cost-effective method of delivering haptic cues, it too has limitations [9,48]. Oakley and Park pointed out that the attention-grabbing nature of vibrotactile cues has cemented their success in providing cell-phone alerts for events of high importance, such as an incoming phone call [32]. However, as anyone who has ever disabled their phones audio and vibration alerts due to an overly active chat group would know, such cues quickly become tiresome if the messages are in fact not of high importance. In the case of pedestrian navigation guidance, information is generally provided frequently over periods of tens of minutes. In these cases, attention-grabbing vibrotactile haptic cues can soon become irritating and distracting, as has been observed in several studies [28,44,48,58]. Vibration also typically cannot convey a direction on its own, requiring the use of multiple discrete actuators that must all touch the skin, and limiting the spatial resolution of the conveyed information. Obviously, vibration is not the only way that humans perceive touch. Spiers and Dollar previously argued that humans can adeptly perceive shape with their hands and that this haptic modality incurs relatively low cognitive load, given the subtle capabilities of shape perception demonstrated in daily life [45]. These properties make shape change a compelling interface solution for the task of providing navigation cues to users. This hypothesis was confirmed by testing navigation cues from equivalent shape- changing and vibrotactile handheld devices in an embodied navigation study [44]. In that work, the haptic shape-changing device was the A nimotus, a segmented cube with dedicated actuators for rotating and extending the upper half of its body relative to the bottom; body rotation was used to communicate direction cues, while body extension communicated distance cues. Shape-changing interfaces belong to a relatively young, yet diverse field of HCI, within which only a small subset of devices possess sufficient force capability to output haptic cues [1]. Many shape-changing interfaces and related research focus only on visual feedback, e.g. 
[8,20,36,39]. Of the subset with haptic output capability, many devices are desk-based or desk-sized, due to large actuator volumes that prevent portability [4,12,47]. Consequently, portable haptic-output, shape-changing devices (e.g. [15–17,26,45]) are ACM Trans. Comput.-Hum. Interact. sparse in the literature. The most similar comparisons to such systems are wearable or holdable portable devices that utilize other mechanotactile modalities to provide spatial cues, e.g. skin stretch, indentation, squeezing, dragging, asymmetric torque, the gyroscopic effect and weight shift [10,14,15,33,48,53,54]. Some other novel handheld devices utilize changes in center-of-mass, air-drag or weight distribution to generate passive dynamic haptic sensations, meaning that the user must move the device through space to sense the variation in properties [41,56,57]. These systems have been developed with the intention of making virtual reality (VR) controllers feel more like interactive objects in VR gaming scenarios. Example objects are swords, shields, crossbows and guns that the user holds and moves around. Note that such systems provide non-spatial, egocentric haptic information and so are not suited for navigation applications. In comparison to many of the above systems, users of shape-changing systems are able to feel the relative change of a system (as it transitions from one shape to another) in addition to the absolute shape of the system, irrespective of motion. The latter is particularly interesting as it enables a system to continue to convey information without applying any active stimulus to the user, which is not the case with vibration-based systems. This feature also means that a shape-changing interface may be re-grasped without loss of information: for example, users of the shape-changing Animotus device were able to release and re-grasp the interface to physically explore set-pieces as part of an immersive theatre experience [27]. We believe that the scarcity of haptic shape-changing interfaces is a result of 1) the relative difficulty of designing and fabricating these mechatronic systems (compared to, for example, outputting vibration via ERM motors) and 2) a lack of data on how such devices are perceived by users (again, compared to the extensive literature on vibration stimuli [23,27]). In this paper, we contribute to the field of haptic shape-changing interfaces and non-visual navigation guidance with a new device whose form factor and output capability outperform previously published designs. Furthermore, we characterize the devices properties via a perceptual study to understand how people interpret dynamic shape cues and a VR navigation study to accurately compare user performance when using shape-changing devices vs. visual modalities, including a smartphone proxy. 1.1 Device Design The new device (Figs. 1 & 2) is called the S-BAN (Shape-Based Assistance for Navigation). Rather than having dedicated actuators for each degree of freedom (DOF), as in [17,26,43], the S-BAN uniquely uses a parallel kinematic scheme to create a continuous two-dimensional workspace (Fig. 2) that is more analogous to desktop haptic interfaces such as the Pantograph MK-II [7] or the Phantom family of devices [25]. The continuous workspace of desktop devices allows flexibility in haptic rendering applications; the continuous workspace of the S-BAN is similarly intended to allow exploration of various spatial rendering options, two of which are tested here. 
The S-BAN is open source and easy to 3D-print and assemble (CAD file downloads and assembly instructions may be found at https://hi.is.mpg.de/research_projects/S-BAN and are also attached to this paper as supplemental materials). We therefore hope that others will use (and ACM Trans. Comput.-Hum. Interact. potentially modify) the platform to explore additional mappings that may be suited to other data representation (for example navigating data in abstract dimensions or playing video games). Furthermore, unlike previous systems, the S-BAN can render spatial cues behind the user due to its novel tactile notches (Fig. 1). The S-BANs parallel kinematic design allows compact, side-by-side actuator placement, leading to a slim and elongated form factor that may be held like a flashlight, a tried and tested ergonomic design suitable for extended periods of use. The flashlight holding posture negates the awkward arm pose necessary for use of the Animotus haptic device, which made some users self-conscious [44] and led to incorrect device grasps and arm fatigue [46]. ACM Trans. Comput.-Hum. Interact. Figure 1: The S-BAN is a 2DOF navigation device that can extend and pivot its end effector relative to its handle. The user is able to feel both the change in overall device shape and the relative alignment of notches on the sides of the device. Figure 2: The S-BAN in a users hand illustrating several poses (shapes) using the Mid-Point kinematic mode. These poses cover an extension change of ±5 mm and an angle change of ±17 deg. The design of the S-BAN combines pragmatic physical constraints and a conjecture on shape perception. The physical constraints were centered on implementing the desired 2-DOF end-effector mechanism in a handheld package with sufficient forces to move a users fingers across the given workspace. The conjecture aspect was that predictions were made on how the sensations generated by such a device would be perceived by users given sparse past literature. Indeed, we consider the S-BAN a ACM Trans. Comput.-Hum. Interact. prototype that occupies only a small region of the vast and largely unexplored design space of shape- changing haptic interfaces. This space covers factors such as perceptual quality, form factor and tactile aesthetics. The slim design and comfortable holding posture of the S-BAN are intended to enable future integration of the technology into existing travel aids, such as guide cane handles or smartphone cases (Fig. 3), where it may enhance such systems by providing haptic shape-changing feedback. Though guide canes also provide haptic feedback (by transmitting impacts, forces and vibrations), we do not believe that there will be interference with the shape-changing feedback of the S-BAN, due to the distinction in haptic modalities. Furthermore, past work has shown successful integration of other haptic modalities into guide cane devices without haptic sensation interference [2,13,17,22,55]. Smartphone integration is suggested primarily to avoid having to carry and interact with two separate devices (a smartphone and an S-BAN) but could also facilitate the use of visual cues (for sighted persons) or audio cues (for sighted or VI persons) to reinforce or supplement shape-based guidance. Figure 3: Conceptual illustrations of future integrations of the S-BAN concept with (left) a guide cane for vision- impaired users and (right) a smartphone case. Both devices use shape to haptically communicate spatial guidance commands without reliance on sight or sound. 
1.2 Device Testing We provide thorough testing of the S-BAN in more detail than previously attempted with a portable shape-changing haptic interface. The typical psychophysical testing approaches used with many haptic interfaces become inapplicable when the interface has more than a single DOF [13]. Furthermore, though some experimental psychology literature exists on the haptic perception of shape by humans [21], these studies have not been extended to dynamic shapes, leading to further questions on how users will interpret shape-changing haptic stimuli. For example, it has not previously been determined if users are able to perceive dynamic shapes better in an absolute sense (i.e. identifying a pose) or a relative manner (i.e. identifying a change between poses). ACM Trans. Comput.-Hum. Interact. 1.2.1 Perceptual Study The above perceptual questions led us to perform absolute and relative static perceptual studies with the S- BAN, as reported in Sections 3 and 4. To showcase the flexibility of the S-BANs continuous workspace, these tests are both completed for two different kinematic mappings. The results show that shape perception does indeed depend on the employed mapping. They also indicate which mode is most effective for spatial information communication and inform our use of the device for pedestrian guidance. 1.2.2 Navigation Study Though Spiers and Dollar previously performed embodied guidance experiments to compare against non- visual vibrotactile systems [44], there has yet to be a comparison between shape-changing navigation interfaces and visual navigation solutions, which we achieve in this work (Sections 5 and 6). Visual feedback of spatial data via smartphones is a ubiquitous technology in pedestrian guidance. As such, we wish to test against this gold standard on our journey to creating non-visual guidance technology that will benefit both sighted and VI individuals. By utilizing VR for these studies, we avoid the accuracy issues of GPS + IMU localization system that adversely affected user experience in earlier outdoor [44] and indoor [46] navigation experiments. The VR setting also permits us to measure user attention to the handheld device and surroundings via headset pose measurements. In summary, we present the following contributions: 1. The S-BAN, a shape-changing handheld haptic interface that can produce pivoting and extending/retracting sensations across a continuous workspace, including behind the user. Our design enables the exploration of various kinematic representations of spatial information. Two example kinematic representations are presented in this work. 2. A perceptual study that measures how well users can perform absolute device pose (shape) and relative motion (between shapes) estimation tasks. The study evaluates the two example kinematic representations for both tasks and identifies the most favorable mapping. Areas of high/low sensitivity and user opinions of the device and study are also presented. 3. A navigation study conducted in virtual reality in which users are guided to targets with various types of visual and haptic shape-change feedback. User movement efficiency and task completion time provide measures of performance, while head pose informs us of visual attention focus. 2 MATERIALS Providing shape-changing feedback to a user requires an actuated physical device as well as a logical method for mapping navigation commands into device movements. 
2.1 S-BAN Hardware The goal of the S-BAN is to aid human walking navigation by using touch to communicate movement instructions that will enable the user to reach navigational targets. For outdoor pedestrian-navigation applications, we envision these instructions will be generated by a smartphone application similar to ACM Trans. Comput.-Hum. Interact. Google Maps. We also propose that the S-BAN can be used to aid the navigation of virtual environments, as we show in an experimental scenario later in this paper. As determined in [25], providing both direction and distance to a navigational waypoint greatly improves the navigation performance of users over either of these components independently. We build upon this prior work with the development of a new 2DOF device that utilizes a parallel kinematic structure to allow high force generation, a more ergonomic body and a continuous workspace that also communicates backwards motions (which were not possible with the device of [25]). The S-BAN structure (Fig. 4) centers around two linear servo actuators (Actuonix L12-30-50-06-I) contained in the handle portion of the device. These actuators are grounded via dowel pins in the proximal part of the handle and distally connected together via the end-effector linkage, which in turn is connected to the end-effector portion of the device. As the linear actuators independently extend and retract, the end effector can simultaneously pivot left/right and extend/retract, relative to the handle. This internal linkage movement is then perceived by the user as the overall shape of the device extending or retracting and bending/pivoting to one side or the other. Figure 4: An exploded view of the S-BAN. Motion is achieved via two linear actuators arranged in a parallel configuration. Located on either side of the end effector and handle are recessed tactile notches (Fig. 1) that align when the device is at its home position (the center of the workspace). These notches were add ed to the S- BAN following initial pilot studies, where it was observed that though users could feel changes in device pose, they struggled to identify whether the device was in front of or behind the home position. ACM Trans. Comput.-Hum. Interact. The overall elongated shape of the S-BAN is inspired by a hand-held flashlight, a simple physical design that may be held without discomfort for extended periods of time while navigating. We consider this to be an improvement over the Animotus, whose cube-shape led to awkward holding poses [45]. Contained within the end effector is an 8×8 LED array (manufactured by Adafruit) which can provide illumination through the 1.5-mm-thick top plate (as shown in Fig. 4). Though we do not use the LED array in the studies presented in this paper, it is intended to allow future comparison of visual vs. haptic cues within the same device in physical (non-VR) navigation applications. Future models of the S-BAN will also be created without the LED array to allow for a more compact end effector; the optimum length will be determined in planned studies. Also included in the S-BAN handle is a 9DOF IMU that can be used as a tilt-compensated compass in situations where external orientation measurements are not available (e.g. outdoors, when using GPS). In this paper we use the built-in tracking of the Oculus Quest VR headset to measure the orientation of the S- BAN during the navigation study (Section 5). The handheld S-BAN measures 190×50×25 mm in its fully extended pose and has a mass of 160 grams. 
For the current prototype, the supporting electronics (including a Bluetooth module, Arduino Nano and LiPo battery) are contained in a tethered enclosure (110×70×35 mm, 210 g) that either rests on the desk for static studies or is carried in a small shoulder bag for mobile applications. Future plans include more compact custom-built electronics that may be integrated into the main device body. We also plan to investigate the possible inclusion of a small eccentric rotating mass motor, which may be used for providing short alerts for immediate and dangerous hazards in real-world navigation, such as when the user must stop and wait at a road crossing. The ERM motor could also signal when a final destination has been reached, as is common in smartphone or in-vehicle navigation systems. This concept of augmenting the low cognitive demands of shape-change sensations with the alerting nature of vibration was previously proposed by Spiers and Dollar [45]. 2.2 Kinematics Control Scheme Selection The S-BAN uses a planar parallel kinematic configuration, with two actuators connected to a single end effector. A somewhat comparable structure may be seen in the Pantograph MK-II [7], a desk-based haptic interface with two base-mounted rotary actuators that drive a planar linkage that terminates in a single point. In contrast, the S-BAN uses linear actuators, and its end effector is a rigid body that both translates and rotates (Fig. 3). This arrangement means that the S-BAN uses the coordinated motion of its linear actuators to simultaneously change the angle and extension of its end effector relative to its base (the handle). The exact mapping between spatial information and actuator extension depends on the part of the device body selected as the kinematic control point, the point from which the target angle ( ) and target extension ( ) are measured. Given a combination of and as control inputs, we use inverse kinematic calculations to determine the necessary extensions of the left ( ) and right ( ) linear actuators to achieve those targets relative to the selected control point. ACM Trans. Comput.-Hum. Interact. There are several options for the control point, such as the tip of the end effector or the mid-point between the actuators, each resulting in a unique kinematic scheme. The choice of scheme influences the haptic sensations generated by the device and leads to different kinematic constraints. Given that there are no prior haptic devices like the S-BAN, selecting an appropriate scheme is not obvious. While designing the device and running initial pilot studies, we identified two kinematic options as the ones most likely to be easily interpreted by users, each focusing on different aspects of how the S-BAN can communicate. The first of these, named Mid -Point , uses the mid-point between the notches of the end effector as the control point. In the other scheme, named Leading -Notch, the mid -point between the notches is the control point only for movements with no lateral deviation (i.e. forwards and backwards motions only). When the device turns to the left or right, then the left or right notch, respectively (i.e. the leading notch) , becomes the control point. As seen in Fig. 5, these two schemes yield quite different device poses for the same control inputs. While Mid-Point considers the motion of the end effector and overall device shape more generally, the Leading-Notch mode attempts to highlight the tactile sensations from the S-BANs notches (Fig. 1). 
Figure 5: Two kinematic modes are investigated in this work: Mid-Point and Leading-Notch, which are named after the part of the S-BAN end effector used as the control point. The methods are further described in Appendix A.

The inverse kinematic derivation for each method is detailed in Appendix A.

2.3 Workspace Differences

The choice of inverse kinematic scheme influences the range of control input pairs (target angle, θ, and target extension, E) that the device can display. As shown in Fig. 5, the same target angle and target extension typically lead to different final device poses for each scheme. The difference is illustrated in Fig. 6, where we can observe that each inverse kinematic scheme enables the device to reach a different chevron-shaped set of control input pairs, with many control inputs reachable by only one of the schemes.

Figure 6: The reachable workspace of the haptic device (the blue chevron) is influenced by the choice of inverse kinematic scheme; the black dotted lines mark equally sized rectangular regions within the reachable regions of each scheme. The center of each reachable workspace (indicated with a + symbol) corresponds to the home pose, where the angle and extension communicated to the user both equal zero.

To fairly compare the kinematic schemes within the perceptual studies, we define equally sized regions of the reachable workspace of each kinematic scheme. As indicated in Fig. 6, these regions cover a rectangular region of ±5 mm and ±17 deg. The centers of these rectangular regions correspond to the location where the angle and extension perceived by the user should both equal zero for the given kinematic scheme. These are considered as home poses and have been marked on Fig. 6 with + symbols. Note that the home pose has a different vertical offset for the two kinematic schemes. As the home pose is associated with alignment of the tactile notches on the S-BAN, two different handle parts were created, with notches in different locations. These handle parts were swapped depending on the kinematic mode being tested in the perceptual experiments, which are described in the following section.

3 METHODS A – PERCEPTUAL STUDY

While the perceptual characteristics of common haptic stimuli are well investigated (e.g., [11,27] give detailed accounts of vibrotactile perception), the perception of dynamic (changing) shapes has very rarely been studied. The lack of published investigations in this area stems from both the scarcity and the non-uniformity of systems that can provide haptic stimuli of this type. The most related data come from the study of human identification of shape when grasping or touching static objects [21,22,31]. Note that the typical psychophysical approaches used to test an isolated haptic stimulus [13] do not apply to the S-BAN due to the coupled and co-dependent nature of its two DOFs. To understand how users perceive the dynamic shape stimulus of the S-BAN, we undertook two static perceptual studies in which participants remained stationary and seated. These studies were designed to understand:

1. Which kinematic mapping option (Mid-Point or Leading-Notch) provides a more accurate representation of control inputs (target angle and target extension).
2. Whether users are more precise at identifying absolute device pose or relative motion between poses.
3. Opinions on the usability of this shape-changing device (e.g., pleasantness, confusion).
In both experiments, the user sits in front of a computer screen with the S-BAN held in their dominant hand. During training, the S-BAN is visible to the user, while in the actual study a cardboard box covers the user's hand and device. A numeric keypad under the non-dominant hand acts as an input device (Fig. 7).

Figure 7: Arrangement of the perceptual study. Participants held the S-BAN in their dominant hand and entered pose choices via a cursor controlled by a numeric pad. This image shows a training phase. In the actual experiment, the user's dominant hand and the S-BAN are covered with an opaque box.

In the absolute experiment we investigate how well participants can identify the static pose of the S-BAN after it has moved from the home pose. In the relative experiment, we investigate how well participants can identify the relative motion made by the S-BAN as it moves between two arbitrary poses. The effective rectangular region of the S-BAN (Fig. 6) covers ±5 mm extension and ±17 deg rotation from the home pose, where the notches of the device align. The ±5 mm extension workspace refers to navigational targets in front of (+) and behind (–) the user. Navigational targets behind the user are useful in cases when a new route is being provided or when a user walks past their target. Given that the main use case of the S-BAN will be when targets are in front of the user, we have focused the perceptual experiments mostly on this region, as reflected in the vertically asymmetric workspaces. This reduced workspace allows fine sampling of interesting regions without significant increases to experiment time (which can have a detrimental effect on user fatigue and concentration).

3.1 Absolute Pose Perception Experiment

The device workspace (±5 mm and ±17 deg) is divided into 35 discrete poses for the absolute experiment, as illustrated in Fig. 8 (left), where the vertical axis refers to device extension (1.67 mm divisions) and the horizontal axis refers to device rotation (5.67 deg divisions). The number of poses (and therefore the size of the divisions) in both the absolute and relative studies was based on a trade-off between sampling resolution and experiment time. As mentioned above, longer perceptual experiments risk a reduction in user concentration and therefore result validity. This is particularly true as the absolute and relative experiments were completed in the same session, taking an average of 1.5 hours.

Figure 8: User input interfaces for the two perceptual studies showing options of device pose (left) or relative motion between poses (middle). Starting poses for the relative motion experiment are shown on the right. The vertical axis of the poses corresponds to device extension, and the horizontal axis corresponds to rotation.

The device begins each trial in the home pose and then moves to a random pose. The participant uses the numeric pad (labelled with arrows) to move a square cursor to what they believe to be the pose of the device on the chart in Fig. 8 (left). Note that we implement a linear grid (as opposed to a curved grid) for the chart as a generic representation of two independent variables conveyed by the device. This technique for conducting psychophysical experiments in 2D was previously used in vibrotactile and alternative shape-changing systems [44,45]; it provides a universal approach for studying any 2-DOF haptic interface.
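The text above gives the division sizes and the total of 35 poses but not the individual grid levels. A layout of 7 rotation levels by 5 extension levels, with the extension levels biased towards the front of the workspace, is consistent with these numbers and with the vertically asymmetric workspace; the sketch below generates such a grid under that assumption.

```python
from itertools import product

# Assumed levels: 7 rotation steps of 5.67 deg spanning roughly +/-17 deg, and
# 5 extension steps of 1.67 mm biased towards the front (-1.67 mm to +5 mm).
ROTATION_LEVELS_DEG = [round(i * 5.67, 2) for i in range(-3, 4)]
EXTENSION_LEVELS_MM = [round(i * 1.67, 2) for i in range(-1, 4)]

def absolute_pose_grid() -> list[tuple[float, float]]:
    """All (rotation, extension) stimuli for the absolute pose experiment."""
    poses = list(product(ROTATION_LEVELS_DEG, EXTENSION_LEVELS_MM))
    assert len(poses) == 35
    return poses

if __name__ == "__main__":
    grid = absolute_pose_grid()
    print(len(grid), grid[0], grid[-1])   # 35 poses, from back-left to front-right
```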
Before the experiment, a training phase presents each pose once to the user, with an additional cursor providing a visual indicator of the correct pose. During the actual experiment, each pose (including the home pose itself) is presented three times, with a different pre-defined random order for each participant. This study design leads to 105 total poses per participant. The training and experiment take approximately 30 minutes combined.

3.2 Relative Motion Perception Experiment

In the relative pose experiment, the workspace is divided into 20 discrete starting poses, as shown in Fig. 8 (right). Here, the vertical axis gives 2.5 mm divisions, and the horizontal axis gives 8.5 deg divisions. The coarser grid resolution (compared to the absolute study) is due to the more involved study method, which leads to a higher number of trials and a longer study time, as described below. During each trial, the S-BAN initially moves to one of the starting poses and a ready message is displayed on the computer screen. Once the user presses a button on the numeric pad, the S-BAN moves to another pose that is between zero and two pose steps away in each direction (e.g., 9➝17, 4➝14, 1➝3, 19➝20, 18➝18). The user then presses the numeric pad to select the relative motion that they believe the device completed (from the 25 options displayed in Fig. 8, middle). Note that this reporting approach means that the motions between poses 1➝11, 2➝12 and 10➝20 would all have the same relative motion (2 steps backwards). Between 9 and 20 relative motions are presented for each of the 20 starting poses, since some relative motions cannot be achieved, such as moving upwards or left from starting pose 1. The experiment consists of 266 motions in total. In an initial training phase, each relative motion was demonstrated once, with one additional relative motion to show relative-motion equivalence for two starting poses. This procedure led to a total of 26 training poses that were distributed among the 20 starting poses (with some repetition). The combined relative motion training and experiment takes approximately 1 hour.
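These trial counts follow directly from the grid geometry. The sketch below enumerates the reachable relative motions under the assumption that the 20 starting poses form a grid of 5 rotation levels by 4 extension levels (the paper gives the counts rather than the grid shape, but this shape reproduces the reported minimum of 9, maximum of 20, and total of 266 motions).

```python
from itertools import product

# Assumed layout of the 20 starting poses: 5 rotation levels x 4 extension levels.
N_ROT, N_EXT = 5, 4
MAX_STEPS = 2   # the target pose lies zero to two steps away along each axis

def motions_from(start_rot: int, start_ext: int) -> list[tuple[int, int]]:
    """All reachable (d_rot, d_ext) relative motions from one starting pose."""
    moves = []
    for rot, ext in product(range(N_ROT), range(N_EXT)):
        d_rot, d_ext = rot - start_rot, ext - start_ext
        if abs(d_rot) <= MAX_STEPS and abs(d_ext) <= MAX_STEPS:
            moves.append((d_rot, d_ext))
    return moves

if __name__ == "__main__":
    counts = [len(motions_from(r, e)) for r, e in product(range(N_ROT), range(N_EXT))]
    print(min(counts), max(counts), sum(counts))   # -> 9 20 266
```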
4 RESULTS A – PERCEPTUAL STUDY

10 participants (6 female, average age 30.4, standard deviation 4.94) took part in the perceptual study. These participants were divided into two equal groups who used either the Mid-Point or Leading-Notch kinematic mapping for both the absolute and relative studies. The order of the studies (absolute or relative first) was alternated between subsequent participants. The average outcomes of the absolute and relative studies are illustrated in Figures 9 and 10, respectively.

Figure 9: Absolute pose perceptual study results showing estimation error for each pose, an interpolated version of the error distribution to highlight spatial error trends, and a directional error quiver plot that shows the average directions and magnitudes of where the users believed the stimulus to be located. Mean errors are indicated on each color bar.

Figure 10: Relative Motion Error illustrates user estimation error (left column), an interpolated version of this to highlight spatial error trends (middle column) and a plot of error direction (right column). Starting Pose Error shows the effect of starting pose on the relative motion error (left column), along with interpolated results (right column).

4.1 Absolute Study Quantitative Results

Fig. 9 (left column) displays the user estimation error for each absolute pose (from the grid of discrete poses in Fig. 8, left). The units of the error are the number of steps between the poses, where one step corresponds to 1.67 mm / 5.67 deg. To make the spatial patterns of error shading easier to interpret, bi-cubic interpolation (resolution scale factor 30) was applied to the estimation error matrix (Fig. 9, central column). Finally, the rotation (X) and extension (Y) components of the error have been averaged for each pose to create a quiver plot (Fig. 9, right column), which shows the direction of pose estimation error, or rather, the averaged location of the estimated pose.

The mean error of the Mid-Point (MP) kinematic mode is 1.36 steps (2.27 mm, 7.71 deg), while the mean error for the Leading-Notch (LN) mode is 1.52 steps (2.54 mm, 8.62 deg), indicating higher overall pose estimation accuracy with the MP method. The standard deviations of the absolute errors are 0.50 steps and 0.65 steps for MP and LN respectively. A paired t-test was performed on the absolute error values for MP and LN; the errors were paired by workspace location across the two methods. The t-test showed that the difference in absolute errors between methods was not significant (t(68) = –1.1292, p = 0.263), where significance is considered as p < 0.05. This result is not wholly surprising given that the mean error values are similar across the two cases.

However, we do see some differences in the distribution of errors for each kinematic mode. We note that the lowest errors for the MP mode occurred along the cardinal directions (the central and horizontal axes), which have been indicated with dotted lines in the interpolated plots. The lowest overall error for the MP mode was at the home pose. For the LN mode, the horizontal axis (where extension = zero steps) is less clearly distinguished from other regions, and users seemed to have more difficulty discerning when the device was at -1, 0 or +1 extension steps, particularly when rotation was ±3 steps. Indeed, the lowest error for the LN mode was at one step in front of the home position. The trend of the error direction arrows to point towards a greater extension for LN further demonstrates a general uncertainty regarding device extension estimation for this kinematic mode. Conversely, the directional errors of MP tend to point more laterally towards the center line (rotation = 0 steps), indicating more lateral uncertainty. For both kinematic modes the greatest overall errors were at negative extension and full left/right rotation, though the errors at these poses were greater for LN than for MP.

4.2 Relative Study Quantitative Results

The results of the relative motion study are presented in Fig. 10, which shows estimation error for each relative motion and starting position. The mean relative motion error for MP is 0.71 steps (1.78 mm, 6.04 deg) and for LN is a bit higher at 0.91 steps (2.28 mm, 7.74 deg), where one step corresponds to 2.5 mm / 8.5 deg. A paired t-test between the elements of the relative error matrices shows that this difference is very close to statistically significant (t(X) = Y, p = 0.0503). The standard deviations of the relative results are 0.359 and 0.349 steps for MP and LN, respectively. For the MP kinematic mode, we once again observe that the lowest errors (of relative motion estimation) occur along the two cardinal axes.
For the LN mode, however, this trend is limited to only the extension axis, implying uncertainty with rotation estimation. In the directional plots, the LN errors appear to generally point towards the center, implying underestimation of motion magnitude. The arrows are of smaller magnitude and less directional consistency for MP, though some symmetry is certainly observable.

The starting pose error results show generally consistent results for the MP mode, with a slight reduction in error when the device becomes fully extended. For the LN mode, the greatest errors occur when the device begins at the home pose, one step behind the home pose, or the distal corners (when extension = 2 steps). The mean starting position error for MP is 0.67 and for LN is higher at 0.86. A paired t-test between the starting pose error matrices shows that this difference is statistically significant (t(X) = Y, p = 5.92×10⁻⁵), indicating that the starting pose affects perception more for LN than MP. The standard deviations of the starting pose errors are 0.124 and 0.145 steps for MP and LN, respectively.

4.3 Qualitative Results

After each session, participants completed a Likert-scale questionnaire with space for comments. The mean Likert-scale results are provided in Table 1.

Table 1: Mean Likert-scale results for the Absolute Pose and Relative Motion perceptual studies (1 = Strongly Disagree, 3 = Neutral, 5 = Strongly Agree).

Question | # | Absolute MP | Absolute LN | Relative MP | Relative LN | Std dev
Using the device was confusing | 1 | 2.25 | 2.25 | 3.00 | 2.80 | 0.33
I found the experiment physically tiring | 2 | 2.20 | 1.80 | 3.40 | 2.40 | 0.59
I found the experiment mentally tiring | 3 | 2.00 | 2.00 | 3.40 | 2.60 | 0.57
Left/right was easy to interpret | 4 | 4.00 | 4.60 | 3.00 | 3.40 | 0.61
Forward/backward was easy to interpret | 5 | 4.10 | 3.20 | 3.40 | 4.60 | 0.56
Combined instructions were easy to interpret | 6 | 3.60 | 3.40 | 2.60 | 3.40 | 0.38
I enjoyed using the device | 7 | 4.00 | 4.60 | 3.90 | 4.40 | 0.29
I found the device annoying | 8 | 1.80 | 1.60 | 2.20 | 1.20 | 0.36
I felt I could trust the device | 9 | 3.80 | 4.40 | 3.20 | 4.20 | 0.46
I felt like the instructions were precise | 10 | 3.70 | 3.80 | 3.40 | 4.00 | 0.22
I would like to try being guided while walking | 11 | 3.80 | 4.80 | 3.40 | 4.20 | 0.52
I feel like it could guide me in an urban situation | 12 | 3.60 | 3.60 | 3.40 | 3.80 | 0.14

Users appeared to find the LN kinematic mode marginally easier to interpret, which is contrary to the quantitative results presented above. For example, in the absolute study, the left/right directions were considered easier to interpret for LN than MP, though Fig. 9 indicates that the opposite was true. The reader is reminded that each individual completed the study with either the LN or the MP mode, so comparative opinions between MP and LN were not possible. The length of the study (which involved the presentation of 371 poses over 90 minutes) made it impractical for each person to test both kinematic modes in both the absolute and relative experiments. Indeed, the mental and physical fatigue reported by participants is likely to be due to this study length rather than specific device characteristics. Considering that both modes were implemented on the previously untested S-BAN device, it may be noted that average user opinions are positive, indicating that overall, participants found the S-BAN pleasant to use and would trust it for embodied guidance. The main comments from users addressed the fact that they could interpret the general region of a pose or motion but struggled with precision.
In the absolute pose study, one user commented: I could tell some information was in the upper left quadrant, but couldn't tell exactly if the device had moved more forward or more to the left. Also, The [difference between a] slight right and a large forward right is so metimes confusing and I could tell the overall direction most of the time but I could not tell how many steps to the left/right/up/down. Similar comments for the relative study also indicated that general pose was relatively easy to interpret, but the exact extension and rotation were more difficult to pinpoint. The amount of left / right information was a little difficult to interpret and I feel like I had some difficulty discriminating between directily [sic] 1 space L or R vs 1 space L/R combined with 1 space backwards. In addition, some users commented that the relative experiment required greater concentration to avoid accidentally reporting on absolute pose. For example: I had to try hard to remember the motion rather than answering based on the current configuration. Sometimes I felt that I forgot the motion if I didn't focus hard. One user summarized their experience of both studies by stating: I trust the device and feel like it knows precisely where it wants to guide me… however, I'm not sure if I'm precise enough to understand/catch the exact location. Another user concluded that with more training I can think of using such a device in an urban setting . 4.4 Perceptual Study Final Remarks The quantitative results have indicated that the Mid-Point kinematic mode enables users to identify both absolute pose and relative motion with a higher level of accuracy than the Leading-Notch mode. Considering the mean error of the two studies for MP, the absolute mean error is 1.36 steps, which equates to 2.27 mm / 7.71 deg, while the relative mean error is 0.71 steps, which equates to 1.78 mm / 6.04 deg. Therefore, we can consider that participants were better overall at perceiving relative motion of the device, though the low absolute error value of the home pose indicates that users should be able to recognize when they have reached a target. The distinction of kinematic modes is interesting in terms of device perception. The Mid-Point mode uses the overall motion of the end effector as the main communication option, while the Leading-Notch mode focuses more on the tactile cues of the notches. The better performance of the MP mode may be telling that shape perception is partially superior to tactile perception for the S-BAN, though clearly both have their role in this system. ACM Trans. Comput.-Hum. Interact. Given these final observations, the Mid-Point kinematic mode was selected for S-BAN use in the navigation study described in the following sections. 5 METHODS B – NAVIGATION STUDY While the perceptual study demonstrated that users are able to sufficiently identify the pose and motion of the S-BAN, a navigation study was implemented to confirm that the device could provide spatial information in an embodied guidance application. With this study, we also wished to observe how user performance with the haptic device compared to standard visual techniques of navigation, e.g. following an agent or using a handheld tool with visual instructions (which serves as a proxy for a smartphone). We also aimed to test the effectiveness of a combined tool that provided visual and haptic feedback (which may be considered as a virtual prototype of a shape-changing smartphone). 
In particular, we were interested in seeing how the various conditions affect user visual attention, given that this is a major concern for screen-based interfaces, as discussed in Section 1. Note that though the virtual reality environment allows presentation of visual navigation information directly in the user's view via a heads-up display, we consider a smartphone proxy more relevant to current navigation trends due to the ubiquitous nature of smartphones and the currently low commercial success of pedestrian AR headset technology [59]. Furthermore, in [37], heads-up displays on Google Glass and smartphones were shown to be equally effective at providing navigation information. Finally, smartphone proxies enable us to evaluate the distraction concerns of smartphone screens that we previously highlighted in Section 1.

5.1 Virtual Reality Hardware

When Spiers et al. conducted outdoor embodied navigation experiments with the Animotus, the authors commented that a high degree of user uncertainty and confusion was caused by inaccurate GPS readings (errors of 2–7 m) and slow update rates [44]. These problems led to erroneous navigation cues since the user was sometimes several meters away from their GPS-reported position. The temporal and spatial variety of GPS localization errors led to inconsistent experiment conditions between users and trials. To avoid such problems, we chose to run our experiments in virtual reality, which also permitted full flexibility of additional experimental factors, such as all visual stimuli seen by users as well as the environment layout and size.

We made use of an Oculus Quest system as our virtual reality interface. The Oculus Quest headset does not require external beacons for localization and instead uses cameras built into the headset for this task. The headset is also fully wireless, which allows unlimited user body rotation while in use. The headset continually detects the 6-DOF pose of two handheld Oculus Touch controllers. The right-hand controller was attached to the top of the S-BAN to enable fast and accurate tracking of the haptic device in the VR environment (Fig. 11, left). Attachment was achieved via modification of the 3D-printed S-BAN handle top part (shown in Fig. 3) to securely interface with the ring of the Oculus Touch controller in a way that enables users to hold the S-BAN without interference. Each Oculus Touch controller has a mass of 170 g. This weight did not seem to increase user discomfort or fatigue when added to the S-BAN (160 g). While the S-BAN was held in the user's right hand, the unmodified left Oculus controller was held in the user's left hand. This controller was used for moving the user's body in the virtual world. To ensure the user could not see the S-BAN through the gap at the bottom of the VR headset, a card gaze shield was attached to the front of the headset, blocking this view (Fig. 11, right).

Figure 11: To enable accurate device tracking for virtual reality, the S-BAN handle was modified to mount an Oculus Quest Touch controller in a way that would not affect user grasp. During the VR study, participants held the S-BAN in their right hand and an unmodified Touch controller in their left hand for controlling their movement in the VR environment.

5.2 Virtual Environment and Navigation Task

The navigation study involves the user being guided along an invisible path via four waypoints to reach a final destination.
To make our study engaging for users, we framed this task as a treasure hunt game led by a dog, who could sniff out a buried bone at the final target. We consider the dog character to have connections to guide dogs, who are highly competent at providing guidance assistance to vision-impaired pedestrians. It should be noted that the main task of a real guide dog is to help a VI person avoid local obstacles and hazards, while the owner is responsible for global route determination (i.e., choosing the destination). Our dog character is different from actual guide dogs in that the virtual dog provides navigation to an unknown target in an environment with no obstacles or hazards. The user's goal in the study is to follow the dog from the starting location along the unseen path via the four waypoints. This task is repeated four times for each guidance condition (Fig. 12).

Figure 12: The VR study setup (left image) and views of the VR environment under the navigation conditions. The visual agent is an animated dog that leads the user to the target (a buried bone). The dog is visible only in the natural vision condition, in which it is also attached to the user's right hand by a leash. The visual and haptic conditions involve the respective navigation device either visually or haptically pointing at the invisible dog. The fourth condition involves the visual and haptic devices working simultaneously, which is graphically the same as the visual device.

The four conditions are:

1. Natural Vision – The dog is visible and connected to the user's right hand via a flexible and extendable leash. The dog acts as an agent that the user follows.
2. Visual Device – The dog is invisible, and a visual device (a black rectangle attached to the user's right hand) displays an arrow that indicates the dog's location.
3. Haptic Device – The dog is invisible, and no device or leash is shown in the user's right hand. The S-BAN is active and provides the user with unseen haptic cues to indicate the dog's location.
4. Haptic and Visual Device – The dog is invisible, and the visual device is displayed in the user's right hand. The arrow and the S-BAN provide the same information.

In conditions 2, 3 and 4, a device is used to communicate the location of the dog. In all cases this information is considered in terms of direction (heading) and distance from the user to the dog. Heading is calculated and displayed relative to the current orientation of the hand-held device. These two parameters are represented haptically by the angle and extension of the S-BAN relative to the home pose. For the visual tool, an arrow is displayed on the surface of the device to represent the same information. For equivalency between conditions, the arrow can rotate and extend to the same degree as the S-BAN, which is ±17 deg and ±5 mm (relative to a starting arrow length of 10 mm). The outputs of both the haptic and visual tools update constantly based on the position and orientation of the user's hand.

For each trial, the dog begins the study by barking (providing an audio start cue) and running to the next waypoint, where it waits for the user. Once the user has arrived within a 5 m radius of the active waypoint, the dog sniffs and then runs toward the next waypoint. Once the dog and human find the final waypoint, the dog digs up the buried bone.
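A minimal sketch of this heading/distance mapping is shown below, assuming the dog and user positions are known in a common 2-D frame. The device workspace limits are those stated above; the distance-to-extension scaling constant is an illustrative assumption, since the exact mapping used in the study is not specified.

```python
import math

MAX_ANGLE_DEG = 17.0      # device rotation limit from the paper
MAX_EXTENSION_MM = 5.0    # device extension limit from the paper
MM_PER_METER = 0.25       # hypothetical scaling: 20 m ahead -> full +5 mm extension

def clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))

def guidance_command(user_xy, device_yaw_deg, dog_xy):
    """Convert the dog's position into S-BAN angle/extension targets,
    expressed relative to the current orientation of the handheld device."""
    dx, dy = dog_xy[0] - user_xy[0], dog_xy[1] - user_xy[1]
    bearing_deg = math.degrees(math.atan2(dx, dy))       # world-frame bearing (+y = forward)
    heading_deg = bearing_deg - device_yaw_deg            # relative to the device
    heading_deg = (heading_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    distance_m = math.hypot(dx, dy)
    angle = clamp(heading_deg, MAX_ANGLE_DEG)
    extension = clamp(distance_m * MM_PER_METER, MAX_EXTENSION_MM)
    return angle, extension

if __name__ == "__main__":
    print(guidance_command((0.0, 0.0), 0.0, (3.0, 10.0)))  # dog ahead and slightly right
```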
The study begins with a general tutorial where users are first made familiar with moving in the virtual environment. Users are able to move their head to look around and change their orientation in the virtual world. Movement of the user's body through the world is achieved with the left controller. Though participant movement was originally planned as typical continuous walking, this was found to induce motion sickness and/or fatigue in some users during pilot studies. Instead, we opted for a teleportation method, where the left controller thumbstick is used to point to a location on the ground (marked by a crosshair) and pulling the trigger button teleports the user to this location (Fig. 12). The maximum distance that a user can teleport is 5 m. During the tutorial, the user is also shown how the guidance devices respond to changes in dog position relative to the user: the (visible) dog walks away from the user in the forward-left, forward, and forward-right directions, and then the dog walks in a circle around the user while both the haptic and visual devices are activated.

Following the tutorial, the study is arranged into four blocks, each corresponding to one guidance condition, with the order varied and counter-balanced between participants. Each block begins with an initial refresher training on the relevant modality followed by a practice trial, where (for conditions 2-4) the dog can be made temporarily visible as the user traverses the same training path. For conditions 2, 3, and 4, the dog remains invisible for the remaining four trials of each block. For these trials, the same four 100 m target paths are used (Fig. 13). Path order is randomized within each block and entire paths are randomly mirrored to increase variation. In addition, the user is teleported to a random position and orientation after each trial, with each path also translated and rotated to match. Using fixed paths instead of randomly generated paths ensures that some users or conditions do not experience more straight or spiraling paths than others, which could introduce bias in behavior. Each user completes 16 non-training trials in total. Their body location in the virtual environment and head pose are logged during these trials.

Figure 13: The four sets of target waypoints, which are randomly mirrored in each trial to provide variation within the VR environment. Each resulting path is 100 m long and is shown relative to the user's random starting pose. Each participant completes all four paths once for each navigation condition.

The study takes approximately 90 minutes to complete, with 5-minute breaks enforced between guidance conditions.

6 RESULTS B – NAVIGATION STUDY

The study was completed by 12 participants (7 female, average age 27.8), leading to a total of 192 trials. One additional participant (P3) was unable to complete the study due to a headset battery malfunction, so their data were discarded. Several metrics were used to analyze the resultant data, which are described in this section.

6.1 Motion Efficiency Analysis

Figure 14 gives a sample of walking paths along route B for each guidance condition for participants 2 and 5. One may observe that there are greater variations between the walking paths for P2, with large diversions for the latter waypoints. Movement efficiency (ME) provides a metric for quantifying these diversions from the optimum path [44,46]. It is calculated as optimum path length divided by user path length.
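A minimal sketch of this metric is shown below, assuming the optimum path is the straight-line route through the waypoints and that the participant's position is logged as a sequence of 2-D points; the example values are synthetic.

```python
import numpy as np

def path_length(points: np.ndarray) -> float:
    """Total length of a polyline given as an (N, 2) array of x-y positions."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def movement_efficiency(logged_positions: np.ndarray, waypoints: np.ndarray) -> float:
    """Optimum path length (straight segments through the waypoints) divided by
    the length of the path the participant actually travelled; 1.0 is optimal."""
    return path_length(waypoints) / path_length(logged_positions)

if __name__ == "__main__":
    # Synthetic example: a participant who detours slightly between waypoints.
    waypoints = np.array([[0.0, 0.0], [0.0, 25.0], [25.0, 25.0]])
    logged = np.array([[0.0, 0.0], [3.0, 12.0], [0.0, 25.0], [12.0, 28.0], [25.0, 25.0]])
    print(round(movement_efficiency(logged, waypoints), 3))
```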
Motion efficiency for each user/trial is provided in Appendix B, and these results are summarized in the boxplot of Fig. 15 (left).

Figure 14: Example user motion paths for waypoint set B from participants 2 and 5. Motion paths are shown for all guidance conditions. The paths have been mirrored appropriately to allow comparison and are expressed relative to the user's starting location in each trial.

Figure 15: Boxplots showing (left) movement efficiency and (right) time taken to complete trials. Each trial involved a target path of the same 100 m length.

Repeated measures ANOVA analysis was performed across device conditions (in MATLAB 2018a via the ranova command, where participant/path combinations are predictor variables and the response variable is motion efficiency for each device condition). The repeated measures ANOVA showed significant differences (F(3,108) = 4.1301, p = 0.008). Paired t-tests were used to compare the movement efficiency of participants for the different interface modalities. Due to the repeated nature of testing, a Bonferroni correction was used, which set the alpha value as 0.05/6 = 0.0083. Using this value, no pairwise comparisons produced significant results. This lack of significance implies that no modalities were significantly better or worse than others, when considering their impact on user movement efficiency.

6.2 Trial Time Analysis

The time taken to complete each trial is presented in Fig. 15 (right). Here we can see that navigation with the haptic device takes longer than methods with a visual component. ANOVA analysis indicated significant differences (F(3,108) = 21.569, p = 5.0719×10⁻¹¹). Paired t-tests using a Bonferroni correction indicated significance for all comparisons, with natural vision being fastest.

6.3 Head Motion Analysis

Though we do not have eye-tracking technology within the Oculus Quest VR headset, we can use the pose of the head in space to give insight into user visual attention. In particular, we are interested in whether users spent most of their time focused on the environment around them, or on a tool in their hands. Fig. 16 presents a scatter plot of all recorded head vertical angles and lateral error angles (relative to the location of the dog) at each point in time, for the four conditions. The mean head pose is highlighted as a white circle on each plot, and a dotted line is shown at -20 deg elevation as a reference. The distributions of the per-trial means and standard deviations of these measurements are also reflected in the boxplots of Fig. 17.

Figure 16: Scatterplot of all user head poses for each of the four conditions. With the haptic interface, users spend less time looking down at the visual device in their right hand, and more time looking around the environment with their head elevated towards the horizon.

Figure 17: Boxplots showing the mean (top row) and standard deviation (bottom row) of the horizontal (left) and vertical (right) head angle error from the target per trial for the four tested conditions. Mean horizontal head angle (top left) is comparable across conditions, though the standard deviation of this angle (bottom left) trends higher for the haptic tool. The mean vertical head angle (top right) is more elevated for the two haptic tool conditions, even compared to natural vision. The standard deviation of head vertical angle is higher for natural vision than for the three tool-based conditions.
For the visual tool, attention is focused much lower in the environment, in the region of the handheld device. In comparison, participants' head motion is higher and more laterally distributed for the haptic tool, suggesting that their attention is on visual appreciation of the environment, though this trend could also be due to travelling in sub-optimal directions to reach targets. Interestingly, the average visual attention for natural vision and for the visual + haptic tool is similar.

For the purposes of statistical analysis, we calculated the average horizontal and vertical head angle for each trial. Repeated measures ANOVA indicated that the differences in mean horizontal angle (F(3,30) = 1.736, p = 0.181) were not significant across conditions, while the differences in mean vertical angle (F(3,30) = 2.883, p = 0.052) trended toward significance. Given the potential for symmetry in head pose, we considered the standard deviation of each trial to also be a valuable metric, as this illustrates how head pose variance may differ between conditions. Therefore, repeated measures ANOVA was also conducted on the per-trial standard deviation of horizontal (F(3,30) = 2.333, p = 0.094) and vertical (F(3,30) = 8.216, p = 3.87×10⁻⁴) head pose. In this case the differences in vertical head pose across conditions were highly significant.

Independent (unpaired) t-tests were completed for post-hoc comparisons of vertical gaze. As in the motion efficiency analysis, the Bonferroni correction set the alpha value at 0.0083. For mean head pose, no horizontal comparisons were significant, but vertical comparisons showed significance between Natural Vision and the Haptic Tool (t(22) = -4.752, p = 9.612×10⁻⁵) and between the Visual Tool and the Haptic Tool (t(22) = -5.2108, p = 3.167×10⁻⁵). T-tests of standard deviation showed no significant differences for the horizontal comparisons. For vertical comparisons, the pairwise differences between Natural Vision and Visual Tool (t(22) = -1.138, p = 2.615×10⁻⁴), Natural Vision and Haptic Tool (t(22) = 6.7998, p = 7.8428×10⁻⁷) and Natural Vision and Visual Tool (t(22) = 4.342, p = 2.4079×10⁻⁶) were all significant. A complete table of t-tests between all conditions is given in the appendix (Section 10.3).
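The statistical pipeline used throughout Sections 6.1-6.3 (repeated measures ANOVA followed by Bonferroni-corrected pairwise t-tests) was run in MATLAB via the ranova command. The sketch below is a rough Python analogue for a single metric, simplified to one value per participant and condition (unlike the participant/path-combination structure used for movement efficiency above), with synthetic data for illustration only.

```python
from itertools import combinations
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def analyze_metric(df: pd.DataFrame, metric: str) -> None:
    """df holds one row per participant/condition pair with a per-trial-averaged metric.
    Runs a repeated measures ANOVA, then Bonferroni-corrected paired t-tests."""
    res = AnovaRM(df, depvar=metric, subject="participant", within=["condition"]).fit()
    print(res.anova_table)

    conditions = list(df["condition"].unique())
    pairs = list(combinations(conditions, 2))
    alpha = 0.05 / len(pairs)            # 0.05 / 6 = 0.0083 for four conditions
    for a, b in pairs:
        x = df[df["condition"] == a].sort_values("participant")[metric]
        y = df[df["condition"] == b].sort_values("participant")[metric]
        t, p = stats.ttest_rel(x, y)
        print(f"{a} vs {b}: t = {t:.3f}, p = {p:.4g}, significant = {p < alpha}")

if __name__ == "__main__":
    # Synthetic illustration only: 12 participants x 4 conditions.
    rng = np.random.default_rng(0)
    rows = [{"participant": p, "condition": c, "efficiency": rng.normal(0.9, 0.05)}
            for p in range(12) for c in ["natural", "visual", "haptic", "visual+haptic"]]
    analyze_metric(pd.DataFrame(rows), "efficiency")
```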
6.4 Survey Results

A Likert-scale survey was presented to each participant during the breaks between blocks of trials, where each block was associated with a guidance condition. The results of this survey are presented in Table 2. Questions that referred to device use were omitted for the natural vision condition, as there was no device; these cells are marked with a dash.

Table 2: Mean Likert-scale results for the navigation study. A response of 1 corresponds to Strongly Disagree while 5 corresponds to Strongly Agree.

Question | # | Natural Vision | Visual Tool | Haptic Tool | Visual + Haptic | Std dev
I understood what I was supposed to do | 1 | 4.83 | 4.92 | 4.50 | 4.83 | 0.16
Using the device was confusing | 2 | – | 1.42 | 1.50 | 1.33 | 0.07
I found the experiment easy | 3 | 4.75 | 4.75 | 4.25 | 4.33 | 0.23
I found the experiment physically tiring | 4 | 1.33 | 1.67 | 1.92 | 1.42 | 0.23
I found the experiment mentally tiring | 5 | 1.17 | 1.58 | 2.00 | 1.42 | 0.30
I found the experiment boring | 6 | 2.08 | 1.83 | 1.42 | 1.58 | 0.25
Left/right was easy to interpret | 7 | – | 4.75 | 4.58 | 4.75 | 0.08
Forward/backward was easy to interpret | 8 | – | 3.83 | 3.50 | 3.75 | 0.14
Combined instructions were easy to interpret | 9 | – | 4.17 | 3.75 | 4.00 | 0.17
I enjoyed using the device | 10 | – | 4.25 | 4.33 | 4.33 | 0.04
I found the device annoying | 11 | – | 1.58 | 1.50 | 1.58 | 0.04
I felt I could trust the device | 12 | – | 4.33 | 4.50 | 4.33 | 0.08
I felt like the instructions were precise | 13 | – | 4.25 | 4.08 | 4.42 | 0.14
I would like to try being guided while walking | 14 | – | 3.75 | 4.00 | 4.17 | 0.17
I feel like it could guide me in an urban situation | 15 | – | 3.92 | 3.67 | 3.83 | 0.10

Once again, the opinions of the different modalities did not vary greatly for most of the questions, with mean responses falling on the favorable side of each statement. The standard deviation column highlights the spread of results for each of the questions. The highest standard deviations, and therefore the largest differences of opinion, came from questions 3-6. One finding is that users found the experiment easiest with natural vision and the visual tool. Though users disagreed that any of the modalities were physically or mentally tiring, this disagreement was to a lesser degree with the haptic tool than with the other modalities. The experiment was considered least boring with the haptic device, and users enjoyed the haptic and haptic + vision tools marginally more than the visual tool. Users also stated that they had marginally more trust in the haptic device. Users found the left/right commands easier to interpret than forward/backward commands for all conditions. The forward/backward commands were considered more difficult for the haptic tool. A preference was given to being guided by the visual + haptic tool, with least preference given to the visual tool, possibly due to the noted visual distraction from the environment. Contrary to this theory, the visual tool was most preferable for being guided in an urban situation, with the haptic device being least preferable.

As in the perceptual study, participants were invited to leave comments on their experience of completing the study with the different modalities. Participants had few comments on the natural vision modality. One participant commented that they found this modality very easy because "I could predict the destination for the dog & teleport almost simultaneously. The only minor challenge was when the dog was not in my field of view, I had to look around."

For the visual tool, comments reflected the observations of head direction and attention in the previous section, e.g., "I found that I spent the whole trial just watching the arrow. I only looked up to see my surroundings once or twice. I missed out on the nice VR world." Another user described anxiety from being focused on the visual device: "Visual cues are a better solution to be guided for a short time but they require me to always look at the device. This makes me stressful and difficult to be relaxed." Finally, "I do not like that I had to look down on the arrows constantly. I never looked for my surroundings and only focused on the arrow."

The haptic tool provided more mixed commentary. On the positive side we have the following statements: "Depending solely on haptics made me less confused, compared to depending both" and "It was easier to find the invisible dog with this device than just looking for the dog visually." Two users questioned the usefulness of the extension DOF, which relates to distance from the user to the dog: "I found forward / backwards guide to be very hard to interpret" and "I found the fwd/bwd instructions confusing. I would have preferred having only the left/right information."
These comments oppose the usefulness of distance representation, as determined in [45]. This difference of opinion could be due to individual opinion, the method of user movement (teleportation vs. standard walking) or the S-BAN being more difficult to interpret than the Animotus device that was used in [45]. One interesting user comment related to the presentation of information in the study: I'd prefer a less precise device that doesn't need me to think as I walk since I mostly care whether I should turn into the street on the left or right instead of knowing the exact angle . Here the user appears to be referring to the guidance given by Google Maps or in car GPS signals, where instructions are given on which turns to make on urban streets. In the open environment of our virtual world, such geographical constraints do not exist and so cannot be used for navigation. It is an interesting avenue of future investigation to consider how well the S-BAN would work in a scenario of discrete turn instructions rather than continuous position- correction updates. The final condition, of the combined visual and haptic tool, seemed to provide a variety of opinions, with some echoes of previous conditions. This included comments about visual attention: I noticed the green space and where I was teleporting much less since I was looking at my right hand all the time. and The arrow threw me off so I mainly just used the haptic device. There were further comments on the benefits of the forward/backwards motion: Only with this experiment I really understood the meaning of the backwards / forwards direction .... I think that left/right direction can do the job alone. and Forw ard backward movement of the device was noticed but not used to determine the dog's location . One subject decomposed the distance and rotation stimulus into the two available modalities: The combination of both methods was pretty easy to understand. Even though, for left/right movement, I focused more on the arrow than the haptics. For forwards/backwards, it was the other way around. ACM Trans. Comput.-Hum. Interact. 7 DISCUSSION The two presented studies have evaluated the S-BAN, a handheld device that utilizes a shape-changing body to communicate spatial guidance commands. Haptic shape-changing interfaces are currently rare in the HCI literature, but we consider this technology to have great unexplored potential with notable benefits over alternative modalities. This new shape-changing device has an ergonomic form factor that fits comfortably in a wide range of user hands. Its novel parallel actuation and continuous workspace enable it to represent a direction vector through a variety of kinematic mappings to the extension and rotation degrees of freedom. Neither of these features was present in previous 2-DOF shape-changing interfaces such as the Animotus [45,46], The results of the perceptual experiments showed that user sensitivity to the S-BANs shape is non- uniform over the device workspace. Different kinematic modes led to different regions of high and low absolute-device-pose and relative-device-motion errors. Device users found isolated extensions and rotations of the S-BAN easier to interpret than combined poses, suggesting that device-design improvements are still required to achieve more uniform salience across all poses of the workspace. There seemed to be greater spatial differences between the two kinematic approaches in the relative motion experiments than in the absolute pose experiment. 
Additionally, the lowest average error was observed in the relative motion experiment with the mid-point kinematic mode. These findings imply that designers of applications of shape-changing technologies must consider whether they want users to infer information from device shape or change in device shape. An example absolute pose use case is when we may expect a user to pause in their navigation (e.g., to talk to someone or wait for a gap in traffic). Here the device could hold a pose in a static sense, so that the next guidance command is ready when navigation resumes. For relative motion communication scenarios, we may be using the device to present rapidly updating guidance information, such as helping a VI user stay on a meandering footpath. Conducting virtual navigation experiments permitted us to study the S-BAN in embodied navigation but with increased control over experimental parameters (such as feedback modality) and less localization noise than with GPS-based outdoor experiments [44]. It is worth noting, however, that the selected virtual reality environment may not be a perfect substitute for embodied real-world navigation experiments, due to the artificial method of moving through the world using joystick controls rather than walking. Nevertheless, the study has provided valuable quantitative data that compare shape-changing haptic feedback to other navigation methods under identical conditions. Though the results (Fig 15) showed that the shape-changing haptic feedback led to lower motion efficiency and longer trial times, this was in comparison to visual modalities (natural vision or the smartphone proxy), which were already familiar to the participants, all of whom were sighted. The benefit of the haptic feedback was demonstrated in analysis of user head pose, which illustrated that more attention was spent on the environment than on the navigation device. This difference could be connected to safety in future studies by including hazards that the user must avoid while traveling to the target. The combination of visual and haptic tools provides an interesting use case of a smartphone enhanced by shape-changing cues, as described in Section 1.1. and Figure 3. ACM Trans. Comput.-Hum. Interact. The results of both studies show that the S-BAN has promise as a non-visual navigation tool but is likely to require more optimization, increased familiarization time and, most importantly, a comparison to other haptic feedback modalities (including vibrotactile) in future navigation studies. 8 CONCLUSION This work has presented and validated the S-BAN, a new shape-changing handheld haptic interface intended for representing spatial data with low attentional demands, compared to screen-based navigation tools and more common haptic interfaces. The S-BANs parallel kinematic structure provides it with a continuous shape-changing workspace in a compact form. The continuous workspace provides flexibility to represent data through various kinematic schemes, two of which have been proposed and detailed in this work. Little prior data exist on how humans are able to perceive dynamic shapes via touch, particularly from devices like the S-BAN. This knowledge gap led us to carry out a perceptual study that revealed that participants were better at perceiving relative motion of the shape-changing interface, as opposed to absolute pose. Furthermore, user sensitivity was highest for poses and motions along the cardinal directions from the home pose. 
The kinematic method that focused on overall shape of the S-BAN produced higher accuracy than the mode that focused on the tactile feeling of the notch features. User opinions highlighted that while general pose and motion of the device were easily perceived, specifics were harder to determine. To test the ability of the S-BAN to provide spatial guidance in an embodied scenario, and to compare this to visual guidance cues, we undertook a navigation study in a VR environment, which permitted greater reliability and flexibility than outdoor studies had demonstrated in similar prior work [44]. User motion efficiency was non-significant between any pair of conditions, implying equivalence between using the haptic device and a visual tool. However, significant differences were observed in the time taken to complete the trials between all four conditions (with the S-BAN leading to the slowest trials). This slower pace would perhaps be improved as users become more familiar with the device. Finally, we observed significant differences in user head-elevation during the navigation study, with users of the haptic device having their heads more elevated during the study. As user comments reflected, the haptic device enabled them to look at the world around them, rather than staring at a tool in their hand. We expect that such head posture would result in higher safety for sighted users and greater appreciation for the ambient environment around the user. The findings of this paper are beneficial for understanding the perception of shape-changing haptic systems and their potential for use in spatial navigation. Indeed, the results are encouraging and lead to the next logical steps of moving from the highly controlled VR environment to more realistic outdoor trials, subject to additional localization noise and environmental distractions and hazards. We plan to begin these trials with sighted participants, to allow comparison to visual guidance cues. We also aim to increase training times and improve the training method to see if these modifications lead to faster trial ACM Trans. Comput.-Hum. Interact. completion with haptic device. Our already observed strong reduction in visual tool focus with the haptic feedback suggests good suitability of the S-BAN device for vision-impaired users; this hypothesis will be explored in future work. Finally, we believe that the perceptual methods presented here will allow better evaluation of shape-changing or multi-dimensional haptic technologies in the future. We hope that our open-sourcing of the S-BAN hardware and code may lead to further improvements in the S-BANs form and kinematic modes. REFERENCES [1] Jason Alexander, Anne Roudaut, Jürgen Steimle, Kasper Hornbæk, Miguel Bruns Alonso, Sean Follmer, and Timothy Merritt. 2018. Grand Challenges in Shape-Changing Interface Research. Conf. Hum. Factors Comput. Syst. - Proc. 2018-April, (2018), 1–14. DOI:https://doi.org/10.1145/3173574.3173873 [2] Erica N. Barin, Cory M. McLaughlin, Mina W. Farag, Aaron R. Jensen, Jeffrey S. Upperman, and Helen Arbogast. 2018. Heads Up, Phones Down: A Pedestrian Safety Intervention on Distracted Crosswalk Behavior. J. Community Health 43, 4 (2018), 810–815. DOI:https://doi.org/10.1007/s10900- 018-0488-y [3] Elaine A Biddiss and Tom T Chau. 2007. Upper Limb Prosthesis Use and Abandonment: A Survey of the Last 25 Years. Prosthet. Orthot. Int. 31, 3 (September 2007), 236–57. DOI:https://doi.org/10.1080/03093640600994581 [4] Alberto Boem, Yuuki Enzaki, Hiroaki Yano, and Hiroo Iwata. 
9 APPENDIX

9.1 Device Kinematics

Here we detail the formulation of the kinematic modes used for S-BAN control, via the notation presented in Fig. 18.

Figure 18: Parallel kinematic structure of the S-BAN's actuation mechanism, with the annotation used for inverse kinematics calculations.

9.1.1 Mid-Point Inverse Kinematics

For Mid-Point kinematics, the target angle and target extension relate to the mid-point between the tactile notches of the end effector. This control point is marked as MP in Fig. 18. First, we calculate the distance and angle between the left actuator's tip (P_L) and MP in the frame of the end effector. These are constants.

(1)

T is the vertical (y) distance between the target extension of the mid-point and the left actuator tip (P_L); T allows us to calculate the left target actuator extension.

(2)

The end point of the right actuator (P_R) will always lie on an arc of radius L_EB centered at the end point of Actuator L. This constraint enables determination of the right actuator's extension using the y component of P_R. Note that this formulation neglects the slight rotation of Actuator R about its base for simplicity.

(3)

9.1.2 Leading-Notch Inverse Kinematics

The Leading-Notch kinematic mode switches the control point between three points (NL, the left notch; MP, the mid-point; and NR, the right notch) depending on the region of the workspace being explored, as labeled in Fig. 18. If the target angle is zero, the Mid-Point kinematic control from the previous section is used. If the target angle is negative, the left notch (NL) is the control point, whereas if it is positive, the right notch (NR) is the control point. As in the Mid-Point case, we begin by calculating the distance and angle from P_L to the control point to determine the left actuator extension. For the left-notch target, the following applies.

(6)

The right-notch condition leads to the following equations, where the line connecting P_L to NR makes the angle ω.

(7)

In both cases, the right actuator extension is determined using the equations in (3).
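As a concrete illustration of the procedure described above, the minimal Python sketch below computes the two actuator extensions for a target angle and extension: a control point along the end-effector bar is selected, its pivot-induced vertical offset from P_L is subtracted from the target extension to obtain the left actuator extension, and the right actuator extension then follows from the arc constraint. The constants and names (D_MP, D_NL, D_NR, L_EB) and the simplified geometry are assumptions chosen for illustration; they are not the equations published in (1)-(7).

import math

# Illustrative sketch only: all dimensions below are assumed values chosen to
# show the structure of the mid-point and leading-notch inverse kinematics.

L_EB = 30.0   # assumed distance between the two actuator attachment points on the end-effector bar (mm)
D_MP = 15.0   # assumed signed distance from P_L to the mid-point MP along the bar (mm)
D_NL = 5.0    # assumed signed distance from P_L to the left notch NL (mm)
D_NR = 25.0   # assumed signed distance from P_L to the right notch NR (mm)


def _ik_for_control_point(d_cp, theta, extension):
    # Vertical (y) offset T of the chosen control point above P_L once the bar pivots by theta.
    T = d_cp * math.sin(theta)
    e_left = extension - T                    # left actuator sets the y position of P_L
    # P_R lies on an arc of radius L_EB around P_L; using only its y component
    # (neglecting the slight rotation of Actuator R about its base):
    e_right = e_left + L_EB * math.sin(theta)
    return e_left, e_right


def midpoint_ik(theta, extension):
    """Mid-Point mode: MP is always the control point."""
    return _ik_for_control_point(D_MP, theta, extension)


def leading_notch_ik(theta, extension):
    """Leading-Notch mode: the control point switches with the sign of the target angle."""
    if theta < 0.0:
        return _ik_for_control_point(D_NL, theta, extension)
    if theta > 0.0:
        return _ik_for_control_point(D_NR, theta, extension)
    return midpoint_ik(theta, extension)

For example, leading_notch_ik(math.radians(10), 20.0) selects NR as the control point and returns distinct extensions for the left and right actuators, producing the combined pivot and extension of the end effector.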
9.2 Movement Efficiency Results

The movement efficiency of each user and trial in the VR navigation experiment (Sections 5 and 6) is illustrated in Fig. 19.

Figure 19: Movement efficiency for each trial by the 12 participants in the VR study.

9.3 Gaze T-Test Comparisons

Table 3 details the T-test comparisons related to user head motion, as discussed in Section 6.3 and illustrated in Fig. 17.

Table 3: T-test comparisons of user head pose for the different device conditions. Values significant after Bonferroni correction (p < 0.0083) are marked with an asterisk.

T-Test Comparison                        Vertical Mean   Horizontal Mean   Vertical STD   Horizontal STD
Natural Vision / Visual Tool             0.042           0.171             2.62E-04*      0.308
Natural Vision / Haptic Tool             9.61E-05*       0.267             7.84E-07*      0.015
Natural Vision / Visual + Haptic Tool    0.590           0.603             2.41E-06*      0.080
Visual Tool / Haptic Tool                3.17E-05*       0.760             0.198          0.124
Visual Tool / Visual + Haptic Tool       0.052           0.262             0.389          0.562
Haptic Tool / Visual + Haptic Tool       0.024           0.362             0.484          0.215
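For readers who wish to reproduce this style of analysis on their own data, the short sketch below shows one plausible way, using SciPy, to run the six pairwise comparisons against a Bonferroni-corrected threshold of 0.05 / 6 ≈ 0.0083. This is not the authors' analysis code; the paired-test choice and the data layout are assumptions.

from itertools import combinations
from scipy import stats

# Sketch of pairwise condition comparisons with a Bonferroni-corrected threshold,
# in the spirit of Table 3. Assumes one summary value (e.g., mean vertical head
# angle) per participant per condition.

ALPHA = 0.05

def pairwise_ttests(per_condition):
    """per_condition: dict mapping condition name -> list of per-participant values."""
    pairs = list(combinations(per_condition.keys(), 2))
    threshold = ALPHA / len(pairs)  # e.g., 0.05 / 6 = 0.0083 for four conditions
    results = {}
    for a, b in pairs:
        t_stat, p_value = stats.ttest_rel(per_condition[a], per_condition[b])  # paired t-test (assumed)
        results[(a, b)] = {"p": p_value, "significant": p_value < threshold}
    return results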
