
Hello!

I'm Carmine Elvezio

I'm a PhD Student at the Computer Graphics and User Interfaces Lab at Columbia University, studying AR/VR/MR and 3D graphics, interactions and visualization techniques, under the supervision of Prof. Steven Feiner.

I develop 3D systems across several domains including medicine, remote maintenance, space, music, and rehabilitation, working with many technologies including Microsoft HoloLens, Oculus Rift, Unity, Unreal, and OpenGL. I've worked on projects sponsored by or affiliated with Microsoft, Google, Samsung, Canon, and NASA, amongst others! I've also contributed to a number of open source frameworks, including MercuryMessaging, GoblinXNA, and CGUI Utilities.

I've managed the CGUI Lab at Columbia with Prof. Feiner since 2013, where I've advised 150+ independent research projects, assisted in teaching the 4172 3D User Interfaces course, and participated in many multi-disciplinary university initiatives.





Education

2019-now
Columbia University

Ph.D. in Computer Science

2010-2012
Columbia University

M.Sc. in Computer Science

2006-2010
NYU - Tandon School of Engineering

B.S. in Computer Science

Experience

2013-2019
Columbia University

Research Staff Associate

2012
ARChemist

Server Engineer

2009-2010

Projects

A list of some of my recent projects and experimental systems. Most link to project pages when available; click on the headers to see the full list of collaborators, sponsors, and additional material!


With Katharina Krösl et al.

The first medically-informed, pilot-studied simulation of cataracts and other impairments in eye-tracked augmented reality (AR).


With Jen-Shuo Liu et al.

Visualizing and interacting with social and municipal urban data in a collaborative AR/VR multi-user system.


With Frank Ling et al.

Two collaborating users simultaneously manipulate physically modelled virtual objects in a remote rehabilitative game.


With Shalva Kohen, Steven Feiner

A novel system that combines AR, smartphones, and tablets to allow performers to manage and annotate virtual sheet music.


With Jen-Shuo Liu et al.

Showing how graphical precueing can be used to indicate trajectories + improve performance in a series of path-following tasks.


With Sara Samuel et al.

Guiding dentistry students in learning to complete complex dental procedures, such as novocaine injection, in AR.


With Zichuan Wang et al.

An AR/VR system that presents notifications of real-world objects and points of interest when the user gets near.


With Frank Ling et al.

A wide-area outdoor wearable AR system that uses RTK GNSS tracking to georegister maps from a SLAM tracking system.


With Mengu Sukan et al.

A framework and encompassing toolkit to facilitate nonspatial communication in the Unity game engine.


With Benjamin Carlin et al.

An AR smartphone app that helps in diagnosing Executive Action Impairment in patients who have had traumatic brain injuries.


With Katharina Krösl et al.

A parameterizable VR system allowing for the simulation of cataracts to help ophthalmologists and patients better understand symptoms.


With Pierre Amelot et al.

An AR system combining HoloLens, LeapMotion, and touch screens to create a new way to search, organize and enjoy music.


With Hiroshi Furuya et al.

Helping NASA astronauts complete stowage loading, unloading, and packing tasks on the ISS in AR.


With Mengu Sukan et al.

A remote subject-matter expert views a scene in VR to create referential instructions for a technician using AR.


With Changmin Seo et al.

Exploring AR therapeutic options for Parkinson's patients who experience freezing-of-gait symptoms.


With Samantha Siu et al.

A mobile AR application for editing, placing (in-situ), and sharing annotations created in the Making and Knowing project.


With Mengu Sukan et al.

A light-weight toolkit for Unity facilitating the creation of modular 3D widgets for XR applications and user studies.


With Shirin Sadri et al.

A hands-free AR guidance system for vascular interventions on the Microsoft HoloLens, controlled by head motion and speech.


With Mengu Sukan et al.

Allowing users to pre-orient before teleporting in a virtual scene, using a World-in-Miniature and preview.


With Mengu Sukan et al.

Novel visualization approaches across different paradigms for guiding unconstrained manual 3DoF rotation.


With Ohan Oda et al.

Experts create and use virtual replicas of physical objects to remotely guide local users performing tasks.


With Mengu Sukan et al.

Novel geometric construct representing a range of strategic view poses for optimal viewing.


With Nicholas Dedual et al.

Visualizing a city, and associated data from Yelp, Foursquare, etc., in AR on a multi-touch tabletop display.


With Jon Weisz et al.

A UI combining the speed and convenience of preplanned grasps with the versatility of an online planner.


With Eszter Offertaler et al.

A tracking and control system allowing for direct manipulation of Sphero omni-directional robots, for AR registration.


With Ohan Oda et al.

A post-hoc analysis tool allowing users to visualize and interact with past user-study simulations.


With Mengu Sukan et al.

Integrating hand-tracking information from LeapMotion, Kinect, and PrimeSense for unified, multifaceted tracking!


With Ohan Oda et al.

Extending the GoblinXNA XR Framework to support new devices, visualizations, and user study capabilities.


With Mengu Sukan et al.

A suite of tools to facilitate rapid AR/VR system prototyping, including math, user study, and geometry libraries.


With Mengu Sukan et al.

A full-stack, bi-directional solution for streaming partially or fully-composited AR images over the network.

Publications

List of my papers, demos, posters, and patents.

2020

CatARact: Simulating Cataracts in Augmented Reality

Katharina Krösl, Carmine Elvezio, Laura R. Luidolt, Matthias Hürbe, Sonja Karst, Steven Feiner, Michael Wimmer
IEEE ISMAR 2020, Online (To Appear)
[PDF] [Slides]
For our society to be more inclusive and accessible, the more than 2.2 billion people worldwide with limited vision should be considered more frequently in design decisions, such as architectural planning. To help architects in evaluating their designs and give medical personnel some insight on how patients experience cataracts, we worked with ophthalmologists to develop the first medically-informed, pilot-studied simulation of cataracts in eye-tracked augmented reality (AR). To test our methodology and simulation, we conducted a pilot study with cataract patients between surgeries of their two cataract-affected eyes. Participants compared the vision of their corrected eye, viewing through simulated cataracts, to that of their still affected eye, viewing an unmodified AR view. In addition, we conducted remote experiments via video call, live adjusting our simulation and comparing it to related work, with participants who had cataract surgery a few months before. We present our findings and insights from these experiments and outline avenues for future work.

Systems and methods for augmented reality guidance

Steven Feiner, Gabrielle Loeb, Alon Grinshpoon, Shirin Sadri, Carmine Elvezio
US Patent #16796645
[US Patent]
Certain embodiments include a method for assisting a clinician in performing a medical procedure on a patient using augmented reality guidance. The method can include obtaining a three-dimensional model of an anatomic part of the patient. The method can also include aligning the three-dimensional model with data to form augmented reality guidance for the medical procedure. In addition, the method can include presenting the augmented reality guidance to the clinician during the medical procedure using an augmented reality three-dimensional display.

XREye: Simulating Visual Impairments in Eye-Tracked XR

Katharina Krösl, Carmine Elvezio, Matthias Hürbe, Sonja Karst, Steven Feiner, Michael Wimmer
IEEE VR 2020
[IEEE Xplore]
Many people suffer from visual impairments, which can be difficult for patients to describe and others to visualize. To aid in understanding what people with visual impairments experience, we demonstrate a set of medically informed simulations in eye-tracked XR of several common conditions that affect visual perception: refractive errors (myopia, hyperopia, and presbyopia), cornea disease, and age-related macular degeneration (wet and dry).


2019

Manipulating 3D Anatomic Models in Augmented Reality: Comparing a Hands-Free Approach and a Manual Approach

Shirin Sadri, Shalva Kohen, Carmine Elvezio, Shawn Sun, Alon Grinshpoon, Gabrielle Loeb, Naomi Basu, Steven Feiner
IEEE ISMAR 2019
[IEEE Xplore]
Many AR and VR task domains involve manipulating virtual objects; for example, to perform 3D geometric transformations. These operations are typically accomplished with tracked hands or hand-held controllers. However, there are some activities in which the user's hands are already busy with another task, requiring the user to temporarily stop what they are doing to perform the second task, while also taking time to disengage and reengage with the original task (e.g., putting down and picking up tools). To avoid the need to overload the user's hands this way in an AR system for guiding a physician performing a surgical procedure, we developed a hands-free approach to performing 3D transformations on patient-specific virtual organ models. Our approach uses small head motions to accomplish first-order and zero-order control, in conjunction with voice commands to establish the type of transformation. To show the effectiveness of this approach for translating, scaling, and rotating 3D virtual models, we conducted a within-subject study comparing the hands-free approach with one based on conventional manual techniques, both running on a Microsoft HoloLens and using the same voice commands to specify transformation type. Independent of any additional time to transition between tasks, users were significantly faster overall using the hands-free approach, significantly faster for hands-free translation and scaling, and faster (although not significantly) for hands-free rotation.

ICthroughVR: Illuminating cataracts through virtual reality

Katharina Krösl, Carmine Elvezio, Michael Wimmer, Matthias Hürbe, Steven Feiner, Sonja Karst
IEEE VR 2019
[IEEE Xplore]
Vision impairments, such as cataracts, affect the way many people interact with their environment, yet are rarely considered by architects and lighting designers because of a lack of design tools. To address this, we present a method to simulate vision impairments, in particular cataracts, graphically in virtual reality (VR), using eye tracking for gaze-dependent effects. We also conduct a VR user study to investigate the effects of lighting on visual perception for users with cataracts. In contrast to existing approaches, which mostly provide only simplified simulations and are primarily targeted at educational or demonstrative purposes, we account for the user's vision and the hardware constraints of the VR headset. This makes it possible to calibrate our cataract simulation to the same level of degraded vision for all participants. Our study results show that we are able to calibrate the vision of all our participants to a similar level of impairment, that maximum recognition distances for escape route signs with simulated cataracts are significantly smaller than without, and that luminaires visible in the field of view are perceived as especially disturbing due to the glare effects they create. In addition, the results show that our realistic simulation increases the understanding of how people with cataracts see and could therefore also be informative for health care personnel or relatives of cataract patients.

A Hybrid RTK GNSS and SLAM Outdoor Augmented Reality System

Frank Fong Ling*, Carmine Elvezio*, Jacob Bullock, Steve Henderson, Steven Feiner
IEEE VR 2019
[IEEE Xplore]
In the real world, we are surrounded by potentially important data. For example, military personnel and first responders may need to understand the layout of an environment, including the locations of designated assets, specified in latitude and longitude. However, many augmented reality (AR) systems cannot associate absolute geographic coordinates with the coordinate system in which they track. We describe a simple approach for developing a wide-area outdoor wearable AR system that uses RTK GNSS position tracking to align together and georegister multiple smaller maps from an existing SLAM tracking system.
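The core of the georegistration step the abstract describes is a map-alignment problem: given points tracked in SLAM map coordinates and the same points in geographic coordinates (converted to a local East-North plane from RTK GNSS fixes), fit the rigid transform relating the two frames. The sketch below is illustrative only, not the system's actual code, and the point values are fabricated; it fits a 2D rotation and translation in the least-squares sense.

```python
# Illustrative sketch (not the published system's code): fit the 2D rigid
# transform (rotation + translation) mapping SLAM map coordinates to local
# East-North coordinates derived from RTK GNSS fixes.
from math import atan2, cos, sin

def fit_rigid_2d(slam_pts, enu_pts):
    """Least-squares rotation angle and translation such that enu ≈ R(slam) + t."""
    n = len(slam_pts)
    cpx = sum(p[0] for p in slam_pts) / n      # SLAM centroid
    cpy = sum(p[1] for p in slam_pts) / n
    cqx = sum(q[0] for q in enu_pts) / n       # ENU centroid
    cqy = sum(q[1] for q in enu_pts) / n
    s_cross = s_dot = 0.0
    for (px, py), (qx, qy) in zip(slam_pts, enu_pts):
        px, py, qx, qy = px - cpx, py - cpy, qx - cqx, qy - cqy
        s_cross += px * qy - py * qx           # sine component of the angle
        s_dot += px * qx + py * qy             # cosine component
    theta = atan2(s_cross, s_dot)
    tx = cqx - (cos(theta) * cpx - sin(theta) * cpy)
    ty = cqy - (sin(theta) * cpx + cos(theta) * cpy)
    return theta, (tx, ty)

# Fabricated correspondences: the ENU points are the SLAM points rotated
# 90 degrees and shifted by (10, 5), so the fit recovers exactly that.
theta, (tx, ty) = fit_rigid_2d([(0, 0), (1, 0), (0, 2)],
                               [(10, 5), (10, 6), (8, 5)])
```

With three or more non-collinear correspondences the transform is fully determined; the same fit, applied per SLAM submap, is one way to "align together and georegister multiple smaller maps" as described.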


2018

Augmented reality guidance for cerebral embolic protection (CEP) with the sentinel device during transcatheter aortic valve replacement (TAVR): First-in-human study

Shirin Sadri, Gabrielle Loeb, Alon Grinshpoon, Carmine Elvezio, Poonam Velagapudi, Vivian G Ng, Omar Khalique, Jeffrey W Moses, Robert J Sommer, Amisha J Patel, Isaac George, Rebecca T Hahn, Martin B Leon, Ajay J Kirtane, Tamim M Nazif, Susheel K Kodali, Steven K Feiner, Torsten P Vahl
Circulation 2019
[AHA Circulation]
Augmented reality (AR) can improve transcatheter interventions by providing 3D visualization of patient anatomy, with potential to reduce contrast and procedure time. We present a novel AR guidance system that displays virtual, patient-specific 3D anatomic models and assess intraprocedural impact during TAVR with CEP in six patients.

A Comparative Ground Study of Prototype Augmented Reality Task Guidance for International Space Station Stowage Operations

Hiroshi Furuya, Lui Wang, Carmine Elvezio, Steven Feiner
International Astronautical Congress 2018
[Research Gate]
Astronauts currently require extensive, near-instantaneous guidance and instruction by ground-based crew to efficiently and successfully conduct flight operations. As missions take astronauts farther away from earth and real-time communication between spacecraft and earthbound crew becomes impossible, astronauts will need technology that can help them execute flight operations with limited support. While research has shown that Augmented Reality (AR) can feasibly perform as an aid for completing certain flight tasks, there is little evidence that AR can assist in completing entire flight operations or improve flight performance metrics such as completion time. This work addresses stowage operations to investigate how AR can impact flight performance. During stowage operations, flight crew members transfer cargo items to and from different spacecraft modules. A recent stowage operation aboard the International Space Station (ISS) took 60 hours to complete with real-time ground crew support. The prolonged duration of stowage operations and the necessity for crewmembers to travel significant distances make it an appropriate domain for this investigation. StowageApp is a prototype AR application deployed on Microsoft HoloLens, and developed to assist astronauts in completing stowage operations. This paper describes the design of StowageApp and presents the results of a user study comparing its performance to that of the current method of delivering stowage instructions on a handheld tablet device. This within-subject user study was performed in the ISS Node 2 Harmony, Japanese Experiment Module "Kibo," Multi-Purpose Logistics Module "Leonardo," and Columbus mockups at National Aeronautics and Space Administration (NASA) Johnson Space Center in Houston, TX, USA. Each participant completed as many of a set of predetermined stowage tasks as they could in two hours in their assigned condition. Task completion time was measured, along with the number of errors attributed to the participant. Participants also completed an unweighted NASA TLX survey and provided their opinions in a free-form exit interview. Results did not reveal significant differences in task completion time, errors committed, or TLX responses between cargo message content conveyed via StowageApp and via electronic document on a tablet handheld device. However, user interviews showed that all but one participant would prefer to use StowageApp over the handheld device.

Hybrid UIs for Music Exploration in AR and VR

Carmine Elvezio, Pierre Amelot, Robert Boyle, Catherine Ilona Wes, Steven Feiner
IEEE ISMAR 2018
[IEEE Xplore]
We present hybrid user interfaces that facilitate interaction with music content in 3D, using a combination of 2D and 3D input and display devices. Participants will explore an online music library, some wearing AR or VR head-worn displays used alone or in conjunction with touch screens, and others using only touch screens. They will select genres, artists, albums and songs, interacting through a combination of 3D hand-tracking and 2D multi-touch technologies.

Collaborative virtual reality for low-latency interaction

Carmine Elvezio, Frank Ling, Jen-Shuo Liu, Steven Feiner
ACM UIST 2018
[ACM DL]
In collaborative virtual environments, users must often perform tasks requiring coordinated action between multiple parties. Some cases are symmetric, in which users work together on equal footing, while others are asymmetric, in which one user may have more experience or capabilities than another (e.g., one may guide another in completing a task). We present a multi-user virtual reality system that supports interactions of both these types. Two collaborating users, whether co-located or remote, simultaneously manipulate the same virtual objects in a physics simulation, in tasks that require low latency networking to perform successfully. We are currently applying this approach to motor rehabilitation, in which a therapist and patient work together.

Hands-free augmented reality for vascular interventions

Alon Grinshpoon, Shirin Sadri, Gabrielle J Loeb, Carmine Elvezio, Samantha Siu, Steven K Feiner
ACM SIGGRAPH 2018
[ACM DL]
During a vascular intervention (a type of minimally invasive surgical procedure), physicians maneuver catheters and wires through a patient's blood vessels to reach a desired location in the body. Since the relevant anatomy is typically not directly visible in these procedures, virtual reality and augmented reality systems have been developed to assist in 3D navigation. Because both of a physician's hands may already be occupied, we developed an augmented reality system supporting hands-free interaction techniques that use voice and head tracking to enable the physician to interact with 3D virtual content on a head-worn display while leaving both hands available intraoperatively. We demonstrate how a virtual 3D anatomical model can be rotated and scaled using small head rotations through first-order (rate) control, and can be rigidly coupled to the head for combined translation and rotation through zero-order control. This enables easy manipulation of a model while it stays close to the center of the physician's field of view.
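The distinction between the two control orders in the abstract can be sketched in a few lines: under first-order (rate) control a small head offset sets a rotation *velocity* for the model, while under zero-order control the model is rigidly coupled to head motion. The gain and dead-zone values below are assumptions for illustration, not the published system's parameters.

```python
# Illustrative sketch of first-order (rate) vs. zero-order head control.
# DEAD_ZONE_DEG and GAIN are assumed values, not the system's parameters.
DEAD_ZONE_DEG = 2.0   # ignore small head jitter around the neutral pose
GAIN = 5.0            # model degrees/sec per head-degree beyond the dead zone

def first_order_step(model_yaw, head_offset_deg, dt):
    """Rate control: head offset from a neutral pose sets rotation speed."""
    if abs(head_offset_deg) <= DEAD_ZONE_DEG:
        return model_yaw                          # hold still inside dead zone
    sign = 1.0 if head_offset_deg > 0 else -1.0
    excess = head_offset_deg - sign * DEAD_ZONE_DEG
    return model_yaw + GAIN * excess * dt         # integrate velocity per frame

def zero_order_step(model_yaw, head_delta_deg):
    """Zero-order control: the model rotates rigidly with the head."""
    return model_yaw + head_delta_deg
```

A nice property of the rate-control form is that returning the head to the dead zone stops the model, so sustained rotations need only a small, held head offset rather than large head movements.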

Augmented reality task guidance for international space station stowage operations

Hiroshi Furuya, Lui Wang, Carmine Elvezio, Steven Feiner
ACM SIGGRAPH 2018
[ACM DL]
Built at NASA Johnson Space Center (JSC) and Columbia University and tested in JSC's full-scale mockup of the International Space Station (ISS), StowageApp is a prototype for the future of conducting cargo operations in space. StowageApp dynamically guides astronauts as they complete stowage tasks, packing and unpacking cargo.

Collaborative exploration of urban data in virtual and augmented reality

Carmine Elvezio, Frank Ling, Jen-Shuo Liu, Barbara Tversky, Steven Feiner
ACM SIGGRAPH 2018
[ACM DL]
From emergency planning to real estate, many domains can benefit from collaborative exploration of urban environments in VR and AR. We have created an interactive experience that allows multiple users to explore live datasets in context of an immersive scale model of the urban environment with which they are related.

Mercury: A messaging framework for modular ui components

Carmine Elvezio, Mengu Sukan, Steven Feiner
ACM CHI 2018
[ACM DL] [Project Repo]
In recent years, the entity-component-system pattern has become a fundamental feature of the software architectures of game-development environments such as Unity and Unreal, which are used extensively in developing 3D user interfaces. In these systems, UI components typically respond to events, requiring programmers to write application-specific callback functions. In some cases, components are organized in a hierarchy that is used to propagate events among vertically connected components. When components need to communicate horizontally, programmers must connect those components manually and register/unregister events as needed. Moreover, events and callback signatures may be incompatible, making modular UIs cumbersome to build and share within or across applications. To address these problems, we introduce a messaging framework, Mercury, to facilitate communication among components. We provide an overview of Mercury, outline its underlying protocol and how it propagates messages to responders using relay nodes, describe a reference implementation in Unity, and present example systems built using Mercury to explain its advantages.
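The relay-node idea in the abstract can be illustrated with a toy graph: a message handed to a relay node is delivered to that node's local responders and forwarded along its connections, so senders never need direct references to receivers elsewhere in the hierarchy. This is a minimal sketch with invented names, not the MercuryMessaging API.

```python
# Toy sketch of relay-node message propagation (illustrative names only;
# this is not the MercuryMessaging API). A relay delivers a message to its
# local responders, then forwards it to connected relays, visiting each
# node at most once so cycles in the relay graph are safe.
class RelayNode:
    def __init__(self, name):
        self.name = name
        self.responders = []   # callbacks registered on this node
        self.links = []        # connected relays (horizontal or vertical)

    def connect(self, other):
        self.links.append(other)

    def on_message(self, callback):
        self.responders.append(callback)

    def send(self, message, _seen=None):
        seen = _seen if _seen is not None else set()
        if id(self) in seen:
            return              # already visited: break propagation cycles
        seen.add(id(self))
        for responder in self.responders:
            responder(message)
        for link in self.links:
            link.send(message, seen)
```

Because propagation follows the relay graph rather than the scene hierarchy, two "horizontally" related components can exchange messages without programmers wiring callbacks between them directly.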

Augmented reality guidance for cerebral angiography

G Loeb, S Sadri, A Grinshpoon, J Carroll, C Cooper, C Elvezio, S Mutasa, G Mandigo, S Lavine, J Weintraub, A Einstein, S Feiner, P Meyers
Journal of Vascular and Interventional Radiology 2018
[Research Gate]
Augmented reality (AR) holds great potential for IR by integrating virtual 3D anatomic models into the real world. In this pilot study, we developed an AR guidance system for cerebral angiography, evaluated its impact on radiation, contrast, and fluoroscopy time, and assessed physician response.

Hands-free interaction for augmented reality in vascular interventions

Alon Grinshpoon, Shirin Sadri, Gabrielle J Loeb, Carmine Elvezio, Steven K Feiner
IEEE VR 2018
[IEEE Xplore]
Vascular interventions are minimally invasive surgical procedures in which a physician navigates a catheter through a patient's vasculature to a desired destination in the patient's body. Since perception of relevant patient anatomy is limited in procedures of this sort, virtual reality and augmented reality systems have been developed to assist in 3D navigation. These systems often require user interaction, yet both of the physician's hands may already be busy performing the procedure. To address this need, we demonstrate hands-free interaction techniques that use voice and head tracking to allow the physician to interact with 3D virtual content on a head-worn display while making both hands available intraoperatively. Our approach supports rotation and scaling of 3D anatomical models that appear to reside in the surrounding environment through small head rotations using first-order control, and rigid body transformation of those models using zero-order control. This allows the physician to easily manipulate a model while it stays close to the center of their field of view.


2017

Remote collaboration in AR and VR using virtual replicas

Carmine Elvezio, Mengu Sukan, Ohan Oda, Steven Feiner, Barbara Tversky
ACM SIGGRAPH 2017
[ACM DL]
In many complex tasks, a remote subject-matter expert may need to assist a local user, to guide their actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We demonstrate an approach that uses Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display (HWD). Our approach allows the remote expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. The remote expert can demonstrate actions in 3D by manipulating virtual replicas, supported by constraints and annotations, and point in 3D to portions of virtual replicas to annotate them.

Travel in large-scale head-worn VR: Pre-oriented teleportation with WIMs and previews

Carmine Elvezio, Mengu Sukan, Steven Feiner, Barbara Tversky
IEEE VR 2017
[IEEE Xplore]
We demonstrate an interaction technique that allows a user to point at a world-in-miniature representation of a city-scale virtual environment and perform efficient and precise teleportation by pre-orienting an avatar. A preview of the post-teleport view of the full-scale virtual environment updates interactively as the user adjusts the position, yaw, and pitch of the avatar's head with a pair of 6DoF-tracked controllers. We describe design decisions and contrast with alternative approaches to virtual travel.


2016

Systems and methods for providing assistance for manipulating objects using virtual proxies and virtual replicas

Carmine Elvezio, Mengu Sukan, Ohan Oda, Steven Feiner, Barbara Tversky
US Patent #15146764
[US Patent]
Systems and methods for providing assistance for manipulating objects using virtual proxies and virtual replicas are provided. Disclosed systems enable a remote or co-located subject matter expert (SME) to provide guidance to a local user on how to manipulate physical or virtual objects in the local user's environment. The local user views an augmented reality (AR) display of the local environment. The SME views a virtual reality (VR) or AR display of the local environment. The SME manipulates virtual proxies and virtual replicas to demonstrate to the local user how physical or virtual objects in the local environment should be manipulated. In other scenarios, a local user is provided instruction by using a display from which the local user can view the local environment. Manipulations of objects are tracked and the system provides the user feedback on whether the objects are properly oriented.

Providing assistance for orienting 3D objects using monocular eyewear

Mengu Sukan, Carmine Elvezio, Steven Feiner, Barbara Tversky
ACM SUI 2016
[ACM DL]
Many tasks require that a user rotate an object to match a specific orientation in an external coordinate system. This includes tasks in which one object must be oriented relative to a second prior to assembly and tasks in which objects must be held in specific ways to inspect them. Research has investigated guidance mechanisms for some 6DOF tasks, using wide-field-of-view, stereoscopic virtual and augmented reality head-worn displays (HWDs). However, there has been relatively little work directed toward smaller field-of-view lightweight monoscopic HWDs, such as Google Glass, which may remain more comfortable and less intrusive than stereoscopic HWDs in the near future. We have designed and implemented a novel visualization approach and three additional visualizations representing different paradigms for guiding unconstrained manual 3DOF rotation, targeting these monoscopic HWDs. We describe our exploration of these paradigms and present the results of a user study evaluating the relative performance of the visualizations and showing the advantages of our new approach.

A framework to facilitate reusable, modular widget design for real-time interactive systems

Carmine Elvezio, Mengu Sukan, Steven Feiner
IEEE SEARIS 2016
[IEEE Xplore]
Game engines have become popular development platforms for real-time interactive systems. Contemporary game engines, such as Unity and Unreal, feature component-based architectures, in which an object's appearance and behavior is determined by a collection of component scripts added to that object. This design pattern allows common functionality to be contained within component scripts and shared among different types of objects. In this paper, we describe a flexible framework that enables programmers to design modular, reusable widgets for real-time interactive systems using a collection of component scripts. We provide a reference implementation written in C# for the Unity game engine. Making an object, or a group of objects, part of our managed widget framework can be accomplished with just a few drag-and-drop operations in the Unity Editor. While our framework provides hooks and default implementations for common widget behavior (e.g., initialization, refresh, and toggling visibility), programmers can also define custom behavior for a particular widget or combine simple widgets into a hierarchy and build arbitrarily rich ones. Finally, we provide an overview of an accompanying library of scripts that support functionality for testing and networking.
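The widget pattern the paper describes — default lifecycle hooks that simple widgets override and composite widgets fan out to their children — can be sketched outside Unity in a few lines. The class and hook names below are illustrative, not the framework's actual interfaces.

```python
# Hypothetical sketch of a managed widget hierarchy with default lifecycle
# hooks (names are illustrative, not the framework's actual API): composites
# forward each hook to their children, and concrete widgets override hooks
# to add custom behavior.
class Widget:
    def __init__(self):
        self.visible = True
        self.children = []

    def initialize(self):
        for child in self.children:
            child.initialize()

    def refresh(self):                 # per-frame update hook
        for child in self.children:
            child.refresh()

    def set_visible(self, visible):    # visibility toggles cascade down
        self.visible = visible
        for child in self.children:
            child.set_visible(visible)

class ProgressArrow(Widget):
    """A leaf widget overriding refresh() with custom per-frame behavior."""
    def __init__(self):
        super().__init__()
        self.angle = 0.0

    def refresh(self):
        super().refresh()
        self.angle += 1.0              # e.g., animate the arrow each frame
```

Combining simple widgets into arbitrarily rich ones then amounts to nesting them under a common parent, with the default hooks handling propagation.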


2015

Virtual replicas for remote assistance in virtual and augmented reality

Ohan Oda, Carmine Elvezio, Mengu Sukan, Steven Feiner, Barbara Tversky
ACM UIST 2015
[ACM DL]
In many complex tasks, a remote subject-matter expert may need to assist a local user to guide actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.

[POSTER] Interactive Visualizations for Monoscopic Eyewear to Assist in Manually Orienting Objects in 3D

Carmine Elvezio, Mengu Sukan, Steven Feiner, Barbara Tversky
IEEE ISMAR 2015
[IEEE Xplore]
Assembly or repair tasks often require objects to be held in specific orientations to view or fit together. Research has addressed the use of AR to assist in these tasks, delivered as registered overlaid graphics on stereoscopic head-worn displays. In contrast, we are interested in using monoscopic head-worn displays, such as Google Glass. To accommodate their small monoscopic field of view, off center from the user's line of sight, we are exploring alternatives to registered overlays. We describe four interactive rotation guidance visualizations for tracked objects intended for these displays.


2014

ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations

Mengu Sukan, Carmine Elvezio, Ohan Oda, Steven Feiner, Barbara Tversky
ACM UIST 2014
[ACM DL]
Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.
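The containment test at the heart of the construct can be sketched with simplified geometry: here both the look-from and look-at volumes are spheres (the paper's construct is more general), and a pose satisfies the ParaFrustum when the eye lies inside the look-from volume and the gaze ray deviates from the target direction by less than the angle the look-at sphere subtends. Names and values are illustrative.

```python
# Simplified sketch of a ParaFrustum satisfaction test using spherical
# look-from and look-at volumes (the published construct is more general).
from math import sqrt, acos, asin

def _norm(v):
    return sqrt(sum(x * x for x in v))

def satisfies_parafrustum(eye, gaze, look_from, look_at):
    """eye and gaze vs. (center, radius) spheres for look-from and look-at."""
    (lf_center, lf_radius), (la_center, la_radius) = look_from, look_at
    # 1) Eye position must fall inside the look-from volume.
    if _norm([e - c for e, c in zip(eye, lf_center)]) > lf_radius:
        return False
    # 2) Gaze must pass within the look-at volume: compare the angle between
    #    gaze and the eye-to-target direction against the angle subtended by
    #    the look-at sphere at this distance.
    to_target = [c - e for c, e in zip(la_center, eye)]
    dist = _norm(to_target)
    cos_a = sum(t * g for t, g in zip(to_target, gaze)) / (dist * _norm(gaze))
    angle = acos(max(-1.0, min(1.0, cos_a)))
    return angle <= asin(min(1.0, la_radius / dist))
```

Loosening either radius widens the set of acceptable poses, which matches the study's finding that tightly constrained ParaFrusta take significantly longer to satisfy than loose ones.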


2013

A user interface for assistive grasping

Jonathan Weisz, Carmine Elvezio, Peter K Allen
IEEE IROS 2013
[IEEE Xplore]
There has been considerable interest in producing grasping platforms using non-invasive, low bandwidth brain computer interfaces (BCIs). Most of this work focuses on low level control of simple hands. Using complex hands improves the versatility of a grasping platform at the cost of increasing its complexity. In order to control more complex hands with these low bandwidth signals, we need to use higher level abstractions. Here, we present a user interface which allows the user to combine the speed and convenience of offline preplanned grasps with the versatility of an online planner. This system incorporates a database of pre-planned grasps with the ability to refine these grasps using an online planner designed for arbitrarily complex hands. Only four commands are necessary to control the entire grasping pipeline, allowing us to use a low cost, noninvasive commercial BCI device to produce robust grasps that reflect user intent. We demonstrate the efficacy of this system with results from five subjects and present results using this system to grasp unknown objects.