
Auditory Display as Feedback for a Novel Eye-Tracking System for Sterile Operating Room Interaction

David Black · Michael Unger · Nele Fischer · Ron Kikinis · Horst Hahn · Thomas Neumuth · Bernhard Glaser

International Journal of Computer Assisted Radiology and Surgery, accepted October 13, 2017

The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response.

An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for 6 typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures.

When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibited reduced reaction times compared to using a visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings.

Conclusion: Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.

Auditory Display for Supporting Image-Guided Medical Instrument Navigation in Tunnel-like Scenarios

David Black, Tim Ziemer, Christian Rieder, Horst Hahn, Ron Kikinis.

Introduction: Navigation information for clinical applications using tracked instruments is typically shown on a screen in the operating room. Instruments, e.g., a dissector, needle, or burr, are viewed in relation to preoperative planning data. Although visual methods provide useful information, clinicians must divert their gaze from the patient to view monitors that are often placed in inconvenient locations. Transmitting navigation cues using auditory display instead of a screen can benefit the clinician in numerous ways, foremost by allowing visual attention to remain on the patient while receiving useful information about the placement of a tracked tool.

Methods: This work presents two auditory display methods to supplement visual methods for placement of medical instruments in cognitively demanding tunnel-like navigation tasks, such as needle placement, image-guided laparoscopy, or transnasal robotics, where an instrument must be navigated to remain on the origin of a plane orthogonal to the line to a planned target. Two novel auditory displays for instrument guidance are described: first, a note-based synthesizer that employs glissando direction (pitch bending) and stereo mix, and second, a virtual choir of sung syllables that guides the clinician towards a planned path.

Results: Results of a first evaluation using a think-aloud usability study show that both methods can provide complete screen-free guidance towards a target inside a virtual 3D tunnel model of a transnasal passage. The work describes the benefits and drawbacks of each method, providing insight for future applications of auditory display for medical navigation.

Conclusion: The methods allow blind guidance but are intended for future use in hybrid audiovisual solutions to provide an optimal combination of in-depth visualization and quick, efficient auditory cues when the clinician needs them most, thus increasing usability of navigation aids.

Auditory Display for Fluorescence-guided Brain Tumor Surgery

David Black, Horst Hahn, Ron Kikinis, Karin Wårdell, Neda Haj-Hosseini. International Journal of Computer Assisted Radiology and Surgery (accepted September 2017)

Abstract:

Protoporphyrin IX (PpIX) fluorescence allows discrimination of tumor and normal brain tissue during neurosurgery. A hand-held fluorescence (HHF) probe can be used for spectroscopic measurement of 5-ALA-induced PpIX to enable objective detection compared to visual evaluation of fluorescence. However, current technology requires that the surgeon either views the measured values on a screen or employs an assistant to verbally relay the values. An auditory feedback system was developed and evaluated for communicating measured fluorescence intensity values directly to the surgeon.

The auditory display was programmed to map the values measured by the HHF probe to the playback of tones that represented three fluorescence intensity ranges and one error signal. Ten participants with no previous knowledge of the application took part in a laboratory evaluation. After a brief training period, participants performed measurements on a tray of 96 wells of liquid fluorescence phantom and verbally stated the perceived measurement value for each well. The latency and accuracy of the participants’ verbal responses were recorded, and long-term memorization of the sound functions was evaluated after 7–12 days.
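As a rough illustration of this categorized mapping, the sketch below (in Python) classifies a measured intensity into one of three ranges or the error case and selects a tone for playback. The thresholds, tone frequencies, and validity flag are illustrative assumptions; the text above only states that three intensity ranges and one error signal were distinguished.

# Minimal sketch, assuming hypothetical intensity thresholds and tone frequencies.
def classify_reading(intensity, valid):
    """Map a measured PpIX fluorescence intensity to a sound category."""
    if not valid:
        return "error"            # e.g., no usable signal from the probe
    if intensity < 10.0:          # assumed threshold for "low" fluorescence
        return "low"
    if intensity < 50.0:          # assumed threshold for "medium" fluorescence
        return "medium"
    return "high"

# Assumed tone frequencies (Hz); the chosen tone is played back as repeated pulses.
TONES = {"low": 262, "medium": 523, "high": 1047, "error": 110}

print(TONES[classify_reading(37.2, valid=True)])   # -> 523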

The participants identified the played tone accurately for 98% of measurements after training. The median response time to verbally identify the played tones was 2 pulses. No correlation was found between the latency and accuracy of the responses, and no significant correlation was observed between the participants’ musical proficiency and their responses to the sound functions.

The employed auditory display was shown to be intuitive, easy to learn and remember, fast to recognize, and accurate in providing users with measurements of fluorescence intensity or error signal. The results of this work establish a basis for implementing and further evaluating auditory displays in clinical scenarios involving fluorescence guidance and other areas for which categorized auditory display could be useful.

Mixed Reality Navigation for Laparoscopic Surgery

Brian Xavier, Franklin King, Ahmed Hosny, David Black, Steve Pieper, Jagadeesan Jayender

The role of mixed reality, which combines augmented and virtual reality, in the healthcare industry, and specifically in modern surgical interventions, has yet to be established. In laparoscopic surgeries, precision navigation with real-time feedback of distances from sensitive structures such as the pulmonary vessels is critical to preventing complications. Combining video-assistance with newer navigational technologies to improve outcomes in a simple, cost-effective approach is a constant challenge.

This study aimed to design and validate a novel mixed reality intra-operative surgical navigation environment using a standard model of laparoscopic surgery. We modified an Oculus Rift with two front-facing cameras to receive images and data from 3D Slicer and conducted trials with a standardized Ethicon TASKit surgical skills trainer.

Participants were enrolled and stratified based on surgical experience, including residents, fellows, and attending surgeons. Using the TASKit box trainer, participants were asked to transfer pegs, identify radiolabeled pegs, and precisely navigate through wire structures. Tasks were repeated and incrementally aided with modalities such as 3D volumetric navigation, audio feedback, and mixed reality. A final task randomized and compared the current standard of laparoscopy with CT guidance against the proposed standard of mixed reality incorporating all additional modalities. Metrics such as success rate, task time, error rate, and user kinematics were recorded to assess learning and efficiency.

Conclusions: A mixed reality surgical environment incorporating real-time video-assistance, navigational, and radiologic data with audio feedback has been created to better enable laparoscopic surgical navigation, with early validations demonstrating potential use cases.

Auditory Display for Ultrasound Scan Completion

Clinicians manually acquire sequences of 2D ultrasound images to evaluate the local situs in real time. 3D volumes reconstructed from these sequences give clinicians a spatial overview of the area. Although 3D renderings are beneficial, drawbacks prohibit efficient interaction during acquisition. Current 2D image acquisition methods provide only one audible beep after each 2D scan added to the 3D volume, leaving the clinician without feedback about scan quality. This produces highly inhomogeneous intensities of the anatomical structure with imaging artifacts, resulting in overexposed images and reduced image quality. Low-quality volumes must be reacquired, causing clinician frustration and wasted operation time. Auditory display maps information to parameters of sound synthesizers so that a user can “hear” the underlying data. This approach has been investigated to guide instruments or to warn when clinicians approach risk structures, helping clinicians focus on the situs while still receiving information.

We harness auditory display for acquiring complete, high-quality scans. Our auditory display employs a granular synthesizer with 9 simultaneous sawtooth oscillators. An array with 100 cells represents an ultrasound volume, where each cell represents one scan, with values ranging from 0 to 100 indicating the completeness of each individual scan. The synthesizer maps the completeness of the current and 8 neighboring cells to the pitch, pitch variation, noisiness, low-pass filter rolloff frequency, and stereo width of 9 grains. The synthesizer mimics a vacuum cleaner sucking up dust: incomplete areas are heard as scattered, noisier, and higher pitched, whereas complete areas sound stable, less noisy, and lower pitched. Pilot studies show the auditory display allows high-quality, efficient individual and overall scan completion entirely without a monitor. Thus, using auditory display to augment US acquisition could ensure higher-quality scans and improve reconstruction while reducing the use of monitors during the procedure and helping clinicians keep their view on the situs.
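A minimal sketch of this parameter mapping, in Python, is given below. It assumes the 100 cells form a 10 x 10 grid and that each grain corresponds to one cell in the 3 x 3 neighborhood around the probe position; the concrete parameter ranges are assumptions, since the text only specifies the mapping directions (incomplete areas sound scattered, noisier, and higher pitched).

import random

def grain_params(completeness):
    """Map a cell's completeness (0-100) to synthesis parameters for one sawtooth grain."""
    c = max(0.0, min(100.0, completeness)) / 100.0
    return {
        "pitch_hz": 880.0 - 440.0 * c,             # incomplete -> higher pitch
        "pitch_jitter": 0.2 * (1.0 - c),           # incomplete -> more pitch variation
        "noisiness": 1.0 - c,                      # incomplete -> noisier grain
        "lowpass_hz": 500.0 + 7500.0 * (1.0 - c),  # incomplete -> brighter, harsher sound
        "stereo_width": 1.0 - c,                   # incomplete -> more scattered in stereo
    }

def neighborhood_grains(volume, row, col):
    """One grain per cell in the 3 x 3 neighborhood around the probe position."""
    grains = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < len(volume) and 0 <= c < len(volume[0]):
                grains.append(grain_params(volume[r][c]))
    return grains

volume = [[random.randint(0, 100) for _ in range(10)] for _ in range(10)]
print(len(neighborhood_grains(volume, 5, 5)))   # -> 9 grains for the synthesizer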

A Survey of Auditory Display in Image-Guided Interventions

David Black, Christian Hansen, Arya Nabavi, Ron Kikinis, Horst Hahn. In International Journal of Computer Assisted Radiology and Surgery (accepted February 2017)

This article investigates the current state of the art of the use of auditory display in image-guided medical interventions. Auditory display is a means of conveying information using sound, and we review the use of this approach to support navigated interventions. We discuss the benefits and drawbacks of published systems and outline directions for future investigation.

We undertook a review of scientific articles on the topic of auditory rendering in image-guided intervention. This includes methods for avoidance of risk structures and instrument placement and manipulation. The review did not include auditory display for status monitoring, for instance in anesthesia.

We identified 14 publications in the course of the search. Most of the literature (62%) investigates the use of auditory display to convey the distance of a tracked instrument to an object using proximity or safety margins. The remainder discusses continuous guidance for navigated instrument placement. Four of the articles present clinical evaluations, nine present laboratory evaluations, and three present informal evaluations (three present both laboratory and clinical evaluations).

In summary, auditory display is a growing field that has been largely neglected in research in image-guided intervention. Despite benefits of auditory displays reported in both the reviewed literature and non-medical fields, adoption in medicine has been slow. Future challenges include increasing interdisciplinary cooperation with auditory display investigators to develop more meaningful auditory display designs and comprehensive evaluations which target the benefits and drawbacks of auditory display in image guidance.

Comparison of Auditory Display Methods for Elevation Change in Three-Dimensional Tracked Surgical Tool

David Black, Rocío Lopez-Velasco, Horst Hahn, Javier Pascau, Ron Kikinis. Computer Assisted Radiology and Surgery, June 2017

In image-guided interventions, screens display information to help the clinician complete a task, such as placing an instrument or avoiding certain structures. Often, clinicians wish to access this information without having to switch views between the operating situs and the navigation screen. To reduce view switches and help clinicians concentrate on the situs, so-called auditory display has been gaining attention as a means of delivering information to clinicians in image-guided interventions. Auditory display has been implemented in image-guided interventions to relay position information from navigation systems to locate target paths in liver resection marking, resect volumes with neuronavigation, and avoid risk structures in cochleostomy. Previous attempts provide primarily simple, non-directional warning signals and still require the use of a navigation screen. Clinical participants in previous attempts requested auditory display that provides directional cues. Our described method allows screen-free navigation of a tracked instrument with auditory display. However, because mapping changes in y-axis instrument movement onto beneficial auditory display parameters has proven difficult in previous work, this paper compares two methods of mapping elevation changes onto auditory position parameters: one using pitch comparison between alternating tones, and another using slightly falling and rising frequencies (glissando) for each tone. In this work, we present a pilot study that uses time-to-target as a performance factor to compare these two methods.

The two auditory display methods are used to relay the position of a tracked instrument using sound. The methods described here relay elevation (changes in the y-axis), azimuth (changes in the x-axis), and distance along the perpendicular trajectory path (z-axis) from the tracked instrument towards a target. Because the methods are suited for applications involving 2D placement plus a 1D depth component, these generalized auditory displays can support a variety of clinical applications using tracked instruments, including resection path marking, ablation and biopsy needle placement, bone drilling, and endoscopic instrument placement.

Two methods were developed for comparison to relay the position of a tracked instrument using auditory display. Both employ the same mapping for changes in azimuth (x-axis) and depth (z-axis). For changes in elevation (y-axis), the first method employs pitch comparison. A tone with moving pitch between 261 Hz and 1046 Hz is alternated with a reference tone with a static pitch of 523 Hz. This alternation allows the user to compare both pitches, bringing one towards the other, similar to tuning a guitar or violin string. When the pitch of the moving tone reaches that of the reference tone, the correct elevation has been reached. For the glissando (lit. “sliding”) method, only one moving tone is used. When the elevation is positive, the pitch of the tone “slides” down slightly, signaling that the instrument should be lowered. When the elevation is negative, the tone “slides” up slightly, signaling that the instrument should be raised. A similar range is used, and the pitch of the tones “slides” ±3 semitones (ca. ±19%).
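The sketch below (Python) illustrates the two elevation mappings. The 261–1046 Hz range, the 523 Hz reference tone, and the 3 semitone glide follow the description above; the assumption that the elevation error is normalized to [-1, 1] (too low ... too high), and the function names, are illustrative.

def pitch_comparison(elevation_error):
    """Alternating-tone method: return (moving_tone_hz, reference_tone_hz)."""
    # Map [-1, 1] onto two octaves around the 523 Hz reference (261-1046 Hz).
    moving = 523.0 * 2.0 ** elevation_error
    return moving, 523.0

def glissando(elevation_error, base_hz=523.0):
    """Single-tone method: return (start_hz, end_hz) of each tone's pitch glide."""
    # Positive error (instrument too high): glide down up to 3 semitones;
    # negative error: glide up, signaling the direction of correction.
    semitones = -3.0 * elevation_error
    return base_hz, base_hz * 2.0 ** (semitones / 12.0)

print(pitch_comparison(0.0))   # -> (523.0, 523.0): correct elevation reached
print(glissando(1.0))          # -> tone glides down 3 semitones from 523 Hz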

For both methods, changes in azimuth are mapped to stereo panning. The tones described above are played in stereo to indicate whether the target is to the left or right of the current position. A “sound object” metaphor is employed: for example, when the instrument is to the right of the target, tones are heard in the left ear, indicating that the target is to the left of the listener. Changes in perpendicular distance to the target (z-axis) are mapped linearly to the inter-onset interval (duty cycle) of the tones, similar to an electronic car parking aid. At maximum distance, tones are played 900 ms apart; at the target, this is reduced to 200 ms.
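A minimal sketch of these shared mappings is shown below, assuming the azimuth error is normalized to [-1, 1] and the distance to target to [0, 1]; the normalizations and function names are illustrative assumptions, while the 900 ms and 200 ms inter-onset intervals follow the text.

def stereo_pan(azimuth_error):
    """Sound-object metaphor: instrument right of target -> tone heard in the left ear.
    Returns a pan position in [-1 (left), +1 (right)]."""
    return max(-1.0, min(1.0, -azimuth_error))

def inter_onset_interval_ms(distance):
    """Linear map of distance to target onto pulse spacing (car-parking-aid style):
    900 ms between tones at maximum distance, 200 ms at the target."""
    d = max(0.0, min(1.0, distance))
    return 200.0 + 700.0 * d

print(inter_onset_interval_ms(1.0), inter_onset_interval_ms(0.0))   # -> 900.0 200.0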

A pilot study was performed with 10 non-expert participants to gauge the usability of each of the methods for elevation mapping. After a short training period with eyes open using a screen to become familiarized with the system, each participant completed two placements of the tracked instrument with eyes closed, i.e., blind placement without screen, for each of the two methods.

For the pitch comparison method, time-to-target averaged 57.1 seconds across all participants. For the glissando method, the time-to-target averaged 23.6 seconds. On a subjective difficulty scale of “low,” “medium,” “high,” and “very high,” half of the participants rated the difficulty of the pitch-comparison method as “low” and half as either “medium,” “high,” or “very high” difficulty. In contrast, all participants rated the glissando method as “low” difficulty. Participants commented on the high mental demand of hearing the reference tone and the task of comparing the two alternating tones.

Although the use of auditory display for image-guided navigation tasks has increased in recent years, previous attempts have primarily provided only basic warning signals, prompting clinicians to request directional cues in auditory display. Whereas changes in azimuth can be mapped intuitively to stereo panning and distance to target can be mapped to inter-onset interval thanks to participants’ familiarity with car parking aids, changes in elevation have proved troublesome to design. This pilot study compares two methods for mapping elevation: alternating pitch between two tones (“instrument tuning” metaphor) and sliding pitches for single tones (“glissando”). Results of the study show that the glissando method is promising, leading participants to reach the target faster and yielding lower subjective difficulty ratings. Further studies should incorporate additional, refined auditory display methods and evaluate their use in real clinical scenarios.

 

Instrument-Mounted Displays for Reducing Cognitive Load During Surgical Navigation

Surgical navigation systems rely on a monitor placed in the operating room to relay information. Optimal monitor placement can be challenging in crowded rooms, and it is often not possible to place the monitor directly beside the situs. The operator must split attention between the navigation system and the situs. We present an approach for needle-based interventions to provide navigational feedback directly on the instrument and close to the situs by mounting a small display onto the needle.

By mounting a small and lightweight smartwatch display directly onto the instrument, we are able to provide navigational guidance close to the situs and directly in the operator’s field of view, thereby reducing the need to switch the focus of view between the situs and the navigation system. We devise a specific variant of the established cross-hair metaphor suitable for the very limited screen space. We conduct an empirical user study comparing our approach to using a monitor and to a combination of both.

Results from the empirical user study show significant benefits for cognitive load, user preference, and general usability for the instrument-mounted display, while achieving the same level of performance in terms of time and accuracy compared to using a monitor.

We successfully demonstrate the feasibility of our approach and its potential benefits. With ongoing technological advancements, instrument-mounted displays might complement standard monitor setups for surgical navigation in order to lower cognitive demands and improve the usability of such systems.

Auditory Feedback to Support Image-Guided Medical Needle Placement

During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician’s visual reliance on the screen, this work proposes an auditory feedback method as a stand-alone method or as support for visual feedback for placing the navigated medical instrument, in this case a needle.

An auditory synthesis model using pitch comparison and stereo panning parameter mapping was developed to augment or replace visual feedback for navigated needle placement. In contrast to existing approaches which augment but still require a visual display, this method allows view-free needle placement.

Audiovisual feedback shows promising results and establishes a basis for applying auditory feedback as a supplement to visual information in other navigated interventions, especially those for which viewing the patient is beneficial or necessary.

 

A Strategy to Improve Information Display in Navigated Surgery

Image-guided surgery is daily business in many hospitals nowadays. Pre-operative data and navigation systems provide different views of the task at hand, but as they are displayed on monitors, surgeons need to integrate several spatial frames of reference in order to map the displayed data to the patient, which is a demanding task. Qualitative spatio-temporal representation and reasoning (QSTR) is a subfield of Artificial Intelligence that explicitly deals with formal abstract models of spatial knowledge. Based on our expertise in QSTR, we argue for the integration of QSTR approaches to reduce the cognitive load on surgeons regarding visual information display.