
The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward

Written By

Jenna L. Gorlewicz, Jennifer L. Tennison, Hari P. Palani and Nicholas A. Giudice

Submitted: 25 October 2018 Reviewed: 29 October 2018 Published: 11 December 2018

DOI: 10.5772/intechopen.82289

From the Edited Volume

Interactive Multimedia - Multimedia Production and Digital Storytelling

Edited by Dragan Cvetković


Abstract

Graphical access is one of the most pressing challenges for individuals who are blind or visually impaired. This chapter discusses some of the factors underlying the graphical access challenge, reviews prior approaches to addressing this long-standing information access barrier, and describes some promising new solutions. We focus specifically on touchscreen-based smart devices, a relatively new class of information access technologies that our group believes represents an exemplary model of user-centered, needs-based design. We highlight both the challenges and the vast potential of these technologies for alleviating the graphics accessibility gap and share the latest results from this line of research. We close with recommendations for shifts in the mindset with which we approach this vexing access problem, complementing the technological and perceptual advances rapidly emerging from a growing research community in this domain.

Keywords

  • haptics
  • touchscreen-based accessibility
  • vibrotactile displays
  • multimodal interfaces
  • information-access technologies

1. Introduction

Lack of access to graphical information represents one of the most pervasive information access challenges faced by people who are blind and visually impaired (BVI). Graphical content is ubiquitous in today's digital world, regardless of setting. Consider graphs in a work report, diagrams in a classroom, figures in a magazine article, images on the internet, photographs of friends on social networking sites, or maps for determining your location and finding routes through an unfamiliar building or city. In all of these scenarios, the information is conveyed almost exclusively in visual, graphical formats, often excluding low- and no-vision individuals from the content. Text-based alternatives do exist (figures have captions, web-based images have labels, social media photos can be tagged), but their inclusion is the exception rather than the rule, and even when present, the descriptions are generally short and imprecise: they do not tell the whole story because they are not designed to do so, and they fail to capture much of the information conveyed by the graphical rendering. One need only read a few alt tags of graphics on the web to see how poorly such text descriptions convey what is actually depicted. Diversifying design to meet a range of accessibility needs in the digital space makes the information more valuable to users who must access it in a different way [1]. With more content moving to the electronic space, it is paramount that new solutions for graphical information access be explored in the digital domain.
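To make the alt-text problem concrete, consider its mobile analog: the content description that a screen reader speaks when a BVI user touches an image. The Kotlin (Android) sketch below is purely illustrative; the chart, function names, and wording are our own assumptions, not drawn from any cited system.

```kotlin
import android.widget.ImageView

// Typical practice: a terse label. A screen reader announces only
// "Chart," conveying none of the underlying data.
fun labelTersely(chartView: ImageView) {
    chartView.contentDescription = "Chart"
}

// A fuller description recovers some information, but still flattens
// a two-dimensional trend into a linear string of words.
fun labelDescriptively(chartView: ImageView) {
    chartView.contentDescription =
        "Line chart of monthly revenue for 2018. Revenue rises steadily " +
        "from 1.2 to 3.4 million dollars, with a sharp dip in July."
}
```

Even the better label illustrates the point: a verbal summary is an interpretation of the graphic, not access to the graphic itself.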

The aim of this chapter is to discuss some of the factors underlying the graphics access problem faced by people who are BVI and to describe the latest class of technologies and techniques that we believe have the most potential to mitigate it. We first characterize the persistent challenges that have perpetuated this long-standing information access issue. We then describe some general approaches developed over the years to address it, focusing specifically on touchscreen-based smart devices (e.g., phones and tablets), which our group believes offer a promising path forward. We then discuss the advantages and disadvantages of these devices and share several ideological positions that we believe must be advanced if we are to truly address the graphical access challenge in the context of new technology development. This chapter sets forth a clear position on the efficacy of this class of information access technology (IAT) and advocates paradigm shifts in the way we think about addressing this vexing access problem. It is also meant to serve as a reference for researchers and developers interested in promoting graphical accessibility via new technologies such as touchscreens.


2. Graphical access for people with visual impairments

2.1. The persistent challenge of graphical access

We start by highlighting an important distinction in nonvisual information access between textual and nontextual sources. Access to printed, text-based material has largely been solved for BVI individuals, owing to more than 30 years of advances in screen-reading software built on text-to-speech engines (e.g., JAWS for Windows [2] or VoiceOver for Mac and iOS-based devices [3]). Indeed, long before these digital speech-based solutions, the braille code provided a robust system for conveying alphanumeric information, as well as other literary, mathematical, and musical symbols, embossed on hardcopy paper (for a review of the history of braille, see [4]). Since the 1970s, dynamic, refreshable braille displays have provided real-time access to text in braille, often in conjunction with synthetic speech via the aforementioned screen-reader software packages. These hardware and software solutions differ widely in form factor, connectivity, features, and supported languages, but they share a common shortcoming: they provide access only to textual information.

The crux of the problem is that graphical information is almost exclusively rendered visually. In contrast to accessing text-based material, there is no analogous low-cost, intuitive, commercially available solution for providing individuals who are BVI with dynamic access to visually rendered graphical content. Compounding the problem, compared with the wealth of knowledge about human visual information processing, there is far less basic research on the sensory, perceptual, and cognitive factors critical for accurately encoding, interpreting, and representing graphical information rendered through nonvisual channels such as audition or touch. While earlier studies have evaluated many human information processing characteristics for tangible graphics (i.e., pressure-based physical stimuli) [5, 6, 7, 8, 9], these results cannot ensure saliency when adopted for rendering digital graphical elements on touchscreen interfaces (see [10, 11] for discussion). The reason stems from the nature of the stimuli and their mechanism of delivery: vibrations from a flat touchscreen provide none of the direct cutaneous cues afforded by traditional raised graphics, and they trigger different sensory receptors than those used when encoding traditional "raised" tactile graphics or models.

Lack of access to graphical material is more than a mere frustration or hindrance. Indeed, we argue that it represents one of the biggest challenges to the independence and productivity of individuals who are BVI and that it has had significant detrimental effects on the educational, vocational, and social prospects of this demographic. In support, consider these troubling statistics: up to 30% of blind people are estimated not to travel independently outside of their home [12], only about 11% of persons who are BVI have a bachelor's degree [13], and over 70% of this demographic is unemployed or under-employed [14, 15]. Nor is this an isolated problem: over 12 million people in the U.S. and 285 million people worldwide are estimated to have some form of significant, uncorrected visual impairment [16]. Unfortunately, the problem is rapidly growing, and the current information gap will likely widen without a tractable solution because: (1) the incidence of visual impairment is projected to double by 2030 owing to the aging of our population [17], (2) graphics are increasingly the preferred medium of information exchange, and (3) print-based content is rapidly moving to the digital space. The growing reliance on graphical content is especially evident in educational contexts, where scientific textbooks and journals have been estimated to contain 1.3 graphical representations per page [18]. The inability of students who are BVI to access this rich graphical content certainly helps explain the particularly low inclusion and success of this demographic in STEM disciplines [19, 20]. Beyond education, the inaccessibility of many sources of information used in daily life also inevitably contributes to the greater social isolation and depression experienced by individuals who are BVI [21]. Without question, a significant component of improving these statistics (and, more importantly, of benefitting the lives of BVI individuals at large) involves closing the long-standing information gap caused by lack of access to graphical materials in these domains.

2.2. Current solutions for graphical access

Traditional approaches to creating accessible, tangible graphics include the use of: (1) a tactile embosser to produce hardcopy raised graphics (e.g., the Tiger embosser [22]); (2) renderings made on heat-sensitive swell paper (e.g., [23]); (3) physical manipulatives that are pinned or velcroed to a board [24]; or, more recently, (4) 3D-printed models or manipulatives [25]. Figure 1 provides examples of these materials.

Figure 1.

Examples of traditional methods used to convey graphics (e.g., swell paper, embosser, Wikki Stix).

While these techniques certainly work, they also have several significant shortcomings that limit their efficacy as a robust and broadly applicable solution. The principal drawbacks are: (1) the authoring process is often slow and cumbersome and typically requires an individual skilled in creating tactile graphics; (2) the equipment can be prohibitively expensive (e.g., a Tiger embosser can cost between $5,000 and $15,000, see [22]); (3) the technology is based on single-purpose hardware, often requiring individuals to use an "army of devices" in their daily life; (4) the output is a static representation that can quickly become obsolete and is neither easy nor quick to update; and (5) the output is largely restricted to a single modality (i.e., touch). A lengthier discussion of these limitations and the challenges they pose can be found in Ducasse, Brock, and Jouffrais' review of maps for individuals who are BVI [26].

Some of these barriers have been addressed through technology development, with the biggest benefit coming from dynamic touch-based interfaces. For instance, a host of refreshable tactual technologies have been developed based on force feedback, refreshable pin arrays, microfluidics, and moldable alloys; the thorough review by O'Modhrain and colleagues details the pros and cons of each approach [27]. While such developments are pushing the boundaries of new haptic technologies as a means of access, these solutions are neither widely available nor broadly adopted. This is likely due to several factors, including the high cost and lack of commercial availability of most haptic systems, the in-depth manufacturing and fabrication processes some of the technologies require, and the need for additional hardware that only adds to the host of access devices and technologies already used by BVI persons.

Low-cost, large-format, dot-based graphic displays have been promised for decades, and some examples are, or were, commercially available, such as the DotView from KGS Corporation [28] or the Graphic Window by Handytech [29, 30]. Other approaches have exploited auditory solutions, converting visually based information into an acoustic format that employs different sonification techniques and auditory parameters (e.g., pitch, loudness, timbre, or tempo) to convey the graphical content [31, 32, 33]. Additional efforts have explored language-based descriptions of graphical information [34, 35]. Auditory and verbal approaches, however, are not optimal, as they rely on an interpretive medium that requires cognitive mediation and sustained attention [36]. Such feedback can also be distracting in quiet environments such as classrooms, or in meetings where one must simultaneously listen to a presenter. In addition, we argue that these auditory/linguistic approaches are less suited than touch-based solutions to conveying spatial graphics, because they neither directly specify spatial relations nor provide the kinesthetic feedback that enables spatial organization of information.
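To make the sonification idea concrete, the core step of a simple auditory graph is a mapping from data values to an auditory parameter such as pitch. The Kotlin sketch below is a minimal illustration; the frequency range and the linear mapping are our own assumptions rather than an established standard.

```kotlin
// Map each data point of a graph to a tone frequency so that higher
// values sound higher in pitch (one common auditory-graph convention).
fun sonify(
    values: List<Double>,
    fMin: Double = 220.0,   // illustrative pitch floor (A3), in Hz
    fMax: Double = 880.0    // illustrative pitch ceiling (A5), in Hz
): List<Double> {
    require(values.isNotEmpty()) { "need at least one data point" }
    val lo = values.minOrNull()!!
    val hi = values.maxOrNull()!!
    return values.map { v ->
        // Linear interpolation from the data range to the pitch range.
        if (hi == lo) (fMin + fMax) / 2.0
        else fMin + (v - lo) / (hi - lo) * (fMax - fMin)
    }
}
```

A full display would synthesize and play these tones in sequence (for instance with Android's AudioTrack), with loudness, timbre, or tempo available as additional mapping dimensions.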

The above notable approaches have certainly pushed the possibilities of graphical access, yet simply providing dynamic nonvisual information is not sufficient for conveying and learning graphical materials. To truly close the information gap, it is necessary to consider design characteristics that will lead to user acceptance and adoption by the BVI community: solutions must be inexpensive, multi-purpose, multimodal, and readily available. Indeed, many of the solutions discussed above are relegated to highly specialized applications and require purpose-built equipment designed for specific users, to support specific tasks or needs, in a specific situation or environment. This specificity means that most haptic IATs, even if effective, are too expensive, too limited in their usage applications, too cumbersome, and too subject to obsolescence to be viable long-term information-access solutions for BVI users. A growing number of new technologies coming to market build upon previous work, such as Graphiti, the American Printing House (APH)'s dynamic touch-sensitive pin array [37]; the BLITAB tablet, capable of displaying a full page of braille [38]; shapeShift, a refreshable multi-height pin display that can render 3D objects and dynamic movement [39]; and microfluidic tablets capable of displaying refreshable raised dots (e.g., [40, 27]) (see Figure 2). Most of these devices, however, are still in the research phase, and many still suffer from high component costs or reliance on hardware-specific platforms, reducing the likelihood that they become a mainstream solution.

Figure 2.

New innovative solutions being developed for individuals with BVI: upper left—demonstration of shapeShift (multi-height pin array) [39]; upper right—Graphiti (refreshable pin array) [37]; lower left—BLITAB (refreshable pin tablet) [38]; and lower right—Holy Braille (microfluidic tablet) [40].

While the above innovative approaches have various benefits, we posit that a more broadly adoptable solution is technology that: (1) provides direct perceptual access to the graphical content, as visual access does; (2) is (or could be) mass marketed and readily available to end users; and (3) is based on a computational platform that can be leveraged for other functions and activities. We argue that this is best accomplished using dynamic touch-based (or multimodal) displays implemented on smart devices (phones and tablets). Interfaces leveraging direct touch access are critical to solving the graphical access problem because touch has much in common with visual spatial perception, sharing many parallels with the visual pathways in the brain (e.g., [41, 42]). For example, both modalities extract the basic features and spatiality of an object in the environment and integrate this information into a complete, coherent representation in memory, lending credence to parallel or shared channels in perception [43, 44]. Further, auditory and verbal approaches often involve more cognitive effort and are thus less "perceptual" than touch-based or visually-based information displays [45]. This is not to say that auditory and verbal approaches should be ignored. To the contrary, we believe in synergizing all available modalities, as is done in some capacity on current vibrotactile touchscreen platforms, and in leveraging the constituent inputs best suited to the information to be rendered and the task to be performed. While there are various types of haptic displays, each with their own strengths and weaknesses, the position advanced in this chapter is that vibrotactile stimulation, when paired with a touchscreen-equipped smart device (e.g., phone or tablet) and other output channels, is a highly promising approach to solving the nonvisual graphics access problem. We believe this platform is quickly becoming the de facto gold standard for IAT and offers a solution with a high likelihood of acceptance and adoption among end users, which should be the goal of any IAT design.

2.3. Why vibrotactile, touchscreen-based smart solutions?

We have all experienced our phone vibrating in our pocket to indicate an incoming call or to alert us of an upcoming meeting. However, beyond soliciting our attention, providing simple alerts, signaling a confirmation or error, or any number of other instances of secondary or tertiary cuing, people rarely consider vibrotactile feedback as a primary interaction style. On the one hand, this is surprising given the multitude of common interactions that involve vibration in one capacity or another: the slight detents you feel when spinning the scroll wheel on your computer mouse or the volume dial on your car radio, the signal from your electric toothbrush telling you to brush in another location, the rumble from your game controller indicating an undesired behavior, the buzzer announcing that your party is being summoned at a restaurant, the vibrating seat in your car warning that you are backing up near an obstruction, and a myriad of other haptic implementations that employ vibrotactile cues to nonvisually convey relevant information. On the other hand, even when informative, this feedback is usually either an unintended byproduct of an action (e.g., vibration from approaching an obstacle) or a secondary cue layered onto a primary interface (e.g., detents that simply provide frictional control over a spinning wheel or dial); such cues are rarely necessary for the interface's primary operation. In this chapter, we argue that this need not be the case: vibrotactile feedback is vastly underutilized in current interface design, and vibration can serve as a primary mode of user interaction, especially where visual access is not possible, such as for individuals who are BVI or in eyes-free applications (e.g., driving).
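To ground this position, note that commodity mobile APIs already expose the primitives needed to treat vibration as an information channel in its own right. The Kotlin sketch below (Android API level 26+) defines two distinguishable vibration patterns in the spirit of structured tactile messages; the specific timings and amplitudes are illustrative assumptions, not validated designs, and fine amplitude control depends on the device's hardware.

```kotlin
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator

// Two structured patterns whose timing and intensity profiles, not just
// their presence, are meant to carry meaning.
val risingPattern: VibrationEffect = VibrationEffect.createWaveform(
    longArrayOf(0, 40, 60, 40, 60, 40),   // per-segment durations in ms
    intArrayOf(0, 80, 0, 160, 0, 255),    // per-segment intensity (0 = off)
    -1                                    // play once; no repeat
)

val steadyPulse: VibrationEffect =
    VibrationEffect.createOneShot(120, VibrationEffect.DEFAULT_AMPLITUDE)

fun signal(context: Context, effect: VibrationEffect) {
    val vibrator = context.getSystemService(Vibrator::class.java)
    vibrator?.vibrate(effect)
}
```

A user who learns to associate, say, the rising pattern with one category of on-screen element and the steady pulse with another is then reading the vibration itself, not merely being alerted by it. We now summarize the current state of research on vibrotactile touchscreen displays before sharing four positions our group believes are needed to address the graphical access challenge moving forward.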

2.4. Research brief on vibrotactile touchscreen displays

A growing body of research has demonstrated the efficacy of touchscreen-based devices using vibrotactile, or vibrotactile plus auditory, feedback as a primary interaction style for conveying graphical information. Choi and Kuchenbecker provide an excellent review of vibrotactile displays from both perceptual and technological perspectives, summarizing foundational knowledge in this area and offering implementation guidelines for exemplary applications [46]. Brewster and colleagues have done extensive work on tactile feedback, particularly on mobile platforms, demonstrating how structured tactile messages (Tactons) can communicate information through different vibration features [47, 48, 49]. Other research has shown that vibrotactile feedback enables users to complete scrolling and input tasks faster on a mobile device than interfaces lacking such feedback [50, 51], and that it can improve textual reading in braille (e.g., [52, 53, 54, 55]).

More recent work has focused on using vibrotactile touchscreen platforms to convey graphics. One project showed that lines (linear and nonlinear) and basic shapes (e.g., circles, triangles, squares) can be successfully interpreted and followed nonvisually through haptic, audio, and combined haptic-audio access on the touchscreen [5]. Further evidence for the efficacy of this approach comes from studies of grids [56], graphs [57], maps [58], and nonvisual panning and zooming of large-format vibrotactile maps extending beyond the device's display [11, 59]. In aggregate, this research clearly illustrates the broad potential of the multimodal approach. Work with a prototype system called the vibro-audio interface (VAI), built on a commercial tablet, has shown near-identical accuracy between the VAI and hardcopy tactile stimuli for graph interpretation, pattern detection, and shape recognition [60]. In corroboration, studies by Gorlewicz and colleagues found no significant differences in the interpretation of a variety of graphics, including bar graphs, pie charts, tables, number lines, line graphs, and simple maps, whether presented in embossed form or displayed multimodally on a touchscreen via the ViTAL platform [61, 62]. These studies show not only the efficacy of this interface but also that this multimodal platform can match the gold standard of hardcopy graphics. More recent work by our group has explored the effect of screen size on such tasks (e.g., tablets versus smaller mobile platforms), showing that performance on a pattern-matching task is equivalent across small and large screens [63]. Even though vibration is a low-resolution output mode, these data show that vibrotactile graphics can be used effectively and accurately on the smaller form factor of phone-sized smart devices, a positive finding given that the majority of BVI users of smart devices are using mobile phones. Finally, a review by Grussenmeyer and colleagues provides a thorough survey of how touchscreen-based technologies have supported information access for people who are BVI and reiterates the challenges that remain for full inclusion of this population [64].
In short, these projects, supported by empirical evidence and positive qualitative feedback, suggest promising pathways forward for vibrotactile touchscreens as a means of conveying multimodal renderings of visual graphics. Moreover, these platforms offer significant advantages over one-off information access hardware, the primary benefits being portability, multi-functional use, relative affordability, and widespread adoption and support within the BVI demographic. Indeed, vibrotactile touchscreens provide a robust multimodal framework which, if developed in step with advances in touchscreen-based smart devices, has the potential to become the de facto, universal means of accessing graphics in multimodal, digital form (for example, see Figure 3). A universal, widely available multimodal platform benefits not only the BVI population but also the many others who profit from multimodal learning platforms and the brain's capacity to process both redundant and complementary information from different senses.

Figure 3.

Touchscreens can leverage both auditory and vibrotactile feedback to convey rich information without the need to look at the screen.
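The essential interaction loop behind such vibro-audio renderings is simple to state: draw the graphic on screen, hit-test the moving finger against the drawn elements, and vibrate while the finger is on an element. The Kotlin sketch below illustrates this loop for a single horizontal line; the line position, tolerance band, and pulse duration are illustrative assumptions (published systems tune such parameters empirically), and a real interface would layer speech output for labels on top.

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Paint
import android.os.VibrationEffect
import android.os.Vibrator
import android.view.MotionEvent
import android.view.View
import kotlin.math.abs

// Draws one horizontal line and vibrates whenever the user's finger is
// within a tolerance band around it: the basic vibro-audio pattern.
class VibroLineView(context: Context) : View(context) {
    private val paint = Paint().apply { strokeWidth = 8f }
    private val lineY = 400f        // line position in px (illustrative)
    private val tolerance = 24f     // touch band around the line, in px
    private val vibrator = context.getSystemService(Vibrator::class.java)

    override fun onDraw(canvas: Canvas) {
        canvas.drawLine(0f, lineY, width.toFloat(), lineY, paint)
    }

    override fun onTouchEvent(event: MotionEvent): Boolean {
        val onLine = abs(event.y - lineY) <= tolerance
        when (event.action) {
            MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE ->
                if (onLine) {
                    // Short pulses feel continuous while tracing the line.
                    vibrator?.vibrate(VibrationEffect.createOneShot(
                        50, VibrationEffect.DEFAULT_AMPLITUDE))
                } else {
                    vibrator?.cancel()   // silence off the line
                }
            MotionEvent.ACTION_UP -> vibrator?.cancel()
        }
        return true
    }
}
```

Broadly speaking, richer displays extend this same hit-testing idea from one line to paths, bars, and map features, with auditory and speech feedback layered onto the vibration channel.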

2.5. Positions and pathways forward

While there are promising pathways forward, the graphical access challenge for BVI individuals remains a vexing and largely unsolved problem. We argue that the solution requires advancements on several fronts: ideological, technological, and perceptual. While significant research has advanced our understanding of the technological and perceptual pieces (as illustrated in the vibrotactile touchscreen use case presented here), we also call on the community to consider new ideological perspectives that will advance the field as a whole. Specifically, we present four positions that our group views as necessary for moving closer to addressing the graphical access challenge and that we see as best addressed by vibrotactile touchscreen technology:

  1. A shift from thinking of assistive technologies as single-purpose, specialized hardware solutions to considering mainstream technologies (and simple adaptations to them) as the first-choice development platform.

  2. A shift from retrofitting existing technologies for accessibility to embedding universal design in technologies from the onset.

  3. A shift from using unimodal feedback as the primary mode of interaction to leveraging all available modalities for primary interactions.

  4. A shift from designing based on features and capabilities to a principled design approach driven by end-user needs and scoped by practical guidelines supporting efficient and effective usage and implementation.

We briefly elaborate on these positions below.

2.6. Ideological requirements

2.6.1. A shift from using single-purpose, specialized hardware solutions to considering mainstream, multi-use technologies

To truly advance this class of technology, we need a shift from thinking of assistive technologies as specialized, single-purpose hardware/software supporting a single (niche) user group to incorporating them into commercial platforms that support multiple functions and can be used by a broad range of people. Of course, specialized equipment is necessary in certain instances: if you want a hardcopy page of braille or an embossed physical tactile map, you will need a specialized braille/graphics embosser. In many instances, however, nonvisual access to information can be delivered using standard commercial devices, which has the advantage of vastly decreasing development costs and purchase price, thereby increasing actual adoption by BVI users. One example is text-to-speech engines, which provide access to visually rendered textual information on the screen via speech output. While an intervening software layer is needed to efficiently analyze the visual content of the display and represent it in an intuitive manner for auditory output, the requisite hardware (a sound card and speaker) is already available on almost all commercial devices. Adding speech input requires only a microphone, which every smart device also has, along with embedded speech-to-text software. In the spirit of this chapter, this idea extends to tactile feedback: most current touchscreen devices have vibration capabilities in some form, and using the standard vibration motor can open pathways to a whole new universe of haptic information that augments, complements, or completely replaces other modes of feedback.
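As a small sketch of this "commercial hardware first" mindset, the Kotlin class below produces a combined speech-plus-vibration announcement using only services that ship on mainstream Android phones: the built-in text-to-speech engine and the standard vibration motor. It is our own illustration of the idea, not a system from the literature, and it omits locale setup and error handling.

```kotlin
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator
import android.speech.tts.TextToSpeech

// Multimodal announcement built entirely from stock components:
// no dedicated assistive hardware is required.
class MultimodalAnnouncer(context: Context) {
    private val vibrator = context.getSystemService(Vibrator::class.java)
    private var ready = false
    private val tts = TextToSpeech(context) { status ->
        ready = (status == TextToSpeech.SUCCESS)
    }

    fun announce(text: String) {
        // Haptic channel: a brief pulse marks the start of the message.
        vibrator?.vibrate(VibrationEffect.createOneShot(
            80, VibrationEffect.DEFAULT_AMPLITUDE))
        // Auditory channel: the built-in engine speaks the text.
        if (ready) tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "announce")
    }

    fun shutdown() = tts.shutdown()
}
```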

As such, the traditional notion of developing highly specialized assistive technology for specific groups of users (e.g., BVI users) as a process completely separate from mainstream technology development needs to be reconsidered. This shift is more about mindset than technology. Designers of assistive technology should start with the goal of using commercial hardware and existing software platforms whenever possible, first considering how to creatively use the system's built-in components and the interface's existing feature set before resorting to specialized one-off hardware or software development. Using existing hardware, computational platforms, sensors, and other components where possible, and implementing the access layer in software to the greatest extent possible, improves the overall commercial product while also reducing the cost of developing accessible technologies at large.

2.6.2. A shift from retrofitting existing technologies to embedding universal design from the onset

We posit that mass-market companies (and researchers) developing mainstream products should embrace the notion of universal and inclusive design in their R&D process, as this not only results in products that benefit the greatest number of users (thereby increasing the pool of potential customers) but also yields many unintended positive results that better support core users. Consider Apple, which introduced a completely inaccessible product, the iPhone, in 2007. Although touchscreen technology had been around for a long time, the iPhone brought it to the mass market. Initially, this was considered a huge setback to accessibility for blind consumers, as the new disruptive technology was based around a flat, featureless glass surface with no screen reader to provide text-to-speech; blind users were completely unable to access the native input or output functions of these devices. In 2009, however, Apple released the iPhone 3GS, which included the VoiceOver screen reader and a host of associated interactive gestures as part of the native operating system (iOS 3.0). Overnight, this release propelled Apple from a company that had ostensibly abandoned its long history of supporting BVI users to the leader in mobile accessibility. (TalkBack, the Android analog to VoiceOver, was also released in 2009, though it has been slower to gain momentum in the BVI community than its iOS counterpart.) Almost immediately, the iPhone became one of the most accessible pieces of assistive technology, even though it was not designed as an assistive technology in and of itself. VoiceOver was designed to assist BVI users on the iPhone, but it was built into the native OS rather than requiring an expensive, separate, stand-alone software package, as in the traditional model of selling screen-reader software. Beyond this universal design aspect, VoiceOver's inclusion had many unintended benefits for other markets that would not have been realized otherwise: self-voicing helps people using English as a second language, supports those with learning disabilities, and is used regularly for proofreading. It also revealed further pathways, with app developers leveraging features like the Siri personal assistant and other built-in sensors to support accessibility in a wide variety of applications, including apps that read barcodes, describe your surroundings, describe a picture, read money aloud, and so on [65].

The exponential growth and broad proliferation of touchscreen-based devices has been an amazing boon for access technology. For the first time, it is possible to replace most of the expensive, stand-alone devices previously required for information access with fully accessible apps on a phone. The rapid development of apps harnessing this power, mobile flexibility, and diversity of usage scenarios and user groups means that all roads (at least from a computing standpoint) lead through these technologies, with broad-based benefits that extend across demographics. Further, the incorporation of multimodal feedback (visual, aural, and tactile) expands the possibilities and capabilities of these new developments.
To maximize the broader impacts possible when incorporating inclusive/universal design, we strongly encourage developers to leverage all communication channels available from the onset of the design and implementation process.

2.6.3. A shift from relying on unimodal feedback to leveraging all modalities available for primary interactions

Many hardware platforms today rely heavily on unimodal feedback. Even when they have multimodal capabilities, those capabilities are significantly underutilized and sparsely implemented, often serving only input or output but not both, with additional modalities relegated to secondary or tertiary cueing. For example, touchscreens can currently provide visual, auditory, and vibrotactile information, yet they are generally thought of only as visual input/output interfaces. Despite built-in vibration capabilities, vibrotactile cues are usually used only to convey alerts or confirm an operation, not as a primary channel for extracting key information during user interactions or as input to the system. Acknowledging and enabling multimodal information as a primary means of input and output is an important design consideration moving forward. This chapter provides several examples of research illustrating the benefits of leveraging all modalities available on touchscreens, with a specific focus on their potential to address the graphical access problem for BVI individuals. We expect that several other unintended positive outcomes would follow if multimodal capabilities were leveraged equally in the user experience of touchscreens and other technologies.

2.6.4. A shift from designing based on interface features to designing based on end-user needs

A critical first step here is overcoming the engineering trap, that is, designing to maximize features and developer interests. The better approach is to adopt a principled, user-based design philosophy from the onset, one that considers the features most relevant to ensuring the greatest functional utility for the end user. The context of the technology's implementation, how it will be deployed and used, how it compares to current tools, and where it falls short or excels are all worthy of investigation. Most importantly, standards and guidelines are needed to scope when and where a given technology is (or is not) appropriate. Success here often requires interdisciplinary research that cuts across several domains, involves multiple stakeholders, and incorporates iterative end-user assessment and participation. While advancements in technology will certainly open new pathways, we, as designers, must remain open to and cognizant of the reality that more advanced technology does not necessarily mean an immediately better solution. New technologies and advancements should be probed from multiple perspectives and situated in practical use-case scenarios that consider known perceptual and cognitive capabilities. While this approach may not be the fastest or easiest path, it is the one that will best inform when and how a new product will succeed and when and where it will not. Our group has come together to do this for vibrotactile touchscreens, and we are encouraged by the growing number of teams adopting this design approach. We acknowledge that this user-centered, needs-based, principled design model takes a great deal of time and resources, and that all technology developments begin with feasibility studies. We hope to encourage communities of researchers and technology developers to come together to extend these inquiries and tackle this challenge from multiple perspectives, with the shared goal of driving it to its full potential. We further encourage researchers to disseminate and share their work and, when possible, to open SDKs, APIs, and hardware platforms for community access, contribution, and growth.


3. Conclusions and future research

We believe that a principled solution to graphical access, designed from the onset to maximize the perceptual and cognitive characteristics of nonvisual and multimodal information processing while also meeting the most pressing information access needs of the target demographic, could have broad and immediate societal impact. In this chapter, we highlight both the challenges and the vast potential of touchscreen-based smart devices as a platform for alleviating the graphics accessibility gap. We review the state of the art in this line of research and present positions and pathways forward for addressing the graphical access challenge from multiple perspectives. We do this specifically from an ideological standpoint, complementing the technological and perceptual advancements rapidly emerging from a growing research community in this domain. Despite the need for more research, we see vibrotactile touchscreen platforms as a promising springboard for bringing multimodal, nonvisual graphical access into the hands of individuals everywhere. Because of their portability, availability, capabilities, and wide adoption in the BVI community, multimodal touchscreen interfaces are poised both to close the accessibility gap and to serve as a model for universally designed consumer technologies that are also effective assistive technologies, reshaping how we think about accessibility in a new technological era.


Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant Nos. 1425337 and 1644471. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.


Conflict of interest

Jenna Gorlewicz is also Co-Founder and President of JLG Innovations, LLC. Hari Palani is also Co-Founder and President for Unar Lab, LLC. Nicholas Giudice is also Co-Founder and Director of Research for Unar Lab, LLC.

References

  1. Metatla O, Serrano M, Jouffrais C, Thieme A, Kane S, Branham S, et al. Inclusive education technologies: Emerging opportunities for people with visual impairments. In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems; ACM. 2018. p. W13
  2. Freedom Scientific [Internet]. 2018. Available from: https://www.freedomscientific.com/ [Accessed: 2018-09-08]
  3. Apple. Accessibility [Internet]. 2018. Available from: https://www.apple.com/accessibility/mac/vision/ [Accessed: 2018-09-08]
  4. Foulke E. Reading braille. In: Tactual Perception: A Sourcebook. Cambridge; New York: Cambridge University Press; 1982. p. 168
  5. Tennison JL, Gorlewicz JL. Toward non-visual graphics representations on vibratory touchscreens: Shape exploration and identification. In: International Conference on Human Haptic Sensing and Touch Enabled Computer Applications. Cham: Springer; 2016. pp. 384-395
  6. Craig JC. Grating orientation as a measure of tactile spatial acuity. Somatosensory & Motor Research. 1999;16(3):197-206
  7. Johnson KO, Phillips JR. Tactile spatial resolution. I. Two-point discrimination, gap detection, grating resolution, and letter recognition. Journal of Neurophysiology. 1981;46(6):1177-1192
  8. Loomis JM. Tactile pattern perception. Perception. 1981;10(1):5-27
  9. Loomis JM, Lederman SJ. Tactual perception. In: Handbook of Perception and Human Performance. 1986;2:2
  10. Loomis JM, Klatzky RL, Giudice NA. Sensory substitution of vision: Importance of perceptual and cognitive processing. In: Assistive Technology for Blindness and Low Vision. Boca Raton, FL: CRC Press; 2012. pp. 179-210
  11. Palani HP. Principles and guidelines for advancement of touchscreen-based non-visual access to 2D spatial information [thesis]. Orono, ME: University of Maine; 2018
  12. Clark-Carter DD, Heyes AD, Howarth CI. The efficiency and walking speed of visually impaired people. Ergonomics. 1986;29(6):779-789
  13. Erickson W, Lee C, von Schrader S. Disability Statistics from the 2011 American Community Survey (ACS). Ithaca, NY: Cornell University Employment and Disability Institute (EDI); 2012. Retrieved June 28, 2018 from www.disabilitystatistics.org
  14. Chua B, Mitchell P. Consequences of amblyopia on education, occupation, and long term vision loss. British Journal of Ophthalmology. 2004;88(9):1119-1121
  15. Kaye SH, Kang T, LaPlante MP. Mobility Device Use in the United States. Washington, DC; 2000
  16. World Health Organization. Visual impairment and blindness. Fact Sheet No. 282. 2011. Available from: http://www.who.int/mediacentre/factsheets/fs282/en/ [Accessed: 2012-06-02]
  17. Friedman DS, O'Colmain BJ, Munoz B, Tomany SC, McCarty C, De Jong PT, et al. Prevalence of age-related macular degeneration in the United States. Archives of Ophthalmology. 2004;122(4):564-572
  18. Roth WM, Bowen GM, McGinn MK. Differences in graph-related practices between high school biology textbooks and scientific ecology journals. Journal of Research in Science Teaching. 1999;36(9):977-1019
  19. National Federation of the Blind. Statistical Facts about Blindness in the United States. 2017. Available from: https://nfb.org/blindness-statistics [Accessed: 2018-09-08]
  20. Moon NW, Todd RL, Morton DL, Ivey E. Accommodating Students with Disabilities in Science, Technology, Engineering, and Mathematics (STEM). Atlanta, GA: Center for Assistive Technology and Environmental Access, Georgia Institute of Technology; 2012
  21. Nyman SR, Gosney MA, Victor CR. Psychosocial impact of visual impairment in working-age adults. British Journal of Ophthalmology. 2010;94(11):1427-1431
  22. ViewPlus. 2018. Available from: http://www.viewplus.com [Accessed: 2018-09-08]
  23. American Thermoform. Swell Touch Paper. 2018. Available from: http://www.americanthermoform.com/product/swell-touch-paper/ [Accessed: 2018-09-08]
  24. Edman PK. Tactile Graphics. New York: American Foundation for the Blind; 1992
  25. Buehler E, Kane SK, Hurst A. ABC and 3D: Opportunities and obstacles to 3D printing in special education environments. In: Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility; Oct 20; ACM. 2014. pp. 107-114
  26. Ducasse J, Brock AM, Jouffrais C. Accessible interactive maps for visually impaired users. In: Mobility of Visually Impaired People. Cham: Springer; 2018. pp. 537-584
  27. O'Modhrain S, Giudice NA, Gardner JA, Legge GE. Designing media for visually-impaired users of refreshable touch displays: Possibilities and pitfalls. IEEE Transactions on Haptics. 2015;8(3):248-257
  28. KGS Corporation. 2018. Available from: http://www.kgs-jpn.co.jp/ [Accessed: 2018-09-08]
  29. Help Tech. Handytech. 2018. Available from: https://handytech.de/en/welcome [Accessed: 2018-09-08]
  30. Vidal-Verdú F, Hafez M. Graphical tactile displays for visually-impaired people. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2007;15(1):119-130
  31. Nees MA, Walker BN. Auditory interfaces and sonification. In: Stephanidis C, editor. The Universal Access Handbook. New York: CRC Press; 2009. pp. 507-521
  32. Walker BN, Cothran JT. Sonification Sandbox: A Graphical Toolkit for Auditory Graphs. Georgia Institute of Technology
  33. Walker BN, Mauney LM. Universal design of auditory graphs: A comparison of sonification mappings for visually impaired and sighted listeners. ACM Transactions on Accessible Computing (TACCESS). 2010;2(3):12
  34. Ferres L, Lindgaard G, Sumegi L, Tsuji B. Evaluating a tool for improving accessibility to charts and graphs. ACM Transactions on Computer-Human Interaction (TOCHI). 2013;20(5):28
  35. Ferres L, Parush A, Roberts S, Lindgaard G. Helping people with visual impairments gain access to graphical information through natural language: The iGraph system. In: International Conference on Computers for Handicapped Persons. Berlin, Heidelberg: Springer; 2006. pp. 1122-1130
  36. Giudice NA, Palani HP, Brenner E, Kramer KM. Learning non-visual graphical information using a touch-based vibro-audio interface. In: Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility; 2012 Oct 22; ACM. pp. 103-110
  37. American Printing House for the Blind. Graphiti. 2018. Available from: http://www.aph.org/graphiti/ [Accessed: 2018-09-08]
  38. Blitab Technology. 2018. Available from: http://blitab.com/ [Accessed: 2018-09-08]
  39. Siu AF, Gonzalez EJ, Yuan S, Ginsberg JB, Follmer S. shapeShift: 2D spatial manipulation and self-actuation of tabletop shape displays for tangible and haptic interaction. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems; Apr 21; ACM. 2018. p. 291
  40. Russomanno A, Gillespie RB, O'Modhrain S, Burns M. The design of pressure-controlled valves for a refreshable tactile display. In: World Haptics Conference (WHC); 2015 Jun 22; IEEE. 2015. pp. 177-182
  41. Martino G, Marks LE. Cross-modal interaction between vision and touch: The role of synesthetic correspondence. Perception. 2000;29(6):745-754
  42. Thesen T, Vibell JF, Calvert GA, Österbauer RA. Neuroimaging of multisensory processing in vision, audition, touch and olfaction. Cognitive Processing. 2004;5(2):84-93
  43. Pensky AE, Johnson KA, Haag S, Homa D. Delayed memory for visual-haptic exploration of familiar objects. Psychonomic Bulletin & Review. 2008;15(3):574-580
  44. Ricciardi E, Bonino D, Gentili C, Sani L, Pietrini P, Vecchi T. Neural correlates of spatial working memory in humans: A functional magnetic resonance imaging study comparing visual and tactile processes. Neuroscience. 2006;139(1):339-349
  45. Giudice NA. Navigating without vision: Principles of blind spatial cognition. In: Handbook of Behavioral and Cognitive Geography. 2018. p. 260
  46. Choi S, Kuchenbecker KJ. Vibrotactile display: Perception, technology, and applications. Proceedings of the IEEE. 2013;101(9):2093-2104
  47. Hoggan E, Anwar S, Brewster SA. Mobile multi-actuator tactile displays. In: International Workshop on Haptic and Audio Interaction Design. Berlin, Heidelberg: Springer; 2007. pp. 22-33
  48. Hoggan E, Brewster SA, Johnston J. Investigating the effectiveness of tactile feedback for mobile touchscreens. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; ACM. 2008. pp. 1573-1582
  49. Hoggan E, Kaaresoja T, Laitinen P, Brewster SA. Crossmodal congruence: The look, feel and sound of touchscreen widgets. In: Proceedings of the 10th International Conference on Multimodal Interfaces; ACM. 2008. pp. 157-164
  50. Poupyrev I, Maruyama S, Rekimoto J. Ambient touch: Designing tactile interfaces for handheld devices. In: Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology; ACM. 2002. pp. 51-60
  51. Yatani K, Truong KN. SemFeel: A user interface with semantic tactile feedback for mobile touch-screen devices. In: Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology; ACM. 2009. pp. 111-120
  52. Nicolau H, Guerreiro J, Guerreiro T, Carriço L. UbiBraille: Designing and evaluating a vibrotactile braille-reading device. In: Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility; 2013. p. 23
  53. Al-Qudah Z, Doush IA, Alkhateeb F, Al Maghayreh E, Al-Khaleel O. Reading braille on mobile phones: A fast method with low battery power consumption. In: 2011 International Conference on User Science and Engineering (i-USEr); IEEE. 2011. pp. 118-123
  54. Jayant C, Acuario C, Johnson W, Hollier J, Ladner R. V-Braille: Haptic braille perception using a touch-screen and vibration on mobile phones. In: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility; ACM. 2010. pp. 295-296
  55. Rantala J, Raisamo R, Lylykangas J, Surakka V, Raisamo J, Salminen K, et al. Methods for presenting braille characters on a mobile device with a touchscreen and tactile feedback. IEEE Transactions on Haptics. 2009;2(1):28-39
  56. Gorlewicz JL, Burgner J, Withrow TJ, Webster RJ III. Initial experiences using vibratory touchscreens to display graphical math concepts to students with visual impairments. Journal of Special Education Technology. 2014;29(2):17-25
  57. Klatzky RL, Giudice NA, Bennett CR, Loomis JM. Touch-screen technology for the dynamic display of 2D spatial information without vision: Promise and progress. Multisensory Research. 2014;27(5-6):359-378
  58. Poppinga B, Magnusson C, Pielot M, Rassmus-Gröhn K. TouchOver map: Audio-tactile exploration of interactive maps. In: Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services; ACM. 2011. pp. 545-550
  59. Palani HP, Giudice U, Giudice NA. Evaluation of non-visual zooming operations on touchscreen devices. In: International Conference on Universal Access in Human-Computer Interaction. Cham: Springer; 2016. pp. 162-174
  60. Giudice NA, Palani HP, Brenner E, Kramer KM. Learning non-visual graphical information using a touch-based vibro-audio interface. In: Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility; 2012 Oct 22; ACM. pp. 103-110
  61. ViTAL. 2018. Available from: https://www.vital.education/ [Accessed: 2018-09-08]
  62. Hahn ME, Mueller CM, Gorlewicz JL. The comprehension of STEM graphics via a multi-sensory tablet in students with visual impairment. Journal of Visual Impairment and Blindness. 2018. In press
  63. Tennison JL, Carril ZS, Giudice NA, Gorlewicz JL. Comparing haptic pattern matching on tablets and phones: Large screens are not necessarily better. Optometry and Vision Science. 2018;95(9):720-726
  64. Grussenmeyer W, Folmer E. Accessible touchscreen technology for people with visual impairments: A survey. ACM Transactions on Accessible Computing (TACCESS). 2017;9(2):6
  65. Braille Works. 5 Top Mobile Apps for the Blind. 2015. Available from: https://brailleworks.com/5-top-mobile-apps-for-the-blind/ [Accessed: 2018-09-08]
