
The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward

By Jenna L. Gorlewicz, Jennifer L. Tennison, Hari P. Palani and Nicholas A. Giudice

Submitted: October 8th, 2018. Reviewed: October 29th, 2018. Published: December 11th, 2018.

DOI: 10.5772/intechopen.82289

Abstract

Graphical access is one of the most pressing challenges for individuals who are blind or visually impaired. This chapter discusses some of the factors underlying the graphics access challenge, reviews prior approaches to addressing this long-standing information access barrier, and describes some promising new solutions. We specifically focus on touchscreen-based smart devices, a relatively new class of information access technologies, which our group believes represent an exemplary model of user-centered, needs-based design. We highlight both the challenges and the vast potential of these technologies for alleviating the graphics accessibility gap and share the latest results in this line of research. We close with recommendations on ideological shifts in mindset about how we approach solving this vexing access problem, which will complement both technological and perceptual advancements that are rapidly being uncovered through a growing research community in this domain.

Keywords

  • haptics
  • touchscreen-based accessibility
  • vibrotactile displays
  • multimodal interfaces
  • information-access technologies

1. Introduction

Lack of access to graphical information represents one of the most pervasive information access challenges faced by people who are blind and visually impaired (BVI). Graphical information is ubiquitous in today’s digital world, and the vast majority of this content is accessible only visually, regardless of setting. For instance, consider looking at graphs in a work report, diagrams in a classroom, figures in a magazine article, images on the internet, photographs of friends on social networking sites, or maps for determining your location and finding routes through an unfamiliar building or city. All of these scenarios involve highly visual, digital information that is frequently conveyed only via graphical formats, excluding low- and no-vision individuals from the content. Many of these visual products can, in principle, be accessed through alternative means: figures have captions, web-based images have labels, social media photos are tagged, and so on. In practice, however, such text-based descriptions are the exception rather than the rule, and when available, they are generally short and imprecise, failing to capture much of the information conveyed by the graphical rendering, as they are not designed to do so. One need only read a few alt tags of graphics on the web to see how poorly these text descriptions convey what is represented in the graphical depiction. Diversification of design to meet a range of accessibility needs in the digital space can make the information given more valuable to users who must access information in a different way [1]. With more content moving to the electronic space, it is paramount that new solutions for graphical information access are explored in the digital domain.

The aim of this chapter is to discuss some of the factors underlying the graphics access problem faced by people who are BVI and to describe the latest class of technologies and techniques that we believe have the most potential to mitigate the problem. We first characterize the persistent challenges that have perpetuated this long-standing information access issue. We then describe some general approaches developed throughout the years to address this challenge. We specifically focus on the role of touchscreen-based smart devices (e.g., phones and tablets), which our group believes are a promising solution moving forward. We then discuss some of the advantages and disadvantages of these devices and share a few ideological positions that we believe must be advanced if we are to truly address the graphical access challenge in the context of new technology development. This chapter sets forth a clear position on the efficacy of this class of information access technology (IAT) and advocates some paradigm shifts in the way that we think about addressing this vexing access problem. It is also meant to serve as a reference for researchers and developers interested in promoting graphical accessibility via new technologies such as touchscreens.

2. Graphical access for people with visual impairments

2.1. The persistent challenge of graphical access

We start by highlighting an important distinction in nonvisual information access between textual and nontextual information sources. Access to printed, text-based material has largely been solved for BVI individuals owing to significant advances over the past 30-plus years in the development of screen-reading software using text-to-speech engines (e.g., JAWS for Windows [2] or VoiceOver for the Mac and iOS-based devices [3]). Indeed, long before these digital speech-based solutions, the Braille code provided a robust system for conveying alpha-numeric information, as well as other literary, mathematical, and musical symbols embossed on hardcopy paper (for a review of the history of Braille, see [4]). The development of dynamic, refreshable Braille-display technologies since the 1970s has provided real-time access to Braille text, often in conjunction with synthetic speech via the aforementioned screen-reader software packages. These hardware and software solutions differ widely in their form factor, connectivity, available features, and languages supported, but they share a common shortcoming: they provide access only to textual information. The crux of the problem is that graphical information is almost exclusively rendered visually. In contrast to accessing text-based material, there is no analogous low-cost, intuitive, and commercially available solution for providing individuals who are BVI with dynamic access to visually rendered graphical content. Compounding the problem, compared to the wealth of knowledge that exists about human visual information processing, there is far less basic research addressing the sensory, perceptual, and cognitive factors that are critical for accurate encoding, interpretation, and representation of graphical information rendered using nonvisual channels such as audition or touch. While earlier studies have evaluated many human information processing characteristics for tangible graphics (i.e., pressure-based physical stimuli) [5, 6, 7, 8, 9], these results cannot ensure saliency when adopted for rendering digital graphical elements on touchscreen interfaces (see [10, 11] for discussion). The reason stems from the nature of the stimuli and their mechanism of delivery: vibrations from flat touchscreens provide no direct cutaneous cues as are afforded by traditional raised graphics, and they trigger different sensory receptors than those used when encoding traditional “raised” tactile graphics or models.

Lack of access to graphical material is more than a mere frustration or hindrance. Indeed, we argue that it represents one of the biggest challenges to the independence and productivity of individuals who are BVI and has had significant detrimental effects on the educational, vocational, and social prospects of this demographic. In support, consider troubling statistics estimating that up to 30% of blind people do not travel independently outside of their home [12], that only ~11% of persons who are BVI have a bachelor’s degree [13], and that over 70% of this demographic is unemployed or under-employed [14, 15]. This is not an isolated problem: over 12 million people in the U.S. and 285 million people worldwide are estimated to have some form of significant and uncorrected visual impairment [16]. Unfortunately, this problem is rapidly growing, and the current information gap will likely widen without a tractable solution because: (1) the incidence of visual impairment is projected to double by 2030 owing to the aging of our population [17], (2) graphics are increasingly being used as the preferred medium of information exchange, and (3) print-based content is rapidly moving to the digital space. The growing reliance on graphical content is especially evident in educational contexts, where it has been estimated that scientific textbooks and journals contain 1.3 graphical representations per page [18]. The inability of students who are BVI to access this rich graphical content certainly helps explain the particularly low inclusion and success of this demographic in STEM disciplines [19, 20]. Outside of education, the inaccessibility of many sources of information used in daily life also inevitably contributes to the greater social isolation and depression experienced by individuals who are BVI [21]. Without question, a significant component of improving these statistics (and more importantly, benefitting the lives of BVI individuals at large) involves solving the long-standing information gap caused by lack of access to graphical materials in these domains.

2.2. Current solutions for graphical access

Traditional approaches to creating accessible, tangible graphics include the use of: (1) a tactile embosser to produce hardcopy raised graphics (e.g., the Tiger embosser [22]); (2) renderings made on heat-sensitive swell paper (e.g., [23]); (3) physical manipulatives that are pinned or velcroed to a board [24]; or, more recently, (4) 3D-printed models or manipulatives [25]. Figure 1 provides examples of these materials.

Figure 1.

Examples of traditional methods used to convey graphics (e.g., swell paper, embosser, Wikki Stix).

While these techniques certainly work, they also have several significant shortcomings that limit their efficacy as a robust and broadly applicable solution. The principal drawbacks of these solutions include: (1) the authoring process is often slow and cumbersome and typically requires an individual skilled in creating tactile graphics, (2) the equipment can be prohibitively expensive (e.g., a Tiger embosser can cost between $5,000 and $15,000, see [22]), (3) the technology is based on single-purpose hardware, often requiring individuals to use an “army of devices” in their daily life, (4) the output is a static representation that can quickly become obsolete and is neither easy nor quick to update, and (5) the output is largely restricted to a single modality (i.e., touch). A lengthier discussion of these limitations and the challenges they pose can be found in Ducasse, Brock, and Jouffrais’ review of maps for individuals who are BVI [26].

Some of these barriers have been addressed through technology development, with the biggest benefit coming from the use of dynamic touch-based interfaces. For instance, a host of refreshable tactual technologies have been developed based on force feedback, refreshable pin arrays, microfluidics, and moldable alloys. The thorough review by O’Modhrain and colleagues details the pros and cons of each of these approaches [27]. While such developments are pushing the boundaries of new haptic technologies as a means of access, these solutions are neither widely available nor broadly adopted. This is likely due to several factors, including the high cost and lack of commercial availability associated with most of the haptic systems, the complex manufacturing and fabrication processes required for some of the technologies, and the need for additional hardware that only adds to the host of access devices and technologies already used by BVI persons.

Low-cost, large-format, dot-based graphic displays have been promised for decades, and some examples are or were commercially available, such as the DotView from KGS Corporation [28] or the Graphic Window by Handytech [29, 30]. Other approaches have exploited auditory solutions, converting the visually-based information into an acoustic format that employs different sonification techniques and auditory parameters (e.g., pitch, loudness, timbre, or tempo) to convey the graphical content [31, 32, 33]. Additional efforts have explored utilizing language-based descriptions to convey graphical information [34, 35]. Auditory and verbal approaches, however, are not optimal, as they are based on an interpretive medium that requires cognitive mediation and sustained attention [36]. Such feedback can also be distracting when accessing information in quiet environments, such as in classrooms or in a meeting while simultaneously trying to listen to presenters. In addition, we argue that these auditory/linguistic approaches are not as well suited to conveying spatial graphics as are touch-based solutions, because they do not directly specify spatial relations or provide the kinesthetic feedback that enables spatial organization of information.
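To ground the sonification approach just described, consider a minimal sketch in Kotlin for Android, the platform family this chapter focuses on. It maps each value in a data series to a tone whose pitch encodes the value’s magnitude, so a rising series is heard as rising pitch. The function names, the 220–880 Hz band, and the timing constants are our own illustrative assumptions, not parameters from the systems cited above.

```kotlin
import android.media.AudioFormat
import android.media.AudioManager
import android.media.AudioTrack
import kotlin.math.PI
import kotlin.math.sin

// Map a value's position within its range onto a 220-880 Hz pitch band,
// so larger values are heard as higher tones (illustrative range only).
fun pitchFor(value: Double, min: Double, max: Double): Double {
    val t = if (max > min) (value - min) / (max - min) else 0.0
    return 220.0 + t.coerceIn(0.0, 1.0) * (880.0 - 220.0)
}

// Synthesize and play a short sine tone at the given frequency.
fun playTone(freqHz: Double, durationMs: Int, sampleRate: Int = 44100) {
    val numSamples = sampleRate * durationMs / 1000
    val samples = ShortArray(numSamples) { i ->
        (sin(2.0 * PI * freqHz * i / sampleRate) * Short.MAX_VALUE * 0.8)
            .toInt().toShort()
    }
    val track = AudioTrack(
        AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        samples.size * 2, AudioTrack.MODE_STATIC
    )
    track.write(samples, 0, samples.size)
    track.play() // a sketch: release() bookkeeping is omitted
}

// Sonify a data series as a tone sequence whose pitch encodes each value.
fun playSeries(data: List<Double>) {
    val min = data.minOrNull() ?: return
    val max = data.maxOrNull() ?: return
    for (v in data) {
        playTone(pitchFor(v, min, max), durationMs = 150)
        Thread.sleep(180) // short gap so successive tones are distinguishable
    }
}
```

Production sonification systems layer additional parameters (loudness, timbre, tempo) and careful psychoacoustic scaling on top of this basic value-to-pitch mapping.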

The above notable approaches have certainly pushed the possibilities of graphical access, yet it is important to note that simply providing dynamic nonvisual information is not sufficient for conveying and learning graphical materials. To truly close the information gap, it is also necessary to consider design characteristics that will lead to user acceptance and adoption by the BVI community: the solution must be inexpensive, multi-purpose, multimodal, and readily available. Indeed, many of the solutions discussed above are generally relegated to highly specialized applications and require purpose-built equipment that is designed for specific users, to support specific tasks or needs, in a specific situation or environment. This specificity means that most haptic IATs, even if effective, are too expensive, too limited in their usage applications, too cumbersome, and unduly subject to obsolescence to be viable, long-term information-access solutions for BVI users. A growing number of new technologies coming to market build upon previous work, such as the Graphiti, American Printing House (APH)’s dynamic touch-sensitive pin array [37]; the BLITAB tablet, which can render a full page of braille [38]; shapeShift, a refreshable multi-height pin display that can render 3D objects and dynamic movement [39]; and microfluidic-based tablets capable of rendering refreshable raised dots (e.g., [40, 27]) (see Figure 2). Most of these devices, however, are still in the research phase, and many still suffer from high component costs or reliance on hardware-specific platforms, reducing the likelihood that they become a mainstream solution.

Figure 2.

New innovative solutions being developed for individuals who are BVI: upper left—demonstration of shapeShift (multi-height pin array) [39]; upper right—Graphiti (refreshable pin array) [37]; lower left—BLITAB (refreshable pin tablet) [38]; and lower right—Holy Braille (microfluidic tablet) [40].

While the above innovative approaches have various benefits, we posit that a more broadly adoptable solution is to use technology that: (1) provides direct perceptual access to the graphical content, as is the case with visual access, (2) is (or could be) mass marketed and readily available among end users, and (3) is based on a computational platform that can be leveraged for other functions and activities. We argue that this is best accomplished using dynamic touch-based (or multimodal) displays implemented on smart devices (phones/tablets). We believe that interfaces leveraging direct touch access are critical to solving the graphical access problem, as touch has much in common with visual spatial perception, sharing many parallels with the visual pathways in the brain (e.g., [41, 42]). For example, both modalities extract the basic features and spatiality of an object in the environment and integrate this information to form a complete, coherent representation of the object in memory, lending credence to parallel or shared channels in perception [43, 44]. Further, auditory and verbal approaches often involve more cognitive effort and are thus less “perceptual” than touch-based or visually-based information displays [45]. This is not to say that auditory and verbal approaches should be ignored. To the contrary, we believe in synergizing all available modalities, as is done in some capacity on current vibrotactile touchscreen platforms today, and leveraging the appropriate constituent inputs to best support the information to be rendered and the task to be performed. While there are various types of haptic displays, each with their own strengths and weaknesses, the position advanced in this chapter is that vibrotactile stimulation, when paired with a touchscreen-equipped smart device (e.g., phone or tablet) and other output channels, is a highly promising approach for solving the nonvisual graphics access problem. We believe this platform is quickly becoming the de facto gold standard for IAT and offers a solution that has a high likelihood of being accepted and adopted among its end users, which should be the goal of any IAT design.

2.3. Why vibrotactile, touchscreen-based smart solutions?

We have all experienced our phone vibrating in our pocket to indicate an incoming call or to alert us of an upcoming meeting. However, beyond soliciting our attention, providing simple alerts, signaling a confirmation or error, or any number of other instances of secondary or tertiary cuing, people rarely consider vibrotactile feedback as a primary interaction style. On the one hand, this is surprising given the multitude of common interactions we experience that involve vibration in one capacity or another. Consider the slight detents you feel when spinning the scroll wheel on your computer mouse or the volume dial on your car radio, the signal from your electric toothbrush indicating to brush in another location, the rumble from your game controller indicating an undesired behavior, the alert from the buzzer indicating that your party is being summoned at a restaurant, the vibrating seat in your car indicating that you are backing up near an obstruction, and a myriad of other haptic implementations in current technologies that employ vibrotactile cues for nonvisually conveying relevant information. On the other hand, even if informative, this vibration is usually either an unintended byproduct of an action (e.g., vibration from approaching an obstacle) or a secondary cue that is part of a primary interface (e.g., detents that simply provide frictional control over a spinning wheel/dial); it is often not necessary for the interface’s function or primary operation. In this chapter, we argue that this need not be the case: vibrotactile feedback is not only vastly underutilized in current interface design, but vibration can serve as a primary mode of user interaction, especially in conditions where visual access is not possible, such as for use by individuals who are BVI or in eyes-free applications (e.g., driving). We now summarize the current state of research on vibrotactile touchscreen displays before sharing four positions our group believes are needed to address the graphical access challenge moving forward.

2.4. Research brief on vibrotactile touchscreen displays

A growing body of research has demonstrated the efficacy of using touchscreen-based devices with vibrotactile or vibrotactile-plus-auditory feedback as a primary interaction style for conveying graphical information. Choi and Kuchenbecker provide an excellent review of vibrotactile displays from both a perceptual and technological perspective, summarizing foundational knowledge in this area and providing implementation guidelines for exemplary applications [46]. Brewster and colleagues have also done extensive work exploring tactile feedback, particularly on mobile platforms, and have demonstrated how structured tactile messages (Tactons) can be used to communicate information using different vibration features [47, 48, 49]. Other research has demonstrated that vibrotactile feedback enables users to complete scrolling and inputting tasks faster on a mobile device compared to interfaces that lack such feedback [50, 51], and can improve textual reading in braille (e.g., [52, 53, 54, 55]).

More recent examples have focused on using vibrotactile touchscreen platforms for conveying graphics. One project has shown that lines (linear and non-linear) and basic shapes (e.g., circles, triangles, squares) can be successfully interpreted and followed nonvisually through haptic, audio, and haptic-audio access on the touchscreen [5]. Further examples demonstrating the efficacy of this approach were shown when exploring grids [56], graphs [57], maps [58], and nonvisual panning and zooming of large-format vibrotactile maps that extended beyond the device’s display [11, 59]. In aggregate, this research clearly illustrates the broad potential of this multimodal approach. Work with a prototype system based on a commercial tablet, called a vibro-audio interface (VAI), has shown near-identical accuracy between the VAI and hardcopy tactile stimuli for graph interpretation, pattern detection, and shape recognition [60]. In corroboration, studies by Gorlewicz and colleagues have demonstrated no significant differences in the interpretation of a variety of graphics, including bar graphs, pie charts, tables, number lines, line graphs, and simple maps, presented either in embossed form or multimodally on a touchscreen via Vital [61, 62]. These studies show not only the efficacy of this interface but also that this multimodal platform can achieve performance similar to the gold standard of hardcopy graphics. More recent work by our group has explored the effect of screen size on the success of these tasks (e.g., tablets versus smaller mobile platforms), showing that performance on a pattern-matching task is equivalent across small and large screen sizes [63]. Even though vibration is a low-resolution output mode, these data show that vibrotactile graphics can still be used effectively and accurately when rendered on the smaller form factor of phone-sized smart devices. This is a positive finding, as the majority of BVI users of smart devices are using mobile phones. A recent review by Grussenmeyer and colleagues provides a thorough survey of how touchscreen-based technologies have been used to support information access by people who are BVI and reiterates the prevalent challenges that remain to bring full inclusion to this population [64].
In short, these projects suggest promising pathways forward for vibrotactile touchscreens, supported by empirical evidence and positive qualitative feedback on their capacity to convey multimodal information for the interpretation of visual graphics. Moreover, these platforms offer several significant advantages over one-off information-access hardware, the primary benefits being portability, multi-functional use, relative affordability, and widespread adoption and support by the BVI demographic. Indeed, vibrotactile touchscreens provide a robust multimodal framework which, if continually developed in conjunction with advances in touchscreen-based smart devices, has the potential to become the de facto, universal means of accessing graphics in a multimodal, digital form (for example, see Figure 3). A universal, multimodal platform that is widely available is not only beneficial for the BVI population but extends to many others who benefit from multimodal learning platforms and the brain’s capacity to process both redundant and complementary information from different senses.

Figure 3.

Touchscreens can leverage both auditory and vibrotactile feedback to convey rich information without the need to look at the screen.
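The published VAI and Vital implementations are not reproduced here, but the core interaction they share can be sketched simply: screen regions occupied by a graphic vibrate under the finger, so a user can trace a line by keeping the fingertip inside a vibrating band. The hypothetical Android view below illustrates that pattern; the class name, the straight-line geometry, and the 40-pixel tolerance band are our own illustrative assumptions.

```kotlin
import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator
import android.view.MotionEvent
import android.view.View
import kotlin.math.abs

// Renders a single "vibrating line": the motor runs whenever the finger
// is within a tolerance band of the line, and stops when it slides off.
class VibroLineView(context: Context) : View(context) {
    private val vibrator =
        context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    private val toleranceBandPx = 40f // perceptual "width" of the line

    // Stand-in graphic: one data series modeled as y = slope * x + intercept.
    private val slope = 0.5f
    private val intercept = 200f

    override fun onTouchEvent(event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE -> {
                val lineY = slope * event.x + intercept
                if (abs(event.y - lineY) <= toleranceBandPx) {
                    // Finger is on the line: re-trigger a short pulse on each
                    // move event so the buzz feels continuous while tracing.
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                        vibrator.vibrate(
                            VibrationEffect.createOneShot(
                                50, VibrationEffect.DEFAULT_AMPLITUDE
                            )
                        )
                    } else {
                        @Suppress("DEPRECATION")
                        vibrator.vibrate(50)
                    }
                } else {
                    vibrator.cancel() // finger slid off the line
                }
            }
            MotionEvent.ACTION_UP -> vibrator.cancel()
        }
        return true
    }
}
```

Real systems add hit-testing for arbitrary paths, distinct vibrations for different element types, and auditory or speech overlays, but this feel-the-line loop is the perceptual core.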

2.5. Positions and pathways forward

While there are promising pathways forward, the graphical access challenge for BVI individuals remains a vexing and largely unsolved problem. We argue that the solution requires advancements on several fronts, including ideological, technological, and perceptual. While there has been significant research advancing our understanding of the technological and perceptual pieces (as illustrated in the vibrotactile touchscreen use case presented here), we also want to call the community to consider new ideological perspectives that will advance the field as a whole. Specifically, we present four positions that our group views as necessary for moving closer to addressing the graphical access challenge and that we see as being best addressed by vibrotactile touchscreen technology:

  1. A shift from thinking of assistive technologies as single-purpose, specialized hardware solutions to considering mainstream technologies (and simple adaptations to them) as the first choice for a development platform.

  2. A shift from the traditional approach of retrofitting existing technologies for accessibility to embedding universal design in technologies from the outset.

  3. A shift from using unimodal feedback as a primary mode of interaction to leveraging all available modalities for primary interactions.

  4. A shift from designing based on features and capabilities to a principled design approach driven by end-user needs and scoped by practical guidelines supporting efficient and effective usage and implementation.

We briefly elaborate on these positions below.

2.6. Ideological requirements

2.6.1. A shift from using single-purpose, specialized hardware solutions to considering mainstream, multi-use technologies

To truly advance this class of technology, we need a shift from thinking of assistive technologies as specialized, single-purpose hardware/software supporting a single (niche) user group to thinking of them as capabilities incorporated in commercial platforms that support multiple functions and can be used by a broad range of people. Of course, specialized equipment is necessary in certain instances: if you want a hardcopy page of braille or an embossed physical tactile map, you will need a specialized Braille/graphics embosser. In many instances, however, nonvisual access to information can be delivered using standard commercial devices, which has the advantage of vastly decreasing development costs and purchase price, thereby increasing actual adoption by BVI users. One example is text-to-speech engines, which provide access to visually-based textual information on the screen via speech output. While an intervening software layer is needed to efficiently analyze the visual display model and represent this information in an intuitive manner for auditory output, the requisite hardware (a sound card and speaker output) is already available on almost all commercial devices. Adding speech input requires a microphone, which all smart devices also include, as is embedded speech-to-text software. In the spirit of this chapter, this idea can be extended to include tactile feedback. Many current touchscreen displays have vibration capabilities in some form, and using the standard vibration motor can open pathways to a whole new universe of haptic information that can augment, complement, or completely replace other modes of feedback.
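As a concrete illustration of this point, the stock vibration motor on an Android device can already produce distinguishable haptic “signatures” through the standard platform API alone, with no added hardware. The sketch below is illustrative only; the particular timing and amplitude values are our own assumptions, not validated mappings.

```kotlin
import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator

// Play a vibration pattern using only the built-in motor. Different
// timing/amplitude patterns can stand in for different kinds of graphical
// elements (e.g., an axis versus a grid line).
fun playPattern(context: Context, timingsMs: LongArray, amplitudes: IntArray) {
    val vibrator =
        context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        // -1 = play the pattern once, without repeating; per-segment
        // amplitude control needs API 26+ and motor hardware support.
        vibrator.vibrate(VibrationEffect.createWaveform(timingsMs, amplitudes, -1))
    } else {
        @Suppress("DEPRECATION")
        vibrator.vibrate(timingsMs, -1)
    }
}

// Illustrative signatures (timings alternate off/on, starting with off):
val axisTimings = longArrayOf(0, 80, 40, 80)  // two firm pulses
val axisAmplitudes = intArrayOf(0, 255, 0, 255)
val gridTimings = longArrayOf(0, 25)          // one faint tick
val gridAmplitudes = intArrayOf(0, 100)
```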

As such, the traditional notion of developing highly specialized assistive technology for specific groups of users (e.g., BVI users) as a completely separate process from mainstream technology needs to be reconsidered. This shift is more about mindset than about the technology itself. That is, designers of assistive technology should start with the goal of using commercial hardware and existing software platforms when possible. They should first consider how to creatively use the built-in components of the system and the existing feature set of the interface to solve the problem before resorting to specialized one-off hardware or software development. Using existing hardware, computational platforms, sensors, and other components when possible, and implementing the access layer in software to the greatest extent possible, improves the overall commercial product while also reducing the cost of developing accessible technologies at large.

2.6.2. A shift from retrofitting existing technologies to embedding universal design from the outset

We posit that mass-market companies (and researchers) developing mainstream products should embrace the notion of universal and inclusive design in their R&D process, as this not only results in products that benefit the greatest number of users (thereby increasing the pool of potential customers) but also has many unintended positive results that better support core users. Consider Apple, which released a completely inaccessible product, the iPhone, in 2007. Although touchscreen technology had been around for a long time, Apple’s 2007 introduction of the iPhone brought touchscreens to the mass market. Initially, this was considered a huge setback to accessibility for blind consumers, as this new disruptive technology was based around a flat, featureless glass surface with no screen reader to provide text-to-speech. As such, blind users were completely unable to access the native input or output functions of these devices.

However, in 2009, Apple released the iPhone 3GS, which included the VoiceOver screen reader and a host of associated interactive gestures as part of the native operating system (iOS 3.0). Overnight, this release propelled Apple from a company that had ostensibly abandoned its long history of supporting BVI users to the leader in mobile accessibility. TalkBack, the Android analog to VoiceOver, was also released in 2009, though it has been slower to gain momentum among the BVI community compared with iOS-based devices. Almost immediately, the iPhone became one of the most accessible pieces of assistive technology, even though it was not designed to be an assistive technology in and of itself. For example, VoiceOver was designed to assist BVI users on the iPhone, but it was built into the native OS rather than requiring an expensive, separate, stand-alone software package, as is the traditional model of selling screen-reader software. In addition to this universal design aspect, VoiceOver’s inclusion had many unintended benefits for other markets that would not have been realized had it not been included. For instance, self-voicing benefits people using English as a second language, it helps those with learning disabilities, and it is used regularly by individuals for proofreading. This revealed further pathways, as app developers leveraged features like the Siri personal assistant and other built-in sensors to develop apps that support accessibility in a wide variety of applications, including apps that can read barcodes, tell you about your surroundings, describe a picture to you, read money to you, and so on [65].

The exponential growth and broad-based proliferation of touchscreen-based devices has been an amazing boon for access technology. For the first time, it is now possible to replace most of the expensive, stand-alone devices that were previously required for information access with fully accessible apps on the phone. The rapid development of apps harnessing this power, mobile flexibility, diversity of usage scenarios, and range of user groups means that all roads (at least from a computing standpoint) lead to incorporating some aspect of these technologies, and this has broad-based benefits that extend across demographics. Further, the incorporation of multimodal feedback (visual, aural, and touch) expands the possibilities and capabilities that can be achieved through these new developments. To maximize the broader impacts possible when incorporating inclusive/universal design, we strongly encourage developers to leverage all available communication channels from the outset of the design and implementation process.

2.6.3. A shift from relying on unimodal feedback to leveraging all modalities available for primary interactions

Many hardware platforms today rely heavily on unimodal feedback. Even when they have multimodal capabilities, those capabilities are significantly underutilized and sparsely implemented. Additionally, many of them are implemented only as a means of input or output, but not both, with additional modalities used only for secondary or tertiary cueing. For example, touchscreens can currently provide visual, auditory, and vibrotactile information, yet they are generally thought of only as visual input/output interfaces. Despite built-in vibration capabilities, vibrotactile cues are usually used only for conveying alerts or confirming an operation, not as a primary mode of extracting key information during user interactions or as input to the system. Acknowledging and enabling multimodal information as a primary means of input and output interaction is an important design consideration moving forward. This chapter provides several examples of research illustrating the benefits of leveraging all modalities available on touchscreens, with a specific focus on their potential to address the graphical access problem for BVI individuals. We note that several other unintended positive outcomes would likely result if multimodal capabilities were leveraged equally in the user experience on touchscreens and other technologies.
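A minimal sketch of what treating modalities as co-equal primary outputs might look like on Android: the same touch event drives both the vibration motor and the built-in text-to-speech engine, so neither channel is merely an alert. The class and method names here are our own; this illustrates the design stance, not a prescribed implementation.

```kotlin
import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator
import android.speech.tts.TextToSpeech

// Both channels fire on the same touch event: vibration marks *where* the
// element is under the finger, while speech says *what* it is.
class MultimodalFeedback(context: Context) {
    private val vibrator =
        context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    private var ttsReady = false
    private val tts = TextToSpeech(context) { status ->
        ttsReady = (status == TextToSpeech.SUCCESS)
    }

    // Call when the finger enters a graphical element (e.g., a bar in a chart).
    fun onElementTouched(label: String) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            vibrator.vibrate(
                VibrationEffect.createOneShot(40, VibrationEffect.DEFAULT_AMPLITUDE)
            )
        } else {
            @Suppress("DEPRECATION")
            vibrator.vibrate(40)
        }
        if (ttsReady) tts.speak(label, TextToSpeech.QUEUE_FLUSH, null, "element")
    }

    fun shutdown() = tts.shutdown()
}
```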

2.6.4. A shift from designing based on interface features to designing based on end-user needs

A critical first step here is overcoming the engineering trap, i.e., designing based on maximizing features and developer interests. The better approach is adopting a principled, user-based design philosophy from the outset that considers the most relevant features ensuring the greatest functional utility for the end user. The context of the technology implementation, how it will be deployed and used, how it compares to current tools, and where it falls short or excels are all worthy investigations that need to be explored. Most importantly, standards and guidelines are needed to scope when and where a given technology is (or is not) appropriate. Success here often requires interdisciplinary research that cuts across several domains, involves multiple stakeholders, and incorporates iterative end-user assessment and participation. While advancements in technology will certainly open up new pathways, we, as designers, must also be open to and cognizant of the reality that more advanced technology does not necessarily mean an immediately better solution. New technologies and advancements should be probed from multiple perspectives and should be situated and contextualized in practical use-case scenarios that consider known perceptual and cognitive capabilities. While this approach may not be the fastest or easiest path, it is certainly the one that will best inform when and how a new product will be most successful and when and where it will not work. Our group has come together to do this for vibrotactile touchscreens, and we are encouraged by the growing number of teams who are also adopting this design approach. We acknowledge that this user-centered, needs-based, principled design model takes a great deal of time and resources, and that all technology developments begin with feasibility studies. We hope to encourage communities of researchers and technology developers to come together to extend these inquiries and tackle this challenge from multiple perspectives, with the shared goal of driving it to its full potential. We further encourage researchers to disseminate and share their work and, when possible, to open SDKs, APIs, and hardware platforms for community access, contribution, and growth.

3. Conclusions and future research

We believe that a principled solution to graphical access, designed from the outset to leverage the perceptual and cognitive characteristics of nonvisual and multimodal information processing while also meeting the most pressing information access needs of the target demographic, could have broad and immediate societal impact. In this chapter, we highlight both the challenges and the vast potential of touchscreen-based smart devices as a platform for alleviating the graphics accessibility gap. We review the state of the art in this line of research and present positions and pathways forward for addressing the graphical access challenge from multiple perspectives. We do this specifically from an ideological standpoint, which complements the technological and perceptual advancements rapidly being uncovered by a growing research community in this domain. Despite the need for more research, we see vibrotactile touchscreen platforms as a promising springboard for bringing multimodal, nonvisual graphical access into the hands of individuals everywhere. Because of their portability, availability, capabilities, and wide adoption among the BVI community, multimodal touchscreen interfaces are poised to close the accessibility gap while serving as a model for universally designed consumer technologies that are also effective assistive technologies, and for how we think about accessibility in the context of a new technological era.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. 1425337 and Grant No. 1644471. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Conflict of interest

Jenna Gorlewicz is also Co-Founder and President of JLG Innovations, LLC. Hari Palani is also Co-Founder and President for Unar Lab, LLC. Nicholas Giudice is also Co-Founder and Director of Research for Unar Lab, LLC.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

