Open access peer-reviewed chapter

Vision-Based Tactile Paving Detection Method in Navigation Systems for Visually Impaired Persons

Written By

Anuar Bin Mohamed Kassim, Takashi Yasuno, Hiroshi Suzuki, Mohd Shahrieel Mohd Aras, Ahmad Zaki Shukor, Hazriq Izzuan Jaafar and Fairul Azni Jafar

Submitted: 15 June 2018 Reviewed: 02 July 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.79886

From the Edited Volume

Advances in Human and Machine Navigation Systems

Edited by Rastislav Róka


Abstract

In general, a visually impaired person walking outside relies on a guide cane, together with tactile pavement that serves as a warning and directional tool for avoiding obstructions and hazardous situations. However, considerable training is needed to recognize the tactile patterns, which is difficult for persons who have recently become visually impaired. This chapter describes the development and evaluation of a vision-based tactile paving detection method for visually impaired persons. Experiments are conducted to show how the method detects the tactile pavement and identifies the shape of the tactile pattern. The proposed vision-based method is implemented in MATLAB, with an Arduino platform and a speaker as guidance tools. Based on the tactile detection result in MATLAB, the system produces an auditory output that notifies the visually impaired user of the type of tactile paving detected. Consequently, the developed tactile pavement detection system can be used by visually impaired persons for detection and navigation purposes.

Keywords

  • tactile pavement
  • image recognition
  • navigation
  • guide cane
  • visually impaired person

1. Introduction

Accessibility is one of the main problems faced by disabled people [1]. Physically impaired people who use wheelchairs have difficulty reaching their desired destination when faced with stairs, irregular roads, and similar barriers; they need a flattened surface or a barrier-free lift/elevator to overcome stairs or irregular surfaces. Likewise, visually impaired people face accessibility problems when there is no tactile pavement to guide them to their desired destination. The most significant barrier is the lack of infrastructure and safe mobility devices for guiding the visually impaired in their everyday lives [2]. Hence, research has been carried out to develop devices and infrastructure that guide them to their desired destination safely and without collision [3].

Introducing technology into the lives of disabled people tends to strengthen their ability to lead a more involved social life within the surrounding community. It can increase their quality of life and reduce their isolation by increasing independence [4, 5]. This type of technology is called assistive technology. Assistive technology has various meanings and purposes; as commonly understood, it is a device or tool that supports and helps disabled or elderly people. There are several categories of assistive technology for different purposes, such as rehabilitation and social assistance. Assistive technologies developed to help disabled people are well documented [6, 7, 8, 9]. However, there are also ethical issues that need to be considered when designing assistive technologies intended to benefit disabled and elderly people [10, 11, 12, 13].

In addition, technologies that help visually impaired people have emerged over the decades, starting with the Braille typewriter, which helps them write and read. By using Braille with a voice recognition system as the input interface, they can keep up with the latest information around them. Currently, Braille is not only applied to typewriters; research has also been done on implementing Braille codes on mobile phones [14, 15, 16], so that visually impaired people can use mobile phones and smartphones. Visually impaired and elderly people also need to keep pace with current technology because of the rapid evolution of communication. Many software applications on smartphones can help visually impaired people even though they cannot see the screen. Moreover, ubiquitous technologies such as smartphones can help visually impaired people understand and ‘see’ their surroundings [17, 18].

There are also assistive technologies traditionally used by visually impaired people, such as the white cane and the screen reader. The concept of wearable assistive technology has been researched extensively now that the fabrication of small devices is possible. Wearable devices also meet the design requirements for assistive technology, such as real-time guidance, portability, power limitations, an appropriate interface, continuous availability, no dependence on infrastructure, low cost, and minimal training. Therefore, disabled people such as the visually impaired are able to wear them while traveling outside [19, 20, 21, 22].

Before starting the research, a literature review was conducted, since the state of current research needs to be understood before the direction of new work is determined. Hence, several studies were reviewed, especially in assistive technology and rehabilitation, that aim to help visually impaired people increase their quality of life (QOL) by leading more independent lives.


2. Navigation system

Navigation is a common problem for visually impaired people, since they cannot travel by themselves: they cannot visually and freely decide which direction to go because they cannot obtain information about their surroundings. Therefore, research and innovation have been carried out to support visually impaired people in achieving self-independence when traveling in indoor as well as outdoor environments. Several technologies must be combined for a navigation system to be executed successfully, including localization, path planning, and error detection and correction. Researchers have focused on localization technologies such as infrared data association (IrDA), radio frequency identification (RFID), near field communication (NFC), Bluetooth, light-emitting methods, and Wi-Fi, which have been developed to help visually impaired people move indoors with contextual information or sound navigation [23].

However, these methods have limitations when used in outdoor environments, where the global positioning system (GPS) can help guide visually impaired people instead. GPS is a satellite-based system that provides the location of a GPS device by indicating its longitude and latitude. Researchers have shown that GPS cannot function properly indoors and have complemented it with IrDA technology, which works as a detector to guide visually impaired people in indoor environments [24]. Ran et al. combined the GPS developed in the Drishti system with ultrasonic sensors for outdoor and indoor navigation [25]. However, one of the problems with GPS is accuracy: current GPS devices are accurate to about 5–10 m, and the accuracy becomes worse near tall buildings [26]. Such a measurement error is too large and very dangerous for visually impaired people, since the location given by the GPS could guide them into the middle of the road.

Furthermore, the blind navigation system (BLI-NAV) is a navigation system for visually impaired people consisting of a GPS receiver and a path detector. The two devices detect the user’s location and determine the shortest route to the destination, with voice commands given throughout the journey. A path-planning algorithm, together with the path detector, determines the shortest distance from the start point to the endpoint, and the user is able to avoid obstacles while traveling [27]. This system gave better real-time performance and improved the efficiency of visually impaired travelers in indoor environments.

On the other hand, a pocket-PC-based electronic travel aid (ETA) has been proposed to help visually impaired people travel in indoor environments; the pocket PC alerts the user through a warning audio signal when an obstacle is near [28]. An ultrasonic navigation device for visually impaired people has also been designed. Its built-in microcontroller guides the user along the route to be taken through speech output, and the device reduces navigation difficulties and detects obstacles using ultrasound and vibrators: an ultrasonic range sensor detects surrounding obstacles, while an electronic compass is used for directional navigation. A stereoscopic sonar system detects the nearest obstacles and gives feedback to the user about the current location [29].

In addition, an assistant navigation system that helps visually impaired people navigate independently in indoor environments has been developed [30]. The system provides localization using a wireless mesh network: the server performs path planning and then communicates with a portable mobile unit over the wireless network. Visually impaired users give commands and receive responses from the server via audio signals through a headset with a microphone [31]. A navigation system based on RFID technology, which provides information about the surroundings, has also been developed; it uses an RFID reader mounted on one end of a stick to read transponder tags installed on the tactile pavement [32]. At the same time, research on RFID networks can help determine the shortest distance from the current location to the destination. Such a system can also help users find their way back if they lose their direction, recalculating a new path [33].

In addition, INSIGHT is an indoor navigation system that assists visually impaired people in traveling inside buildings. The system uses RFID together with Bluetooth technology to locate the user inside the building. A personal digital assistant (PDA), i.e., a mobile device, interacts with the INSIGHT server and provides navigation information through voice commands. The system monitors the zones the user has walked through and notifies the user upon travel in the wrong direction [34].


3. Development of navigation system for the visually impaired

A navigation system that truly benefits visually impaired people requires a complete architecture that includes a path-planning system, an RFID detection system, and an obstacle detection system. In this chapter, however, the developed navigation system focuses only on the tactile pavement detection system, which uses an image recognition method. In addition, a digital compass is implemented to guide visually impaired people while traveling [35]. Figure 1 illustrates the system architecture, which uses ZigBee wireless networks for communication between the server/laptop and the developed navigation device.

Figure 1.

System architecture of developed navigation system including server/laptop.

We developed the ZigBee network to connect to and monitor the developed navigation device during the experiments. The ZigBee network acts as the center of data transfer between the navigation device and the server/laptop. The movement of the user is shown in the map processing system on the server/laptop, so the user’s current position and desired destination are displayed on the map along a generated route. The map system then identifies the address of the target. Concurrently, the RFID reader/writer module reads the RFID tags on the tactile paving or floor, and the tag data for the current position and address are sent for map processing.

Next, voice guidance commands based on the generated route are given to the user through an earphone connected via Bluetooth. The server/laptop sends the voice guidance while the user position is updated at the same time. If the user deviates from the recommended path, the path is recalculated and new voice guidance is produced. A further benefit of the system is that when the user needs to turn a corner, the digital compass compares the heading angle and ensures the user takes the corner effectively without hitting nearby obstacles. The server, which receives data from the ZigBee network, is suggested to be mounted at a fixed location inside the building; it must be kept up to date, and information on destinations and objects must be stored in a database linked to the map system.

To optimize the functionality of the developed navigation device in guiding visually impaired people in the correct direction throughout the travel path, an experimental setup was prepared to evaluate the accuracy of the digital compass. The orientation, or direction, is obtained using a digital compass mounted on the developed electronic cane. The digital compass is connected to an Arduino microcontroller, which acquires the analog signal and converts it to a digital signal using the onboard analog-to-digital converter (ADC). The digital value is displayed on the serial monitor of the Arduino IDE so that the digital compass can be tuned accurately. The digital compass is calibrated at the point where the RFID tag has been mounted, to ensure that the compass always points to the north.
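For illustration, the compass readings streamed by the Arduino can also be logged in MATLAB over the serial link. The sketch below is a minimal, assumption-laden example: it presumes the Arduino program prints one heading value in degrees per line at 9600 baud, and the port name COM3 is a placeholder for the actual setup.

% Minimal compass-monitoring sketch (assumes the Arduino prints one
% heading in degrees per line at 9600 baud; "COM3" is a placeholder).
s = serialport("COM3", 9600);
configureTerminator(s, "LF");            % lines end with a newline
for k = 1:100
    heading = str2double(readline(s));   % heading in degrees, 0 = north
    fprintf("Compass heading: %.1f deg\n", heading);
end
clear s                                  % release the serial port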


4. Tactile pavement detection system using image recognition

Figure 2 shows the system configuration connecting a personal computer running MATLAB, a web camera, an Arduino microcontroller board, an XBee transceiver, a WTS 020 voice module, and a speaker, which together give an auditory warning to visually impaired people once the vision-based tactile detection method has run. After the program has been uploaded to the Arduino microcontroller, it is ready to receive signals from MATLAB and then send commands to the voice module to play the selected audio files. The voice module produces the auditory output by playing the required audio file when a command is executed. Figure 3 shows the actual hardware developed to validate the performance of the proposed vision-based tactile pavement detection system.

Figure 2.

System configuration for vision-based tactile paving detection system.

Figure 3.

Developed blind navigation system hardware.

As illustrated in Figure 3, a web camera is mounted on the center of the electronic cane, at a distance of about 50 cm from the tactile paving. The web camera is connected through XBee wireless communication to the personal computer on which MATLAB is installed. The personal computer processes the image captured by the web camera using the proposed tactile pavement detection system. After the shape of the tactile pavement has been determined, one of two types of voice guidance, WARNING or DIRECTION, is given through the user’s Bluetooth headphones. The detection result is sent through the XBee transceiver to the guide cane’s transceiver in order to activate the voice module, and the voice guidance is delivered over the Bluetooth wireless link.

This vision-based system consists of five main phases. The first is to input the image containing the tactile pavement, with its warning and directional tactile patterns. The second is preprocessing of the input image, which includes filtering out noise that would interfere with tactile detection. The third is to extract the area and perimeter of the connected components detected in the image. The fourth is to compute a metric for each connected component from these measurements. The last is to produce the correct audio output for the visually impaired user. The overall process is shown in the flowchart in Figure 4, and a minimal code sketch of the pipeline follows the figure.

Figure 4.

Overall process flowchart for vision-based tactile pavement detection system.
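To make the five phases concrete, the following minimal MATLAB sketch outlines the pipeline end to end, assuming the Image Processing Toolbox is available; each step is expanded in the subsections below.

% Minimal five-phase pipeline sketch ('tactile.jpg' is an illustrative
% file name; 0.5 and 1000 are the thresholds quoted in Sections 4.4/4.7).
img  = imread('tactile.jpg');            % Phase 1: input image
gray = rgb2gray(img);                    % Phase 2: preprocessing begins
bw   = imbinarize(gray, 0.5);            %   threshold at 0.5
bw   = imfill(bw, 'holes');              %   fill holes in components
bw   = imclearborder(bw);                %   first round of filtering
bw   = bwareaopen(bw, 1000);             %   second round: drop small blobs
stats  = regionprops(bw, 'Area', 'Perimeter');        % Phase 3: parameters
metric = 4*pi*[stats.Area] ./ [stats.Perimeter].^2;   % Phase 4: Eq. (1)
% Phase 5: the metric ranges of Section 4.10 select the voice output.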

4.1. Input image

A web camera captures the image containing the tactile pavement pattern. The image is then loaded into MATLAB for preprocessing so that any tactile shapes present can be detected. Figure 5(a) and (b) shows the two types of tactile paving: warning tactile and directional tactile.

Figure 5.

Input tactile image. (a) Warning tactile and (b) direction tactile.

4.2. Preprocessing

This phase filters the image so that only the content actually required for detection remains in MATLAB. The preprocessing steps, shown in Figure 4, are essential to achieving the goal of the vision-based system.

4.3. Color image to grayscale image

The input is an RGB (color) image, in which the brightness of the red (R), green (G), and blue (B) components is each represented as a number from 0 to 255. The RGB image is first converted to a grayscale image for ease of processing. The lightness of the gray color is directly proportional to the brightness levels of the primary colors: black corresponds to R, G, and B values of 0, while white corresponds to the maximum value (1 after normalization) for all three channels. The converted grayscale image is shown in Figure 6(a).

Figure 6.

Image conversion process. (a) Grayscale image and (b) binary image.
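In MATLAB this conversion is a single call to rgb2gray, sketched below; the luminance weighting shown in the comment is the standard one used by the toolbox.

% Grayscale conversion sketch ('tactile.jpg' is an illustrative name).
rgb  = imread('tactile.jpg');
gray = rgb2gray(rgb);   % gray = 0.2989*R + 0.5870*G + 0.1140*B
imshow(gray)            % corresponds to Figure 6(a)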

4.4. Grayscale image to binary image

This process converts the grayscale image into a binary image, i.e., an image containing only black and white pixels, whose values are 0 (black) and 1 (white). Figure 6(a) shows the grayscale image after the conversion process, and Figure 6(b) shows the binary image obtained by thresholding with the threshold set to 0.5.
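A minimal thresholding sketch under the same assumptions, using the 0.5 threshold quoted above:

% Binarization sketch: pixels brighter than half of full scale become
% 1 (white), all others 0 (black), as in Figure 6(b).
gray = rgb2gray(imread('tactile.jpg'));   % illustrative file name
bw   = imbinarize(gray, 0.5);             % fixed threshold of 0.5
imshow(bw)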

4.5. Connecting pixels in image

In this phase, preprocessing identifies the connected components inside the binary image. Figure 7 shows the connected-components algorithm, which identifies the connected components by using a group of binary symbols. After the connected components have been identified, the next step is to fill the “holes” inside the connected pixel regions automatically in MATLAB, making the connected regions more distinct so that the subsequent tactile shape detection can be performed easily. The processed image, in which the “holes” have been filled with white, is shown in Figure 8.

Figure 7.

Connected components algorithm.

Figure 8.

Image with filled holes inside connected components.
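A minimal sketch of this step with the standard toolbox functions, which may differ in detail from the authors’ implementation:

% Connected-component identification and hole filling sketch.
bw = imbinarize(rgb2gray(imread('tactile.jpg')), 0.5);
cc = bwconncomp(bw);                      % label connected components
fprintf('%d connected components found\n', cc.NumObjects);
bwFilled = imfill(bw, 'holes');           % fill holes, as in Figure 8
imshow(bwFilled)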

4.6. First round of image filtering

This process removes all objects connected to the image border, since such noise could cause problems for tactile detection. Furthermore, pixels connected to the border are unlikely to carry any useful information, so removing them acts as a filter. Figure 9(a) shows the image with a cleared border.
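One natural realization of this step is imclearborder, sketched here under the same assumptions as before:

% First-round filtering sketch: remove every component touching the
% image border (result corresponds to Figure 9(a)).
bw = imfill(imbinarize(rgb2gray(imread('tactile.jpg')), 0.5), 'holes');
bw = imclearborder(bw);
imshow(bw)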

Figure 9.

Filtering process. (a) First round filtered image and (b) second round filtered image.

4.7. Second round of image filtering

As shown in Figure 9(a), considerable noise is still present in the image even after the first round of filtering. Therefore, a second method is applied to filter the noise further: a threshold is set on component size, and any connected component whose area falls below the threshold is removed. Figure 9(b) shows the image after all components smaller than 1000 pixels have been removed.
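One natural way to implement this size filter is bwareaopen, as in the sketch below with the 1000-pixel threshold quoted above:

% Second-round filtering sketch: drop every component smaller than
% 1000 pixels (result corresponds to Figure 9(b)).
bw = imclearborder(imfill( ...
         imbinarize(rgb2gray(imread('tactile.jpg')), 0.5), 'holes'));
bw = bwareaopen(bw, 1000);   % keep components with at least 1000 pixels
imshow(bw)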

4.8. Extract required parameters from connected components

In this phase, after the image has been filtered by preprocessing, the required parameters are extracted from the remaining connected components. The parameters required by the tactile detection algorithm are the area and perimeter of each component, which are obtained using a MATLAB function. Figure 10 shows the recognized regions, and the parameters found for each connected component are listed in Table 1.

Figure 10.

Parameters (area, perimeter, centroid) of the connected pixels.

No.    Area    Centroid                 Perimeter
1      1919    [146.1730, 176.4815]     163.7317
2      1980    [213.070, 242.8086]      173.8234
3      1988    [280.6459, 242.7938]     177.6812
4      1951    [348.2952, 242.4567]     165.6812
5      1984    [144.9057, 311.9995]     170.6102
6      1993    [280.9910, 310.4752]     173.0955

Table 1.

Extracted parameters of the connected components.
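One possible realization of this step uses regionprops, though the chapter does not name the function explicitly; the sketch below reproduces the kind of values listed in Table 1, with preprocessing as in the earlier sketches.

% Parameter-extraction sketch (regionprops is an assumption here; the
% chapter only says "a function in MATLAB").
bw = bwareaopen(imclearborder(imfill( ...
         imbinarize(rgb2gray(imread('tactile.jpg')), 0.5), 'holes')), 1000);
stats = regionprops(bw, 'Area', 'Centroid', 'Perimeter');
for k = 1:numel(stats)
    fprintf('No. %d: area = %d, centroid = [%.4f, %.4f], perimeter = %.4f\n', ...
        k, stats(k).Area, stats(k).Centroid(1), stats(k).Centroid(2), ...
        stats(k).Perimeter);
end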

4.9. Determining the metric

This phase is the main part that decides whether the connected components/pixels in the image potentially represent tactile paving. The metric m is determined using Eq. (1):

m = 4πA/p²   (1)

where m is the metric, A is the area, and p is the perimeter of a connected component. It is therefore important to obtain the area and perimeter of the connected pixels before this phase. MATLAB calculates the metric from the parameters obtained earlier using Eq. (1); this is the standard roundness measure, which equals 1 for a perfect circle and decreases for elongated shapes. Table 2 shows the resulting metric of each connected component.

No.      1       2       3       4       5       6
Area     1919    1980    1988    1951    1984    1993
Metric   0.899   0.823   0.791   0.893   0.857   0.836

Table 2.

Calculation results of metric for each area.
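In MATLAB, Eq. (1) becomes a one-line vectorized computation over the extracted parameters; a minimal sketch, continuing from the regionprops example above:

% Metric computation sketch implementing Eq. (1) on the extracted
% parameters (bw is the filtered binary image from Section 4.7).
stats  = regionprops(bw, 'Area', 'Perimeter');
metric = 4*pi*[stats.Area] ./ [stats.Perimeter].^2;
disp(metric)   % e.g. 0.899 for component no. 1 in Table 2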

4.10. Producing auditory output based on metric

After the metric for each connected component has been calculated, the “shape” of each is determined. The results show that connected components with metric values in the range 0.85–1.0 are most likely circles, representing the warning tactile, whereas those with metric values in the range 0.15–0.30 are most likely bars, representing the directional tactile. Figure 11(a) and (b) shows the metric results for the warning and directional tactile images. Once the metric values have been calculated, the system sends a signal to the auditory output system to notify the visually impaired user of what has been detected.

Figure 11.

Final image results. (a) Warning tactile and (b) direction tactile.
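The decision itself reduces to comparing each metric value against the two ranges; the sketch below uses the limits reported above, with ERROR as the fallback case described in the next section.

% Shape classification sketch using the metric ranges reported above
% (metric is the vector computed in the previous sketch).
for k = 1:numel(metric)
    if metric(k) >= 0.85 && metric(k) <= 1.0
        label = 'WARNING (circle)';          % warning tactile
    elseif metric(k) >= 0.15 && metric(k) <= 0.30
        label = 'DIRECTION (bar)';           % directional tactile
    else
        label = 'ERROR (unrecognized shape)';
    end
    fprintf('Component %d: %s\n', k, label);
end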

Figure 12 shows the overall hardware process flow for the auditory output. When a warning tactile is detected in MATLAB (metric value in the range 0.85–1.0), MATLAB sends a signal via the Arduino microcontroller to the voice module, which produces the auditory output WARNING; if a directional tactile is detected (metric value in the range 0.15–0.30), the auditory output is DIRECTION. Before MATLAB can send signals to the voice module via the Arduino microcontroller, a serial connection between MATLAB and the Arduino must first be established; only then can the Arduino microcontroller receive commands from MATLAB.

Figure 12.

System hardware control flowchart.

After the metric has been determined, a corresponding signal is sent to the Arduino microcontroller, to which the program has already been uploaded through the Arduino I/O interface. There are three cases: metric values in the range 0.15–0.30, metric values in the range 0.85–1.0, and metric values outside both ranges. For example, when a metric value in the range 0.15–0.30 is found, MATLAB sends a signal indicating DIRECTION, which the Arduino microcontroller receives before executing the next command. The other two cases work the same way: the “WARNING” voice signal is sent for metric values in the range 0.85–1.0, while the “ERROR” voice signal is sent if neither of the two metric ranges is matched.
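On the MATLAB side, the signalling can be as simple as writing one byte over the established serial link. The sketch below is only an illustration: the byte codes 'W', 'D', and 'E' and the port name COM3 are hypothetical, and the Arduino program is assumed to map them to the corresponding audio files on the voice module.

% Hypothetical MATLAB-side signalling sketch (the byte codes and port
% name are assumptions, not the chapter's actual protocol).
s = serialport("COM3", 9600);      % serial link must be opened first
m = 0.899;                         % metric of the detected component (example)
if m >= 0.85 && m <= 1.0
    write(s, 'W', "char");         % warning tactile detected
elseif m >= 0.15 && m <= 0.30
    write(s, 'D', "char");         % directional tactile detected
else
    write(s, 'E', "char");         % no valid tactile shape
end
clear s                            % release the serial port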


5. Results and discussions

To validate the effectiveness of the developed vision-based tactile detection method, an experiment was conducted in which a variety of shapes were recognized using the proposed detection algorithm. The experiment demonstrates that the algorithm can distinguish different shapes such as circle, bar, ellipse, square, triangle, and diamond. These shapes were captured with the web camera or downloaded from the Internet. The metric calculation is used to detect the shapes: the code computes the area and perimeter of each connected component in the preprocessed binary image and then evaluates Eq. (1). The metric then yields a characteristic range of values for each shape detected.

Table 3 shows the detection results for the variety of shapes obtained with the proposed tactile detection algorithm. From these results, the range of metric values for each shape was confirmed using the vision-based tactile detection algorithm. These metric values serve as benchmark values for differentiating image shapes. However, some shapes, such as circle, ellipse, and square, have similar metric values. These analysis results will be used to improve the current detection algorithm, making the system more robust to different detection environments for tactile paving.

Table 3.

Detection results of variety of shapes.

The detection of these various shapes will be used to recognize and differentiate items such as leaves, construction pavement, boxes, and pieces of paper or garbage of various shapes, which often cover the tactile pavement in real environments. A detection system with higher accuracy is needed to provide higher reliability and give visually impaired persons confidence when using the tactile pavement detection system to travel safely. The result of the proposed detection system is therefore very important for the blind navigation system proposed in Figure 1.


6. Conclusions

In this chapter, the performance of a vision-based tactile detection algorithm developed to recognize the shape of tactile pavement for navigation purposes was evaluated. The algorithm was proposed, and an experimental study of its effectiveness was conducted using five phases: image loading, preprocessing, parameter extraction, metric calculation, and auditory output. The proposed vision-based tactile paving detection system was confirmed to function correctly in differentiating a variety of shapes such as circle, bar, ellipse, square, triangle, and diamond. The resulting metric values can be applied as benchmark values in the next stage of development of the navigation system for the visually impaired.


Acknowledgments

This research is a collaboration project between Graduate School of Advanced Technology and Science, Tokushima University, and Centre for Robotics and Industrial Automation, Universiti Teknikal Malaysia Melaka.

References

  1. Foley A, Ferri BA. Technology for people, not disabilities: Ensuring access and inclusion. Journal of Research in Special Educational Needs. 2012;12(4):192-200
  2. Bujacz M, Baranski P, Moranski M, Materka A. Remote mobility and navigation aid for the visually disabled. In: 7th International Conference on Disability, Virtual Reality and Associated Technologies with ArtAbilitation; 8-11 September 2008; Portugal. pp. 263-270
  3. Strumillo P. Electronic interfaces aiding the visually impaired in environmental access, mobility and navigation. In: 2010 3rd International Conference on Human System Interactions; 13-15 May 2010; Rzeszow, Poland. pp. 17-24
  4. Lamoureux EL, Hassell JB, Keeffe JE. The determinants of participation in activities of daily living in people with impaired vision. American Journal of Ophthalmology. 2004;137(2):265-270
  5. Scherer MJ. Living in the State of Stuck: How Assistive Technology Impacts the Lives of People with Disabilities. 4th ed. Cambridge, MA: Brookline Books; 2005
  6. Leventhal JD. Assistive devices for people who are blind or have visual impairments. In: Assistive Technology. Gaithersburg, MD: Aspen Publishers; 1996. pp. 125-143
  7. Cheverst K, Clarke K, Dewsbury G, Hemmings T, Kember S, Rodden T, Rouncefield M. Designing assistive technologies for medication regimes in care settings. Universal Access in the Information Society (UAIS). 2003;2(3):235-242
  8. Steel EJ, De Witte LP. Advances in European assistive technology service delivery and recommendations for further improvement. Technology and Disability. 2011;23(3):131-138
  9. Mountain G. Using the evidence to develop quality assistive technology services. Journal of Integrated Care. 2004;12(1):19-26
  10. Sharkey A, Sharkey N. Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology. 2012;14(1):27-40
  11. Perry J, Beyer S. Ethical issues around telecare: The views of people with intellectual disabilities and people with dementia. Journal of Assistive Technologies. 2012;6(1):71-75
  12. Dewsbury G, Clarke K, Hughes J, Rouncefield M, Sommerville I. Growing older digitally: Designing technology for older people. In: Inclusive Design for Society and Business; 25-28 March 2003; London: Helen Hamlyn Institute. pp. 57-64
  13. Leonard VK, Jacko JA, Pizzimenti JJ. An investigation of handheld device use by older adults with age-related macular degeneration. Behaviour & Information Technology. 2006;25(4):313-332
  14. Jayant C, Acuario C, Johnson WA, Hollier J, Ladner RE. VBraille: Haptic Braille perception using a touch-screen and vibration on mobile phones. In: 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS); 25-27 October 2010; Orlando, FL, USA. pp. 295-296
  15. Azenkot S, Fortuna E. Improving public transit usability for blind and deaf-blind people by connecting a braille display to a smartphone. In: 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'10); 25-27 October 2010; Orlando, FL, USA. pp. 317-318
  16. Johnson KL, Dudgeon B, Amtmann D. Assistive technology in rehabilitation. Physical Medicine and Rehabilitation Clinics of North America. 1997;8(2):389-403
  17. Wang H, Zhang Y, Cao J. Ubiquitous computing environments and its usage access control. In: First International Conference on Scalable Information Systems (INFOSCALE '06); 30 May-1 June 2006. p. 10
  18. Vergados DD. Service personalization for assistive living in a mobile ambient healthcare-networked environment. Personal and Ubiquitous Computing. 2010;14(6):575-590
  19. Dakopoulos D, Bourbakis NG. Wearable obstacle avoidance electronic travel aids for blind: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews. 2010;40(1):25-35
  20. Zhang J, Lip CW, Ong SK, Nee AYC. Development of a shoe-mounted assistive user interface for navigation. International Journal of Sensor Networks. 2012;9(1):3-12
  21. Tsai D, Morley JW, Suaning GJ, Lovell NH. A wearable real-time image processor for a vision prosthesis. Computer Methods and Programs in Biomedicine. 2009;95:258-269
  22. Cardin S, Thalmann D, Vexo F. A wearable system for mobility improvement of visually impaired people. The Visual Computer. 2007;23(2):109-118
  23. Interface. Research: Latest GPS and Indoor Measurement. Japan: CQ Publisher; 2013 (in Japanese)
  24. Wise E, Li B, Gallagher T, Dempster AG, Rizos C, Ramsey-Stewart E, Woo D. Indoor navigation for the blind and vision impaired: Where are we and where are we going? In: 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN); 13-15 November 2012. pp. 1-7
  25. Ran L, Helal S, Moore S. Drishti: An integrated indoor/outdoor blind navigation system and service. In: Proceedings of the 2nd IEEE Annual Conference on Pervasive Computing and Communications (PerCom 2004); March 2004. pp. 23-30
  26. Sarfraz M, Rizvi SAJ. Indoor navigational aid system for the visually impaired. In: Second International Conference on Geometric Modeling and Imaging (GMAI). 2007. pp. 127-132
  27. Santhosh SS, Sasiprabha T, Jeberson R. BLI-NAV embedded navigation system for blind people. In: Recent Advances in Space Technology Services and Climate Change (RSTSCC). 2010. pp. 277-282
  28. Choudhury MH, Aguerrevere D, Barreto AB. A pocket-PC based navigational aid for blind individuals. In: 2004 IEEE Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems (VECIMS). 2004. pp. 43-48
  29. Bousbia-Salah M, Redjati A, Fezari M, Bettayeb M. An ultrasonic navigation system for blind people. In: 2007 IEEE International Conference on Signal Processing and Communications (ICSPC). 2007. pp. 1003-1006
  30. Shamsi M, Al-Qutayri M, Jeedella J. Blind assistant navigation system. In: 2011 1st Middle East Conference on Biomedical Engineering (MECBME). 2011. pp. 163-166
  31. Chumkamon S, Tuvaphanthaphiphat P, Keeratiwintakorn P. A blind navigation system using RFID for indoor environments. In: 5th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology. Vol. 2. 2008. pp. 765-768
  32. Kassim AM, Jaafar HI, Azam MA, Abas N, Yasuno T. Design and development of navigation system by using RFID technology. In: 3rd IEEE International Conference on System Engineering and Technology (ICSET). 2013. pp. 258-262
  33. Kassim AM, Shukor AZ, Zhi CX, Yasuno T. Exploratory study on navigation system for visually impaired people. Australian Journal of Basic and Applied Sciences. 2013;7(14):211-217
  34. Ganz A, Gandhi SR, Wilson C, Mullett G. INSIGHT: RFID and Bluetooth enabled automated space for the blind and visually impaired. In: IEEE International Conference of Engineering in Medicine and Biology Society. 2010. pp. 331-334
  35. Kassim AM, Yasuno T, Suzuki H, Jaafar HI, Aras MSM. Indoor navigation system based on passive RFID transponder with digital compass for visually impaired people. International Journal of Advanced Computer Science and Applications (IJACSA). 2016;7(2):604-611
