Developments In Mobile Applications For Visually Impaired Individuals
Introduction
This essay presents a literature review of the field of mobile application development, as well as other assistive tools, for visually impaired individuals. The reviewed studies are presented in order of publication, from the oldest to the most recent. The review closes with a summary intended to make each work's contribution clearer and easier to understand, and to compare the features of the proposed work with those of the reviewed works.
Literature Review
Haddad, Chen, and Krahe (2016) presented a method intended to provide a fast and simple solution to one problem of visual impairment: a tool that attempts to automatically find the primary information portrayed by an image and then communicate it to a visually impaired person. They reported that the system finds a relief image quickly, which significantly simplifies the creator's task, and that their method makes it possible for anyone in the same environment as a blind person to easily create a relief image for him or her. Drawing on recent advances in pattern recognition and image processing, they proposed a pipeline of text detection, recognition, and transcription into Braille, followed by segmentation of the different image areas and their texture affiliations. They carried out an experimental study with eight blind people and eight pedagogical images to see whether blind people could understand the content. The different relief images were presented to the participants, who were given enough time to explore and understand the content of each relief image and were then asked to complete a questionnaire. They suggested that the work could be extended into a tool for scanned or downloaded digital and web graphics, and they also proposed a tablet-compatible relief image system with a voice synthesizer or an electro-vibration feedback touch screen.
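As an illustration of the Braille transcription step in such a pipeline, the sketch below converts OCR-recognized text into uncontracted (Grade 1) Unicode Braille cells. The `to_braille` helper is an illustrative assumption, not the authors' implementation, and the detection and segmentation stages are omitted.

```python
# Minimal sketch: transcribing recognized text to Grade 1 Unicode Braille.
# Illustrative only; the authors' actual detection/recognition pipeline is not shown.

BRAILLE = {
    'a': '\u2801', 'b': '\u2803', 'c': '\u2809', 'd': '\u2819', 'e': '\u2811',
    'f': '\u280b', 'g': '\u281b', 'h': '\u2813', 'i': '\u280a', 'j': '\u281a',
    'k': '\u2805', 'l': '\u2807', 'm': '\u280d', 'n': '\u281d', 'o': '\u2815',
    'p': '\u280f', 'q': '\u281f', 'r': '\u2817', 's': '\u280e', 't': '\u281e',
    'u': '\u2825', 'v': '\u2827', 'w': '\u283a', 'x': '\u282d', 'y': '\u283d',
    'z': '\u2835', ' ': '\u2800',
}

CAPITAL_SIGN = '\u2820'  # dot 6 precedes a capital letter in Grade 1 Braille

def to_braille(text: str) -> str:
    """Transcribe plain text to uncontracted Unicode Braille cells."""
    cells = []
    for ch in text:
        if ch.isupper():
            cells.append(CAPITAL_SIGN)
            ch = ch.lower()
        cells.append(BRAILLE.get(ch, '\u2800'))  # unknown characters become blank cells
    return ''.join(cells)

print(to_braille('Exit'))  # e.g. a label recognized by OCR in a pedagogical image
```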
Sandnes (2016) reported that recent developments in affordable wearable devices create new prospects for ground-breaking visual aids. The study aimed to identify the functionalities needed by visually impaired individuals in different contexts in order to reduce barriers. A semi-structured interview guide was employed to gather information from three visually impaired academics. The research shows that the main challenge for low-vision individuals is recognizing people's faces; the second most significant challenge is recognizing text on buildings, structures, and moving vehicles. Interestingly, the interviews also raised questions about the usefulness of smart glasses. The author suggested that future studies should focus on developing systems for facial and text recognition and on how to test them in different contexts.
Stearns et al. (2016) carried out a controlled laboratory study with 19 blind individuals to measure the effectiveness of finger-based sensing and feedback for reading printed text. On an iPad-based test bed, they compared audio and haptic directional finger guidance. To complement the study, they asked four of the participants to give feedback on a prototype called HandSight. Their findings show that haptic and audio directional guidance perform equally well, although audio may have an accuracy advantage for tracing lines of text. Ease of use and the level of concentration required were questioned, even though many participants valued the direct access to information that the finger-based approach delivers. They suggested that future work on finger-based reading should examine support for text-heavy materials for the benefit of low-vision users of finger-based readers.
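A common way to realize audio directional guidance of this kind is to map the fingertip's drift from the current text line to pitch. The sketch below is a minimal assumed design; the `guidance_tone` helper, base frequency, and gain are hypothetical and are not HandSight's actual feedback scheme.

```python
# Minimal sketch of pitch-based directional finger guidance (assumed design):
# the tone rises as the fingertip drifts above the text line, falls below it.

import numpy as np

SAMPLE_RATE = 22050
BASE_FREQ = 440.0          # on-line reference tone (Hz); illustrative choice
SEMITONES_PER_MM = 1.0     # pitch change per millimetre of drift; illustrative

def guidance_tone(offset_mm: float, duration_s: float = 0.1) -> np.ndarray:
    """Return one audio frame cueing the reader back toward the text line."""
    freq = BASE_FREQ * 2 ** (offset_mm * SEMITONES_PER_MM / 12.0)
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return 0.3 * np.sin(2.0 * np.pi * freq * t)

# Fingertip 3 mm above the line -> a tone ~3 semitones above A440,
# signalling the reader to move the finger downward.
frame = guidance_tone(3.0)
```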
Szpiro et al. (2016) carried out a contextual inquiry, beginning with a telephone interview to confirm that participants actually had low vision by asking whether they were using or had used vision-enhancing aids. They then observed 11 low-vision individuals using their mobile phones, tablets, and computers to carry out tasks such as reading an email. The research shows that many individuals preferred visual access to information over screen readers, and that the available tools did not provide adequate assistance. They also found that, to view content comfortably, participants had to perform multiple gestures; these challenges made the individuals unproductive. Other findings were that low-vision software utilities were difficult to use, and that participants often avoided some tools because they found it difficult to disclose their disability.

Torres-Carazo, Rodriguez-Fortiz, and Hurtado (2016) examined 94 applications developed specifically for visually impaired individuals. They analysed whether the applications could be considered serious games and, at the same time, whether their characteristics made them suitable for use by visually impaired persons. They reported that the objective of their study was to correct the perceived inappropriate classification of such applications, thereby also improving their searchability, and added that this would greatly help them in making recommendations to individuals with visual impairment.
Voykinska et al. (2016) carried out research on Social Networking Services (SNSs) to discover the motivations, difficulties, activities, and experiences of people with visual impairment with regard to visual content. Eleven people participated in interviews and 60 in a survey conducted by the researchers; the sample included individuals with little to no vision. It was found that blind individuals faced accessibility difficulties. To access SNS features, they devised a variety of strategies which often failed; they then turned to trusted individuals for help or simply avoided some features. The study claims to build a better understanding of SNS usage by blind persons, and it also raised the question of trust in interaction partners when help is needed. Finally, the researchers suggested that SNS designers should consider designs that advance social networking for all users, whether able-bodied or disabled.
Zhao et al. (2016) presented an augmented reality application called CueSee, running on a head-mounted display (HMD), that supports product search. The system automatically recognizes a product and then uses visual cues to draw the user's attention to it; they designed five visual cues within the application. To evaluate the cues, they engaged 12 participants with visual impairment, screening candidates in a telephone interview to determine whether they fit the study. Volunteers who had used assistive tools such as magnifiers or CCTVs were preferred over those who only used screen readers, and the selected volunteers were found to have a range of vision conditions. The study revealed that participants chose CueSee over regular assistive tools for product searching in stores, and that the application performed better than the participants' corrected vision in terms of efficiency and accuracy. The authors suggested future work on a more suitable interaction method for users to target products and on generating the best visual cues for different groups of users. They finally proposed evaluating the application in a real setting, for example a grocery store, to see how feasible CueSee is.
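The five cues themselves are specific to CueSee, but the general idea of a high-contrast visual cue over a recognized product region can be sketched as below. The `highlight_product` helper, image path, and bounding box are hypothetical and are not a reproduction of the authors' cue designs.

```python
# Minimal sketch of a generic high-contrast visual cue for low-vision product
# search: draw alternating yellow/black rings around the recognized region.

from PIL import Image, ImageDraw

def highlight_product(frame: Image.Image, box: tuple, border: int = 12) -> Image.Image:
    """Draw a thick high-contrast border around the recognized product region."""
    out = frame.convert('RGB')
    draw = ImageDraw.Draw(out)
    x0, y0, x1, y1 = box
    for i in range(border):
        # alternate yellow and black rings for contrast against busy shelf imagery
        color = (255, 255, 0) if i % 4 < 2 else (0, 0, 0)
        draw.rectangle((x0 - i, y0 - i, x1 + i, y1 + i), outline=color)
    return out

shelf = Image.open('shelf_frame.jpg')            # hypothetical HMD camera frame
cued = highlight_product(shelf, (120, 80, 220, 260))
cued.save('shelf_cued.jpg')
```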
Gonnot, Mikuta, and Saniie (2017) presented an algorithm to help people with impaired vision recognize their surrounding environment by converting pictures captured from a camera into controlled frequencies that are composed into a single melody played back to the user. The images could come from the camera of a smartphone or from one embedded in eyeglasses. They reported that it might be hard for an untrained user to comprehend all the information presented through this approach, but that training users with impaired vision makes the data easy for them to interpret. They further argued that the objective when developing a device for people with visual impairment is to make it as uncomplicated as possible, so their algorithm was kept extremely simple and can run on small devices. The algorithm was implemented and tested on sample images in MATLAB, and the resulting audio was fed into a spectrum analyser called Spectrum Lab, which displays a waterfall view of the signal. Initial results show that images of adequate resolution can be transformed so as to identify shapes, traffic signs, or depth for collision prevention. They suggested that the algorithm should be optimized in the future, that deployment on mobile platforms should be investigated, and that it could also be implemented directly in hardware.
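A common scheme for this kind of image-to-melody conversion scans image columns from left to right over time, maps rows to frequencies, and weights each frequency by pixel intensity. The sketch below follows that scheme under assumed parameters; it is not a reproduction of the authors' exact mapping.

```python
# Minimal image sonification sketch: columns -> time, rows -> frequency,
# pixel intensity -> loudness. Parameters are illustrative assumptions.

import numpy as np

SAMPLE_RATE = 22050
COLUMN_SECONDS = 0.05            # playback time per image column; illustrative

def sonify(image: np.ndarray, f_lo: float = 200.0, f_hi: float = 4000.0) -> np.ndarray:
    """Convert a grayscale image (rows x cols, values 0..1) to an audio signal."""
    rows, cols = image.shape
    # top rows -> high frequencies, bottom rows -> low, spaced logarithmically
    freqs = np.geomspace(f_hi, f_lo, rows)
    t = np.arange(int(SAMPLE_RATE * COLUMN_SECONDS)) / SAMPLE_RATE
    audio = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * np.outer(freqs, t))   # one sinusoid per row
        column = image[:, c] @ tones                      # weight by pixel intensity
        audio.append(column / max(rows, 1))               # keep amplitude bounded
    return np.concatenate(audio)

# Example: a diagonal edge produces an audible frequency sweep.
signal = sonify(np.eye(64))
```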
Jaramillo-Alcázar and Luján-Mora (2017) reported that the inaccessibility of serious games prevents people with disabilities from accessing knowledge on equal grounds with those without disabilities. They carried out a study aimed at supporting people with visual impairment who have difficulty accessing video games, more specifically serious games, because of their condition. Their work mainly presented a compilation and analysis of accessibility guidelines for video game development addressing the needs of visually impaired persons. As the case study for their approach, they chose Serious Games CEOE, which happens to be the only mobile application in the educational category; they downloaded the app from the Google Play store and reported that it includes five different serious games promoting a healthy daily life. They suggested that the experiment could be repeated with people with visual impairment to measure the effectiveness of the serious-game features identified in the study, and additionally that people with disabilities other than visual impairment should be considered.
Jiang et al. (2017) reported that advances in new technologies have boosted the invention of systems intended to inform people with visual impairment about their immediate environment. They developed an application for the Android platform using current technologies such as Optical Character Recognition (OCR) and Text-to-Speech (TTS). These technologies are employed to detect and identify signs and text within the surrounding environment of a visually impaired person and to help guide their navigation. The system combines computer vision with internet connectivity to reconstruct sentences and convert them to sound. It uses the smartphone camera to find the various sources of information in the environment and then informs the user of their locations using TTS; OCR is used to read a variety of text sources and relay their content to the visually impaired person. For a usability test, the application was run on an Android device to take pictures and then perform OCR and sign detection. The recognized text is shown over the image, and when a sign is touched on the screen, the text is read out to the user. They concluded that the experiment shows the concept is feasible on Android smartphones, and suggested that it could be extended in the future to a real-time implementation instead of still images.
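The same OCR-to-speech flow can be sketched on the desktop with off-the-shelf libraries; this is not the authors' Android implementation, and the image filename is hypothetical. Here pytesseract (a wrapper for the Tesseract OCR engine) stands in for the recognition step and pyttsx3 provides offline TTS.

```python
# Minimal desktop sketch of the OCR -> TTS pipeline described above.
# Not the authors' Android app; 'street_sign.jpg' is a hypothetical input.

from PIL import Image
import pytesseract
import pyttsx3

def read_sign_aloud(image_path: str) -> str:
    """Recognize text in a photographed sign and speak it to the user."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()   # blocks until speech finishes
    return text

print(read_sign_aloud('street_sign.jpg'))
```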
Pundlik et al. (2017) postulated that viewport control using head motion can be natural and can assist access to magnified displays. They implemented the idea on Google Glass, which displays magnified smartphone screenshots received via Bluetooth in real time. Users can view different screen locations by moving their head while still interacting with the smartphone, rather than navigating with touch gestures on the magnified phone display. The screen-sharing system consists of two applications: a host application on the mobile phone and a client application on the Google Glass. To evaluate the approach, eight normally sighted and four visually impaired participants were assigned tasks using calculator and music player applications, with performance measured by time to complete each task. The results show that the Glass is more efficient than the phone's screen zoom on the calculator task. The authors suggested that future implementations could allow more gestures on the Glass for better interaction with the mobile device, and that head-motion navigation should be compared with other commonly used voice-based mobile accessibility features.
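The client-side viewport logic can be sketched as panning a fixed-size window over the magnified screenshot in proportion to head yaw and pitch. The display size, pan gain, and file name below are assumptions for illustration, not the authors' parameters.

```python
# Minimal sketch of head-motion viewport control over a magnified screenshot.
# Assumes the screenshot is larger than the client display (true after magnification).

from PIL import Image

VIEW_W, VIEW_H = 640, 360       # assumed client display resolution
DEG_TO_PX = 25                  # pan gain: viewport pixels per degree of head motion

def viewport(screenshot: Image.Image, yaw_deg: float, pitch_deg: float) -> Image.Image:
    """Crop the part of the magnified screenshot the user's head points at."""
    w, h = screenshot.size
    cx = w / 2 + yaw_deg * DEG_TO_PX     # head turn right -> pan right
    cy = h / 2 + pitch_deg * DEG_TO_PX   # head tilt down  -> pan down
    # clamp so the viewport never leaves the screenshot
    x0 = min(max(cx - VIEW_W / 2, 0), w - VIEW_W)
    y0 = min(max(cy - VIEW_H / 2, 0), h - VIEW_H)
    return screenshot.crop((int(x0), int(y0), int(x0) + VIEW_W, int(y0) + VIEW_H))

magnified = Image.open('magnified_screenshot.png')  # hypothetical Bluetooth-received frame
frame = viewport(magnified, yaw_deg=8.0, pitch_deg=-3.0)
```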