Inspiration

It can be confusing to navigate through a dimly lit area or an underground subway station, but imagine how much more difficult it is for the visually impaired. A probing cane certainly helps by identifying obstacles in the way, but the information transmitted back to the user is limited. For one thing, the obstacle could be any object. Not to mention that the probing cane can only reach so far.

The visually impaired people we know are very independent on their own, yet they are still much more comfortable with personal guidance, because a sighted companion can offer more descriptive information while navigating. This inspired us to develop a smart technology that provides the same kind of visual description someone with regular vision would give.

What it does

YouEye monitors the visual environment straight ahead and sends descriptive speech guidance telling the user exactly what an obstacle is and how far away it is. As the user walks, the camera constantly takes snapshots, which are analyzed through google-visualization to identify the obstacles in them. The distance and description of the object are then sent back to the user as speech feedback in real time.

Depending on the distance of the object, YouEye gives different feedback. If the nearest object is 5 meters or more away, YouEye informs the user that the pathway is all clear. As the distance to the object ahead drops below 5 meters, the feedback escalates from a caution to a warning message, each time including the description of the obstacle and its distance from the user. This way, the user is well aware of their surroundings and can make more accurate judgments while traveling anywhere, anytime.
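The escalating feedback described above can be sketched as a simple threshold check. The 5-meter "all clear" cutoff comes from our description; the intermediate caution cutoff (2 meters here) and the class and method names are illustrative assumptions, not the exact implementation.

```java
import java.util.Locale;

public class FeedbackLevel {
    // Assumed caution threshold for illustration; only the 5 m cutoff is fixed.
    static final double CLEAR_METERS = 5.0;
    static final double CAUTION_METERS = 2.0;

    // Turn an obstacle label and its measured distance into a spoken message.
    public static String messageFor(String label, double distanceMeters) {
        if (distanceMeters >= CLEAR_METERS) {
            return "Pathway is all clear.";
        } else if (distanceMeters >= CAUTION_METERS) {
            return String.format(Locale.US, "Caution: %s %.1f meters ahead.", label, distanceMeters);
        } else {
            return String.format(Locale.US, "Warning: %s %.1f meters ahead.", label, distanceMeters);
        }
    }
}
```

The message string is what ultimately gets handed to the text-to-speech step.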

How we built it

We used a camera to take photos at a consistent interval, which are analyzed through google-visualization to identify the top two likely labels for the object in each photo. With an ultrasonic sensor connected to an Arduino, we also measure the distance between the user and the obstacles ahead. All of the collected information is then transformed into speech feedback for the user in real time, providing accurate guidance and navigation while the user is moving.
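The fusion step above can be sketched as follows. The vision call and the Arduino serial read are stubbed behind small interfaces (the interface and method names are assumptions for illustration, standing in for the google-visualization labeler and the ultrasonic reading):

```java
import java.util.List;
import java.util.Locale;

public class Pipeline {
    // Stand-in for the google-visualization label call (returns top labels).
    interface Labeler { List<String> topLabels(byte[] frame); }
    // Stand-in for the ultrasonic sensor reading received from the Arduino.
    interface RangeFinder { double distanceMeters(); }

    // Fuse the most likely label with the current distance into one sentence
    // that gets passed on to the text-to-speech step.
    public static String describe(Labeler labeler, RangeFinder range, byte[] frame) {
        List<String> labels = labeler.topLabels(frame);
        double d = range.distanceMeters();
        return String.format(Locale.US, "%s, %.1f meters ahead", labels.get(0), d);
    }
}
```

For example, a frame labeled "bench" with a 3.2 m reading would be spoken as "bench, 3.2 meters ahead".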

Challenges we ran into

We ran into several challenges while building our project. The first was finding a Java interface for the Arduino so we could feed the readings from our ultrasonic sensor into the Java program that calls the google-visualization API to label the objects in the photos. Then, we had to synchronize the distance readings from the sensor with the photos taken by the camera to ensure the information from the two sources was accurate and in real time. Finally, we had to incorporate an API to convert the summarized information into speech in real time so the user gets accurate feedback without delay.
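One way to picture the synchronization problem is to log distance readings by timestamp and, for each photo, pick the reading captured closest in time to it. This is a minimal sketch of that idea; the class and method names are hypothetical, not our exact code:

```java
import java.util.Map;
import java.util.TreeMap;

public class DistanceLog {
    // Ultrasonic readings keyed by capture time in milliseconds.
    private final TreeMap<Long, Double> readings = new TreeMap<>();

    public void record(long timestampMs, double meters) {
        readings.put(timestampMs, meters);
    }

    // Return the reading whose timestamp is nearest to the photo's capture time.
    public double nearest(long photoTimestampMs) {
        Map.Entry<Long, Double> floor = readings.floorEntry(photoTimestampMs);
        Map.Entry<Long, Double> ceil = readings.ceilingEntry(photoTimestampMs);
        if (floor == null) return ceil.getValue();
        if (ceil == null) return floor.getValue();
        return (photoTimestampMs - floor.getKey() <= ceil.getKey() - photoTimestampMs)
                ? floor.getValue() : ceil.getValue();
    }
}
```

Pairing by nearest timestamp keeps the spoken distance consistent with the photo being described even when the two sources sample at different rates.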

Accomplishments that we're proud of

We are very proud to have achieved the original goals and functionalities we designed for YouEye. YouEye generates fast, accurate information from two different sources and combines the pieces into useful guidance. The output of YouEye is simple and easy to understand; that was our intention when processing the complicated information in the back-end, because we wanted to present it to the user in a way that makes using YouEye as convenient as possible.

What we learned

We gained a deeper understanding of how difficult it is to navigate without a view of the environment ahead. Processing the photos and distance information simultaneously and in real time is hard, and a lot of effort goes into providing accurate feedback to the user.

What's next for YouEye

For the next steps, we hope to present even more accurate information to the user. Currently, YouEye only tells the user what the object is; we would like to add more descriptive features such as color and size, and more specific warnings.

We would also like to generate a more complete picture of the environment ahead of the camera. YouEye can currently identify objects straight ahead but has some difficulty with obstacles such as water, smaller objects on the ground, and obstacles overhead. Our goal is a guidance device that analyzes the environment from a 360-degree perspective, so the user can be fully comfortable using it.
