Inspiration

In language learning, one of the most important skills is mastering everyday expressions. Phrases like "Oh, I woke up in the middle of a nap," "Ah, I totally bombed my exam today... should I just jump out of the window now?" or "This food was on the floor, but it's okay to eat, right?" are exactly the kind of practical sentences that aren't typically taught in classrooms. Being able to use such practical sentences can significantly accelerate proficiency.

There are countless apps for translating phrases. However, there seem to be very few that let users save and repeatedly review the expressions they've looked up themselves. As an exchange student currently working on my English, I've found this a considerable inconvenience.

Therefore, I've decided to create an app that offers translations in a more practical context through AI, and allows users to save and review these translations for sustained learning.

What it does

You can choose your mother tongue and the language you want to learn.

Through the chatbot, you can type out an expression you've thought of before you say it aloud and see it translated into a natural, practical phrase. These translations can be saved to the database for later review.

The app can also translate audio from videos on platforms like YouTube, Instagram, and Facebook in real time. While watching, you can see the expressions being used and save each translation, along with the original phrase, to the database.

How we built it

This project uses the MVVM (Model-View-ViewModel) architecture pattern to improve maintainability and reduce code duplication.
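The MVVM idea can be sketched without any Android dependencies. This is a minimal illustration, not the app's actual code: the class and field names (`Observable`, `TranslationViewModel`, `saved`) are hypothetical stand-ins for Android's LiveData and ViewModel, showing how the View observes state the ViewModel exposes instead of touching the model directly.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal observable holder, standing in for Android's LiveData (illustrative only).
class Observable<T> {
    private T value;
    private final List<Consumer<T>> observers = new ArrayList<>();
    void observe(Consumer<T> o) { observers.add(o); if (value != null) o.accept(value); }
    void setValue(T v) { value = v; for (Consumer<T> o : observers) o.accept(v); }
    T getValue() { return value; }
}

// Model: one saved translation entry.
class TranslationEntry {
    final String original, translated;
    TranslationEntry(String original, String translated) {
        this.original = original;
        this.translated = translated;
    }
}

// ViewModel: holds UI state; the View only observes `saved`, never the store itself.
class TranslationViewModel {
    final Observable<List<TranslationEntry>> saved = new Observable<>();
    private final List<TranslationEntry> store = new ArrayList<>();
    void save(String original, String translated) {
        store.add(new TranslationEntry(original, translated));
        saved.setValue(new ArrayList<>(store)); // publish an immutable-ish snapshot
    }
}
```

Because the View never mutates the model, the same ViewModel can back both the chatbot screen and the saved-expressions list without duplicated logic.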

The chatbot was implemented using the OpenAI API, with prompts refined iteratively so the model returns natural, practical translations rather than literal ones.
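A rough sketch of how such a request could be assembled for OpenAI's chat completions endpoint (the message format with `role`/`content` pairs is the API's documented shape; the prompt wording, model choice, and `PromptBuilder` class here are my illustrative assumptions, not the app's actual prompt):

```java
// Sketch: constructing the JSON body for a chat completions request.
// Prompt text and model name are illustrative assumptions.
class PromptBuilder {
    static String systemPrompt(String fromLang, String toLang) {
        return "You are a language tutor. Translate the user's " + fromLang
             + " sentence into natural, everyday " + toLang
             + ", the way a native speaker would actually say it.";
    }

    // Minimal JSON string escaping for backslashes and quotes.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    static String requestBody(String fromLang, String toLang, String userText) {
        return "{\"model\":\"gpt-3.5-turbo\",\"messages\":["
             + "{\"role\":\"system\",\"content\":\"" + escape(systemPrompt(fromLang, toLang)) + "\"},"
             + "{\"role\":\"user\",\"content\":\"" + escape(userText) + "\"}]}";
    }
}
```

The system message is where the "practical context" framing lives; tightening that instruction is what prompt refinement amounts to in practice.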

Using Android's MediaProjection API together with audio playback capture, the app records the device's internal audio output. The captured audio is then handled as raw PCM (Pulse-Code Modulation) data.
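The capture itself needs a device, but the PCM handling step is plain logic: Android's AudioRecord hands back 16-bit samples, and Google Cloud Speech-to-Text's LINEAR16 encoding expects those samples as little-endian bytes. A minimal sketch of that packing step (the `PcmPacker` name is my own):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: packing 16-bit PCM samples (as read from AudioRecord) into
// little-endian bytes, matching Speech-to-Text's LINEAR16 encoding.
class PcmPacker {
    static byte[] toLinear16(short[] samples, int count) {
        ByteBuffer buf = ByteBuffer.allocate(count * 2).order(ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i < count; i++) {
            buf.putShort(samples[i]); // each sample becomes 2 bytes, low byte first
        }
        return buf.array();
    }
}
```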

Furthermore, Google Cloud's Speech-to-Text API was used to transcribe the captured audio in real time, enabling the application to understand and process spoken language as it plays.
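Streaming recognition works by sending the audio in small pieces rather than one file. As a sketch of the buffering side (the chunk size follows from 16 kHz, 16-bit mono audio, which I'm assuming here; the `AudioChunker` class is illustrative, not the app's code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: splitting a captured PCM byte stream into ~100 ms chunks for a
// streaming recognize call. Assumes 16 kHz, 16-bit mono:
// 16000 samples/s / 10 * 2 bytes = 3200 bytes per 100 ms.
class AudioChunker {
    static final int BYTES_PER_100MS = 16000 / 10 * 2;

    static List<byte[]> chunk(byte[] pcm) {
        List<byte[]> out = new ArrayList<>();
        for (int off = 0; off < pcm.length; off += BYTES_PER_100MS) {
            out.add(Arrays.copyOfRange(pcm, off, Math.min(off + BYTES_PER_100MS, pcm.length)));
        }
        return out;
    }
}
```

Each chunk would then be written to the open streaming request, which is what lets transcripts arrive while the video is still playing.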

Challenges we ran into

Accomplishments that we're proud of

I'm particularly proud of the feature in our app that captures internal audio.

Previously, the known ways to obtain audio from within an app were limited to recording in-app video, capturing sound directly through in-app recording, or playing the audio aloud and recording it externally with a microphone, then analyzing that file.

I found a less conventional way to convert the captured audio into data that Google Cloud Speech-to-Text could consume. This was a significant milestone for the project, letting us move beyond the limitations of those traditional approaches.

What we learned

What's next for DailyLang

I'd like to incorporate another feature that addresses an inconvenience I've experienced: recognizing text through computer vision, translating it, and saving the translation.

Additionally, due to time constraints, I wasn't able to include text-to-speech playback for each sentence, but I definitely want to add that functionality as well.

Built With

