Inspiration

Recently, many people have become comfortable sharing their thoughts with AI agents, in part because the agents tend not to judge them. At the same time, many have raised concerns about their personal data being stored. That inspired us to build a web application that eliminates the risk of a data leak by not storing the user's personal data anywhere.

What it does

Users can journal or talk to an AI agent in two ways: through speech or through text. Once they end their session, the agent gives them feedback if it senses they might need help and guides them to the appropriate resources.

How we built it

The interactive frontend was built with React, TypeScript, Vite, React Router, and Tailwind CSS. A Node.js server runs the backend and invokes the ML model, with authorization handled using JWT and bcrypt. The model itself was trained in Python with PyTorch (a fine-tuned DeBERTa) and served through FastAPI. VAPI and the OpenAI API power the voice and chat assistants.
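
To give a rough picture of how the pieces fit together, here is a minimal sketch of a FastAPI endpoint serving a fine-tuned DeBERTa classifier. The endpoint name, checkpoint path, and label set are illustrative assumptions, not our exact production code.

```python
# Sketch: FastAPI endpoint that the Node.js backend could call for classification.
# The checkpoint path "models/state-classifier" and the label order are assumptions.
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["normal", "anxiety", "depression", "stress"]  # assumed label order

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "models/state-classifier",  # hypothetical path to the fine-tuned checkpoint
    num_labels=len(LABELS),
)
model.eval()

app = FastAPI()

class JournalEntry(BaseModel):
    text: str

@app.post("/classify")
def classify(entry: JournalEntry):
    # Tokenize the journal text and run a single forward pass without gradients
    inputs = tokenizer(entry.text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    return {
        "label": LABELS[int(probs.argmax())],
        "scores": {label: float(p) for label, p in zip(LABELS, probs)},
    }
```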

Challenges we ran into

One of the challenges we ran into was feature engineering: training a multi-class classifier in a short amount of time proved difficult. We eventually dropped a few features from the dataset and built two separate models. The first classifies the user's state as normal, anxiety, depression, or stress; the second is a binary model that determines whether the user is showing suicidal tendencies. We separated that concern from the rest because it demands more attention than the other labels, and because that label in particular performed poorly in our tests, raising the risk of false negatives. Our main focus was therefore to maximize its recall, as sketched below.
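
One way to favour recall is to lower the decision threshold of the binary risk model instead of relying on the default 0.5 cut-off. Here is a minimal sketch of that threshold selection, assuming validation labels and predicted probabilities are already available; the variable names and the 0.95 recall floor are illustrative assumptions, not our exact settings.

```python
# Sketch: pick the highest probability threshold that still meets a recall floor,
# so fewer at-risk users are missed (fewer false negatives).
import numpy as np
from sklearn.metrics import precision_recall_curve

def pick_threshold(y_true: np.ndarray, y_prob: np.ndarray, min_recall: float = 0.95) -> float:
    """Return the largest threshold whose recall on the validation set is >= min_recall."""
    precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
    # recall[i] corresponds to thresholds[i]; the final precision/recall point has no threshold
    candidates = [t for r, t in zip(recall[:-1], thresholds) if r >= min_recall]
    # Fall back to the lowest threshold if the recall floor is unreachable
    return max(candidates) if candidates else float(thresholds.min())

# Usage (hypothetical validation arrays):
# threshold = pick_threshold(val_labels, val_probs)
# flagged = val_probs >= threshold
```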

Tuning the voice assistant was tedious: it had to be configured so that it doesn't interrupt the user mid-sentence and a natural conversational flow is maintained throughout the call. Prompt engineering the assistant so that it has all the necessary pretext and stays on its goal also took time to get right, but we got there after some struggle.
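
To give a flavour of the kind of pretext involved, here is a small sketch of the chat-assistant side using the OpenAI API. The prompt text and model name are illustrative stand-ins rather than the exact values we shipped, and the VAPI voice assistant is configured with a similar prompt.

```python
# Sketch: one chat turn with a system prompt that sets the assistant's pretext.
# SYSTEM_PROMPT and the model name are assumptions for illustration only.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a calm, non-judgmental journaling companion. "
    "Listen first, ask at most one gentle follow-up question per turn, "
    "never diagnose, and keep replies short so the user can keep talking."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def reply(history: list[dict], user_message: str) -> str:
    """Return the assistant's next turn given the running conversation history."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user_message},
    ]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content
```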

Accomplishments that we're proud of

Building and shipping a complex ML model that integrates cleanly with the rest of our system and delivers strong results.

What we learned

The initial idea must ALWAYS be prototyped as early as possible; only then do its real limitations become clear.

What's next for Lumina Mind

We'll continue to expand the app's capabilities, and considering the progress we made in just 24 hours, the sky's the limit for us in terms of scaling and growing, both as a team and as a company.

Built With

react, typescript, vite, react-router, tailwind-css, node.js, jwt, bcrypt, python, fastapi, pytorch, deberta, vapi, openai
