Inspiration

Our two initial ideas were to apply this API to customer service to boost efficiency, and to quantify the emotions in a conversation by analyzing diction and syntax. Merging these ideas, we were inspired to build an application that strengthens a user's oral and linguistic skills, specifically targeting language comprehension, emotional analysis, and professional development. Alongside this, we realized the app could be a major use case for businesses looking to improve their business-to-consumer interactions, particularly in the customer support field.

What it does

Our application branches into two major sectors: personal and business. The personal side focuses on improving a user's interpersonal speaking skills by analyzing their call transcriptions and parsing them through a sentiment-analysis LLM. The business side helps employees raise customer satisfaction by using the app to analyze factors of customer-support conversations such as call length, responsiveness, and the customer's patience and cooperation.
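As a rough illustration of the transcript-analysis step, here is a minimal sketch of how a call transcript might be turned into a scoring prompt for the LLM and how the model's reply could be parsed into metrics. The prompt wording, the 1-5 scales, and the JSON reply format are all assumptions for illustration, not the project's actual implementation.

```python
import json

def build_prompt(transcript: str) -> str:
    """Assemble a sentiment-analysis prompt for a call transcript.
    (Hypothetical prompt format; the real app's prompt may differ.)"""
    return (
        "Rate the following customer-support transcript on a 1-5 scale for "
        "sentiment, patience, and cooperation. Reply with JSON only, e.g. "
        '{"sentiment": 4, "patience": 3, "cooperation": 5}.\n\n'
        f"Transcript:\n{transcript}"
    )

def parse_scores(reply: str) -> dict:
    """Parse the model's JSON reply into integer scores per metric."""
    scores = json.loads(reply)
    return {metric: int(value) for metric, value in scores.items()}
```

The prompt string from `build_prompt` would be sent to the ChatGPT API, and `parse_scores` would convert the reply into numbers the app can chart over time.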

How we built it

Using a combination of Figma for the user interface, React for prototype code, T-Mobile's YNA API, and OpenAI's ChatGPT API, we were able to map, design, and build our application.

Challenges we ran into

Our biggest challenge was implementing T-Mobile's YNA API, as we wanted our application to best reflect the capabilities of this powerful API.

Accomplishments that we're proud of

We take pride in tackling issues in both the consumer and business worlds, and we recognize that future applications of our project could be vast.

What we learned

We learned how to brainstorm an app idea and show its potential in the real world. Learning how to map out, design, and develop an application around an API is one thing; actually getting started and working through the design, development, and implementation is another. We learned a lot about both front-end and back-end development and are excited to learn more about both fields.

What's next for AISpEAK

The coding aspect! And synthesizing it into a functional app. We want to build our own large language model (LLM) to remove our reliance on ChatGPT: as good as GPT is, its results are variable, so our own LLM would give us more accurate, real-time data. We also hope to extend functionality from written transcripts to audio and video files. Multimedia input allows more accurate detection of traits such as pauses in speech and body language. Taking this multimodal approach not only diversifies AISpEAK but makes it more accurate.

Built With

Figma, React, OpenAI's ChatGPT API, and T-Mobile's YNA API
