Inspiration

We wanted to use Modal in a unique way, and we thought its ability to spin up GPUs on demand and its server-like infrastructure would be a perfect fit for handling data streams and model training.

What it does

The idea is that we can host a model that gets fed a stream of data. This could be text, video, or other modalities. As data flows in, a base model begins to learn, through fine-tuning, the patterns of the incoming data. This means the model continuously evolves with the data. Then, at any time, we can deploy that model to answer questions based on what it has learned and observed from the stream. It is almost like what humans do.
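The core idea — ingest a stream, keep updating the model, query it at any point — can be sketched with a toy stand-in. Here the "model" just tracks word frequencies from streamed text (the class and its methods are our illustrative inventions; the real system fine-tunes LoRA adapters on a base model instead):

```python
from collections import Counter

class StreamLearner:
    """Toy stand-in for a continuously fine-tuned model: it 'learns'
    word frequencies from a text stream and can be queried at any time.
    Illustrative only -- Streambrain trains LoRA adapters instead."""

    def __init__(self):
        self.counts = Counter()

    def ingest(self, chunk: str) -> None:
        # Each incoming chunk of streamed data updates the model's state.
        self.counts.update(chunk.lower().split())

    def query(self, top_k: int = 3) -> list:
        # At any point, the evolving model can be asked what it has observed.
        return [word for word, _ in self.counts.most_common(top_k)]

stream = ["markets opened higher", "markets closed lower", "volume was higher"]
learner = StreamLearner()
for chunk in stream:
    learner.ingest(chunk)

print(learner.query(2))  # the two most frequent tokens seen so far
```

The point is the shape of the loop, not the model: ingestion and querying are decoupled, so the model can be asked questions mid-stream and its answers reflect everything seen up to that moment.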

How we built it

We built it using Modal. We host several app functions: one manages the data being streamed in, another trains LoRA adapters on top of a base model so it learns from the streamed data, and a final app function uses those continuously evolving LoRA adapters to answer questions.
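The division of labor between the three app functions can be sketched in plain Python, with a shared dict standing in for Modal's shared storage and with "training" simulated (all names here are illustrative assumptions; in the real system each function runs as a separate Modal app function, with a GPU attached to the training step):

```python
# Simulation of the three app functions: ingest, train, answer.
# In production these are separate Modal functions sharing persisted state.
shared_state = {"buffer": [], "seen": [], "adapter_version": 0}

def ingest(chunk: str) -> None:
    """App function 1: receive a piece of streamed data and buffer it."""
    shared_state["buffer"].append(chunk)

def train_adapter() -> int:
    """App function 2: 'fine-tune' a LoRA adapter on the buffered data.
    Training is simulated here by folding the buffer into `seen` and
    bumping the adapter version."""
    shared_state["seen"].extend(shared_state["buffer"])
    shared_state["buffer"].clear()
    shared_state["adapter_version"] += 1
    return shared_state["adapter_version"]

def answer(question: str) -> str:
    """App function 3: answer queries using the latest adapter state."""
    version = shared_state["adapter_version"]
    return f"adapter v{version} has seen {len(shared_state['seen'])} chunks"

# A short run: two training rounds over a three-chunk stream.
ingest("frame 1"); ingest("frame 2")
train_adapter()
ingest("frame 3")
train_adapter()
print(answer("what have you seen?"))  # adapter v2 has seen 3 chunks
```

Keeping ingestion, training, and serving as separate functions is what lets each scale independently: the stream handler stays cheap, while only the training function needs a GPU.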

Challenges we ran into

We ran into dependency issues while setting up Modal. This was our main setback, especially in a time-constrained hackathon.

Accomplishments that we're proud of

We are proud of setting up an end-to-end pipeline on Modal that makes this streaming fine-tuning process work.

What we learned

We learned about the amazing capabilities of Modal and the role it can serve in providing flexible, intuitive access to compute embedded within a server-like workflow.

What's next for Streambrain

We see this approach being applied to other domains. Streambrain can essentially serve as a running machine brain that evolves constantly with new data, adjusts to statistical drift, and answers questions as needed. This could be applied to short-term visual memory, time-series data such as financial markets, and more.
