Inspiration
Every game of League of Legends begins with the most important event: the draft phase. We're a team of middle school friends who've been playing League together for almost a decade, and there's one question we still ask each other almost every game: "What champion should I pick?"
This led us to create Yalvon, a drafting assistant that leverages GRID historical match data, predictive modeling, and AI to help a team crush the draft phase.
What it does
Yalvon is a tool designed for professional teams to use during the draft phase before a match.
The user inputs a Home Team and an Opponent Team (assigned to the Red and Blue sides), then presses the Prompt Suggestion button to receive AI-driven predictions, reasoning grounded in historical data, and confidence scores.
As the draft progresses, the tool adapts:
Ban Phase: Suggests optimal bans for the home team and predicts the opponent's likely bans.
Pick Phase (Home Turn): Suggests champions for the home team and predicts the opponent's counter-pick.
Pick Phase (Opponent Turn): Predicts the opponent's choice and suggests immediate counters for the home team.
End of Draft: Returns a predicted win rate and a post-draft analysis of key win conditions.
Throughout the draft, a dedicated model calculates a Delta Win Rate, which is a live indicator showing how much a specific champion lock-in increases or decreases the team's probability of winning. This is displayed with its corresponding champion alongside historical win rate and matches played.
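As a rough illustration of the idea, here is a minimal sketch of how such a delta could be computed, assuming a trained win-probability classifier with a predict_proba-style interface (like our CatBoost model); the draft-state shape and the encode_draft helper are hypothetical, not our production code.

```python
# Illustrative sketch: compute the Delta Win Rate for a candidate lock-in.
# Assumes `model` is a trained classifier exposing predict_proba (e.g. CatBoost)
# and `encode_draft` turns a draft-state dict into the model's feature row.
# Both the draft-state shape and encode_draft are hypothetical.
def delta_win_rate(model, encode_draft, draft_state, team, champion):
    """Change in home-team win probability if `team` locks in `champion`."""
    before = model.predict_proba([encode_draft(draft_state)])[0][1]
    proposed = {**draft_state, team: draft_state[team] + [champion]}
    after = model.predict_proba([encode_draft(proposed)])[0][1]
    return after - before

# Example usage with a hypothetical partial draft:
# draft_state = {"home": ["Orianna"], "opponent": ["Lee Sin"]}
# delta = delta_win_rate(model, encode_draft, draft_state, "home", "Jinx")
```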
How we built it
Our first step was to extract historical match data from the GRID API and build a baseline draft picker application.
- Data Pipeline: We used Python to gather all available series IDs and download their end-state JSON files (a sketch of this step follows the list). We then imported these into RStudio to extract and clean the necessary draft information into a master CSV file.
- Frontend: The graphical interface was built with PyQt5 and consists of modular containers housing each application component. We used the Junie AI assistant from JetBrains to generate stylesheets, removing much of the boilerplate required to style the widgets.
- AI Integration: We integrated Google Gemini as a chat assistant. A backend service normalizes the CSV data and uses the Google Gen AI SDK to query the Gemini model. The responses are parsed into structured UI components that provide confidence scores, synergies, and counters, while also allowing free-form chat with the user.
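For reference, here is a minimal sketch of the series-collection step, assuming GRID's central-data GraphQL endpoint and end-state file-download API; the exact URLs, query fields, and pagination details shown are assumptions rather than our production script.

```python
# Illustrative sketch of the GRID data pull. The endpoint URLs, query fields, and
# pagination details are assumptions; the real script follows the GRID API docs.
import pathlib
import requests

GRAPHQL_URL = "https://api.grid.gg/central-data/graphql"                         # assumed
END_STATE_URL = "https://api.grid.gg/file-download/end-state/grid/series/{id}"   # assumed
HEADERS = {"x-api-key": "YOUR_GRID_API_KEY"}

SERIES_QUERY = """
{
  allSeries(first: 50) {
    pageInfo { hasNextPage endCursor }
    edges { node { id } }
  }
}
"""  # in practice we page through results via endCursor until hasNextPage is false

def fetch_series_ids():
    """Return one page of series IDs from the central-data GraphQL API."""
    resp = requests.post(GRAPHQL_URL, json={"query": SERIES_QUERY}, headers=HEADERS)
    resp.raise_for_status()
    edges = resp.json()["data"]["allSeries"]["edges"]
    return [edge["node"]["id"] for edge in edges]

def download_end_state(series_id, out_dir="end_states"):
    """Save a series' end-state JSON to disk for later cleaning in R."""
    pathlib.Path(out_dir).mkdir(exist_ok=True)
    resp = requests.get(END_STATE_URL.format(id=series_id), headers=HEADERS)
    resp.raise_for_status()
    (pathlib.Path(out_dir) / f"{series_id}.json").write_bytes(resp.content)

if __name__ == "__main__":
    for sid in fetch_series_ids():
        download_end_state(sid)
```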
Finally, we trained a CatBoost model on our extracted data to predict match outcomes based on the final draft state. We piped these predictions into PyQt5 for real-time visualization and refined the Gemini prompts to ensure the AI remained context-aware and focused on drafting utility.
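A condensed sketch of that training step looks roughly like the following, assuming the master CSV has one row per game with categorical champion and team columns plus a binary home_win label; the column names and hyperparameters here are illustrative.

```python
# Illustrative training sketch. Column names, file name, and hyperparameters are
# assumptions; the real master CSV schema may differ.
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("master_drafts.csv")  # hypothetical file name
feature_cols = [c for c in df.columns if c != "home_win"]
cat_features = feature_cols  # champion and team names are all categorical

X_train, X_test, y_train, y_test = train_test_split(
    df[feature_cols], df["home_win"], test_size=0.2, random_state=42
)

model = CatBoostClassifier(iterations=500, depth=6, verbose=100)
model.fit(X_train, y_train, cat_features=cat_features, eval_set=(X_test, y_test))
print("Test accuracy:", model.score(X_test, y_test))
model.save_model("draft_winrate.cbm")
```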
Models
Our AI assistant is Google Gemini, which we supply with our CSV data as context. A Google Gemini API key is required to use this application.
Our predictive model is a CatBoost gradient boosted decision tree model. On our testing data, it predicts the winner of the match 63% of the time, given the full final draft and team matchup.
A warning about the models: since Gemini and CatBoost are separate models, their end-of-draft win predictions may differ substantially. Predictions also tend to be inaccurate for unusual team compositions or for champions with very low play rates, because there is little corresponding training data.
Both the predictive and generative models are subject to a knowledge cutoff, which impacts their ability to account for the most recent meta shifts. Our training data spans early 2024 to late 2025, with the final observation recorded on 2025-09-28. Because champion strength varies greatly between patches, the model excels at identifying broad, long-term trends but may be less accurate regarding specific balance changes introduced after September 2025. Similarly, the Gemini model has a knowledge cutoff of January 2025. While it can reason effectively about drafting concepts and synergies, it will not be aware of new champions, reworks, or items introduced in the most recent updates.
Challenges we ran into
Data Wrangling: None of us had experience with GraphQL, making the GRID API query process a steep learning curve. Significant trial and error was necessary to get the structure right.
Data Quality: Many games had incomplete or incorrect data. We encountered teams with 6 players, 2-minute "remake" games, and anomalies showing 100-minute matches. Cleaning this required a robust R script to sanitize the dataset.
AI Structuring: Convincing Gemini to output strict JSON for our UI widgets, without hallucinating or breaking our Pydantic schemas, required significant prompt engineering (see the sketch after this list). We also ran into token limits when passing large amounts of context to Gemini.
Learning Curve: We all learned PyQt5 from scratch for this project, solving UI responsiveness and styling issues on the fly.
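For reference, the structured-output approach we converged on looks roughly like this sketch, which uses the Google Gen AI SDK's response-schema support with a Pydantic model; the schema fields and model name shown are placeholders rather than our exact production schema.

```python
# Illustrative sketch: forcing Gemini to return JSON that fits a Pydantic schema.
# The schema fields and model name are placeholders, not the production versions.
from pydantic import BaseModel
from google import genai

class PickSuggestion(BaseModel):
    champion: str
    confidence: float       # 0.0 - 1.0
    synergies: list[str]
    counters: list[str]
    reasoning: str

client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash",  # placeholder model name
    contents="Given the current draft state, suggest the best pick for the home team.",
    config={
        "response_mime_type": "application/json",
        "response_schema": PickSuggestion,
    },
)

suggestion = response.parsed  # parsed PickSuggestion instance
print(suggestion.champion, suggestion.confidence)
```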
Accomplishments that we're proud of
A "Full-Stack" Data Pipeline: We successfully built a pipeline that travels from a raw GraphQL API (GRID) to statistical cleaning (using R), into a machine learning model (CatBoost), and finally to a user-facing application (Python/PyQt).
63% Prediction Accuracy: Achieving a 63% accuracy rate on match winners based purely on draft composition is a significant milestone, proving that our model captures genuine signal from the data.
Seamless AI Integration: We managed to make Google's Gemini 3 talk to our Python backend, allowing users to have a natural conversation about the draft while the system runs analytics in the background.
Building a Desktop App from Scratch: We went from knowing nothing about PyQt5 to shipping a fully styled, functional desktop application with dynamic widgets and real-time updates.
What we learned
The Reality of "Big Data": We learned that 80% of Data Science is just cleaning the data. Handling edge cases in the GRID export taught us the importance of robust data validation.
Structured AI Outputs: We used Pydantic to force LLMs to act as functional APIs rather than just chatbots.
Cross-Language Integration: Combining Python, R, and API queries taught us how to manage dependencies and handoffs between different technologies.
UI/UX Logic: We learned how to manage state in a desktop application, ensuring that when a user updates a pick, the win rates and predictions update instantly across the entire interface.
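As an illustration of that pattern, here is a minimal sketch using PyQt5 signals so that locking in a pick notifies every widget that depends on the draft state; the class and signal names are invented for the example.

```python
# Illustrative sketch of the state-propagation pattern: a central draft state
# emits a signal whenever a pick changes, and dependent widgets refresh themselves.
# Names here are invented for the example.
from PyQt5.QtCore import QObject, pyqtSignal

class DraftState(QObject):
    pick_changed = pyqtSignal(str, str)  # (team, champion)

    def __init__(self):
        super().__init__()
        self.picks = {"home": [], "opponent": []}

    def lock_in(self, team, champion):
        self.picks[team].append(champion)
        self.pick_changed.emit(team, champion)

# A widget subscribes once and refreshes on every change:
# state = DraftState()
# state.pick_changed.connect(win_rate_panel.refresh)  # hypothetical slot
# state.lock_in("home", "Orianna")
```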
What's next for Yalvon: AI Draft Assistant
Live Data Ingestion: Currently, our model is trained on data up to late 2025. We plan to automate the pipeline to fetch and retrain on the latest patch data daily/weekly to keep up with the meta.
Automated Draft Detection: Instead of manual input, we want to implement screen capture to automatically detect champions as they are locked in.
Web Migration: Porting the frontend to React would make the tool accessible to the general player base without requiring a desktop download.