Finding a job should be fun!
Scrolling Indeed is boring! Navigate the expanse of space, visit stars (job listings), and conquer them (pass interviews). If you manage to pass, you get the URL to actually apply.
Inspiration
With ~75% of IT interviews now conducted remotely, do you have what it takes to pass? Intermew is a game designed to refine your ability to perform in a realistic interview environment, where the end goal is not just to score high, but to get the job. We were also interested in working with high-volume streams of data, particularly vision data.
What do you mean a real job? Well, we webscrape Indeed to get information on real jobs in and around Durham. We then use this job data as context for the interviewer, who is a model in both the GPT-4 and fashion sense. This isn't just an OpenAI wrapper, though - so how do you pass?
- We use facial computer vision to track emotion, ensuring you're showing the right attitude.
- Additionally, make sure to keep your head in the centre of the frame - we judge this using head tracking.
- Technically challenged? You're not getting away from our in-interview DSA questions, answered in a handcrafted VIM-style editor.
- Oh, and don't worry! We visualise this all at the end so you can see how you did.
TLDR: We webscrape a bunch of job listings from Indeed and use each one to generate a three-stage interview that you talk through with a chatbot. You get evaluated on semantic performance (do you know your stuff?), attitude (we monitor facial expressions) and head positioning! Oh, and you get a bunch of feedback at the end, which is great!
Why is it useful? This could be a cool tool for people to interactively find jobs whilst preparing for the interview experience. The system asks questions very specific to the job - it once told one of us they weren't qualified because they hadn't done enough PHP. While that kind of specific judgement can be inaccurate, it also gives the user a variety of facial metrics and a paragraph of written feedback!
How we built it
The project is built around a huge GameMaker file that runs the game, the visualisation of the interviewer himself, and the navigation menus. It's really a bunch of stuff smashed together: we're doing everything from speech recognition, text-to-speech and facial recognition to LLM calls and webscraping - a multifaceted, multithreaded beast that knows no bounds. We wanted to push the boundaries of how much could be integrated into one hackathon project; even the main menu is set in the endless expanse of space.
Data Collection (Python)
To make an interview assistant, the first thing you need is interview data! We collected data by webscraping popular job listing websites like Indeed and LinkedIn, pulling fields such as job description, salary, and location (all listings scraped are from Durham in the past month!).
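The scraping itself is the usual requests + BeautifulSoup routine. Here's a minimal sketch of the idea - the URL parameters and CSS selectors are illustrative guesses rather than Indeed's actual markup, and real scraping needs polite rate limiting (Indeed is not fond of bots):

```python
# Minimal scraping sketch - selectors and params are illustrative, not
# Indeed's real markup, which changes often and resists naive scrapers.
import requests
from bs4 import BeautifulSoup

def scrape_listings(query="software", location="Durham"):
    resp = requests.get(
        "https://uk.indeed.com/jobs",
        params={"q": query, "l": location, "fromage": 30},  # last 30 days
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    jobs = []
    for card in soup.select("div.job_seen_beacon"):  # hypothetical selector
        title = card.select_one("h2")
        snippet = card.select_one(".job-snippet")    # hypothetical selector
        jobs.append({
            "title": title.get_text(strip=True) if title else None,
            "description": snippet.get_text(" ", strip=True) if snippet else None,
        })
    return jobs
```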
Next, we worked on real-time transcription of the user's speech as they reply to the interviewer. This involved some fine-tuning to filter out background noise (and debugging sessions on the empty stairwell)...
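The gist of that loop looks something like this (a minimal sketch using the speech_recognition package - not our exact code, and the calibration values are assumptions):

```python
# Minimal live-transcription sketch using the speech_recognition package.
import speech_recognition as sr

recognizer = sr.Recognizer()
recognizer.dynamic_energy_threshold = True  # adapt to changing noise levels

with sr.Microphone() as source:
    # Calibrate against ambient noise first - this is the kind of tuning
    # that makes or breaks transcription in a noisy hackathon room.
    recognizer.adjust_for_ambient_noise(source, duration=1.0)
    while True:
        audio = recognizer.listen(source, phrase_time_limit=10)
        try:
            print("You said:", recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            pass  # nothing intelligible - keep listening
```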
We also collected auxiliary data to measure some performance metrics of the interviewee, such as emotion detection and head movements. These metrics are displayed to the user at the end of each interview.
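As a rough idea of what a per-frame check looks like, here's a sketch using OpenCV's Haar cascade for head tracking and the fer package for emotion detection - both stand-ins rather than the exact models we shipped:

```python
# Per-frame metrics sketch: face position + dominant emotion.
# OpenCV's Haar cascade and the `fer` package are stand-ins here.
import cv2
from fer import FER

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_detector = FER()

def frame_metrics(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return {"centred": False, "emotion": None}
    x, y, w, h = faces[0]
    frame_w = frame.shape[1]
    face_cx = x + w / 2
    # "Centred" here means the face centre sits in the middle third of frame.
    centred = frame_w / 3 < face_cx < 2 * frame_w / 3
    emotion, _score = emotion_detector.top_emotion(frame) or (None, None)
    return {"centred": centred, "emotion": emotion}
```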
For the coding interviews, we webscraped LeetCode questions and answers to use alongside our custom text editor. All of this data is transmitted over a TCP socket from Python to GameMaker. :)
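The transport is nothing fancy: a length-prefixed binary protocol that GameMaker's buffer functions can read on the other end. A minimal sketch of the Python side - the port and message layout are assumptions, not our exact protocol:

```python
# Length-prefixed framing sketch: 4-byte big-endian size, then the payload.
# Port number and message contents are illustrative.
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack(">I", len(payload)) + payload)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5005))
server.listen(1)
conn, _addr = server.accept()

# The same framing carries small JSON messages (transcripts, metrics) and
# larger binary payloads, like the .png frames mentioned in the challenges.
send_message(conn, b'{"type": "question", "text": "Tell me about yourself."}')
with open("frame.png", "rb") as f:
    send_message(conn, f.read())
```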
Challenges we ran into
- Multithreading: the final system has over 10 simultaneous threads running in the Python application, and we faced a few race conditions and strange threading issues (see the sketch after this list).
- Phantom files: at around 2am, GameMaker started hallucinating files that didn't actually exist on disk.
- Merge conflicts: we suffered a particularly huge merge conflict at around 1am that cost us an hour's worth of work!
- GameMaker doesn't really support GUIs, so we had to write our own implementation of a VIM-like IDE, which was fun. It doesn't even support keys like CAPS_LOCK and SHIFT easily, so we had a lot of key mapping to do.
- GameMaker again: it doesn't support video, so we had to send frames as .png images over a binary socket (the same framing sketched above) and render them in real time.
- Again, the entire thing is a huge multithreaded beast.
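For the curious, the races were the classic read-modify-write kind - several threads (transcription, vision, networking) touching shared state. A minimal sketch of the problem and the lock-based fix (names are illustrative, not our actual structure):

```python
# Race-condition sketch: `frames_processed += 1` is a read-modify-write, so
# without the lock two threads can read the same value and lose an update.
import threading

lock = threading.Lock()
frames_processed = 0

def on_frame():
    global frames_processed
    with lock:
        frames_processed += 1

threads = [threading.Thread(target=on_frame) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(frames_processed)  # always 10 with the lock; the unlocked version
                         # can lose updates under heavy contention
```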
Accomplishments that we're proud of
- Art (I love him).
- Use of new technologies.
- A finished product.
What's next for Intermew
We really wanted to do some sentiment analysis on the speech (stutter detection, word-frequency analysis, etc.) and are looking into continuing the project in the future. We would also like to integrate a Firebase server so people can post custom job listings that anyone else can simulate an interview for. There are so many directions we could take this, but we feel this is a good result for only 24 hours of effort!
Jokes aside: he's definitely judging you.

