Inspiration

We knew that this year we wanted to build a robot that would put all of our skills to the test, implementing computer vision, AI, and autonomous navigation. When we first entered the competition room, we walked up to the NJ Transit booth, met the representatives, and learned that they were offering a special prize for teams that came up with solutions to problems in mass transit. We decided our project should be transportation-themed and solve a real problem in that industry.

What it does

One of the biggest problems they mentioned was loss in ridership data and ticket fee collection due to human error. So we repurposed our original idea into a tool that can help conductors collect tickets and perform these duties.

First, the robot is designed to travel up and down a train aisle. It estimates the distance it has covered from the four motors' RPMs and a bit of math. When it reaches a row of seats, the robot sweeps a camera apparatus, mounted on a servo, from side to side.
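The RPM-to-distance math can be sketched as follows. The wheel diameter, motor RPM, and seat spacing below are illustrative placeholders, not our actual hardware specs:

```python
import math

WHEEL_DIAMETER_M = 0.065   # hypothetical 65 mm wheels
MOTOR_RPM = 120            # hypothetical motor speed under load

def distance_travelled(seconds: float, rpm: float = MOTOR_RPM,
                       diameter: float = WHEEL_DIAMETER_M) -> float:
    """Metres covered by a wheel spinning at `rpm` for `seconds`."""
    revolutions = rpm / 60.0 * seconds
    return revolutions * math.pi * diameter

def seconds_to_next_seat(seat_pitch_m: float = 0.8) -> float:
    """How long to run the motors to advance one (hypothetical) seat pitch."""
    metres_per_second = distance_travelled(1.0)
    return seat_pitch_m / metres_per_second
```

In practice the four motors' RPMs would be averaged (or read from encoders) before being fed into a calculation like this.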

For each side the camera looks at, a machine learning model identifies how many passengers are in each bench seat. This provides an accurate headcount for passenger ridership, which was one of the main problems the NJ Transit representatives brought up.
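One way to make a per-side headcount robust is to aggregate detection counts over several frames rather than trusting a single frame. This is a hedged sketch of that idea; the detector itself (the OpenCV model) is assumed to run elsewhere and simply feed in integer counts:

```python
from collections import Counter

def headcount_for_side(frame_counts: list[int]) -> int:
    """Take the modal detection count over several frames to smooth out
    frames where the model misses a passenger or double-counts one."""
    if not frame_counts:
        return 0
    return Counter(frame_counts).most_common(1)[0][0]

# e.g. five frames of the left bench, model flickering between 1 and 2 people
left = headcount_for_side([2, 2, 1, 2, 2])   # settles on 2
```

Taking the mode across a short burst of frames is one simple choice; averaging or a confidence-weighted vote would also work.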

After collecting this data, a small LCD screen on the robot asks the passengers to present their NJ Transit tickets to the camera apparatus. A computer vision algorithm scans each QR code and determines whether it is a valid code.
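The scan step splits naturally into decoding and validation. Decoding a frame with OpenCV would use `cv2.QRCodeDetector().detectAndDecode(frame)`, which returns the payload string; the ticket format below is purely hypothetical (the real NJ Transit payload is not public), so this only illustrates the shape of the validation step:

```python
import re

# Hypothetical ticket format for illustration only, e.g. "NJT-2024-AB12CD34"
TICKET_RE = re.compile(r"^NJT-\d{4}-[A-Z0-9]{8}$")

def is_valid_ticket(payload: str) -> bool:
    """Return True if the decoded QR payload matches the assumed format.
    An empty payload means the detector found no QR code in the frame."""
    return bool(payload) and TICKET_RE.fullmatch(payload) is not None
```

On the robot, this check would run on the string produced by the QR detector for each presented ticket.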

After collecting tickets and an accurate headcount for a given row of seats, the robot moves forward to the next row.

How we built it

For hardware, we utilized the resources available to us in the CAVE and the Hackerspace at Hill Center. These locations had the tools, but we bought our own supplies: Arduinos, relays, wires, power supplies, and other electronic equipment.

For our robot controls, we used Python's Raspberry Pi GPIO library to manage the pin signals from the Pi that sits onboard the robot. Those pins drive relays wired to control the wheel motors and the camera servo. We used several layers of abstraction in our code to make controlling the robot as simple as possible.
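The layering might look something like this. The `FakeGPIO` class and the pin numbers are illustrative stand-ins so the sketch runs off the robot; onboard, the same `Relay` layer would call `RPi.GPIO.output` instead:

```python
class FakeGPIO:
    """Stand-in for RPi.GPIO so the layers can be exercised off-robot."""
    HIGH, LOW = 1, 0
    def __init__(self):
        self.state = {}
    def output(self, pin, level):
        self.state[pin] = level

class Relay:
    """Lowest layer: one relay on one GPIO pin."""
    def __init__(self, gpio, pin):
        self.gpio, self.pin = gpio, pin
    def on(self):
        self.gpio.output(self.pin, self.gpio.HIGH)
    def off(self):
        self.gpio.output(self.pin, self.gpio.LOW)

class DriveTrain:
    """Top layer: 'forward'/'stop' in terms of the motor relays."""
    def __init__(self, gpio, left_pin=17, right_pin=27):  # hypothetical pins
        self.left = Relay(gpio, left_pin)
        self.right = Relay(gpio, right_pin)
    def forward(self):
        self.left.on(); self.right.on()
    def stop(self):
        self.left.off(); self.right.off()

gpio = FakeGPIO()
drive = DriveTrain(gpio)
drive.forward()
```

Higher-level code only ever calls `forward()`/`stop()`, which is what makes swapping pins or relay wiring painless.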

Our higher-level algorithms are primarily built on OpenCV, which we used to implement both the facial recognition model and our QR scanning algorithm. The implementation involved tuning many model parameters to make sure we recognized customers correctly, even against complex backgrounds.

Data collection and robot control were handled by the Pi, but the actual data processing was offloaded to our laptops. We set up a webserver and a live stream to transfer collected data from the Pi to the laptops, which ran our algorithms; we tuned parameters to ensure accurate model results while maintaining a reasonable computation time.
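The core of a Pi-to-laptop stream is framing: the receiver needs to know where one JPEG frame ends and the next begins. Our project used a webserver and live stream; this length-prefixed framing over a byte stream is a simple stdlib-only sketch of the same idea, round-tripped through an in-memory buffer:

```python
import struct
from io import BytesIO

def send_frame(stream, jpeg_bytes: bytes) -> None:
    """Write one frame: a 4-byte big-endian length, then the JPEG payload."""
    stream.write(struct.pack(">I", len(jpeg_bytes)))
    stream.write(jpeg_bytes)

def recv_frame(stream) -> bytes:
    """Read one length-prefixed frame back off the stream."""
    (length,) = struct.unpack(">I", stream.read(4))
    return stream.read(length)

# round-trip through an in-memory buffer standing in for the network socket
buf = BytesIO()
send_frame(buf, b"\xff\xd8fake-jpeg\xff\xd9")
buf.seek(0)
frame = recv_frame(buf)
```

Over a real socket the same functions would wrap `socket.makefile("rwb")`, with the laptop looping on `recv_frame` and feeding each frame to the models.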

Challenges we ran into

Accomplishments that we're proud of

A big aspect of our project that we pride ourselves on is our commitment to abstraction. We constantly rotated through roles throughout the hackathon, so we had to design our code as independent, abstracted modules to allow for parallel software development.

Another accomplishment we are proud of is the sheer number of different technologies implemented in this project. We used high-level ML models to identify passengers, computer vision to read QR codes, and low-level drivers to control the robot. We also developed a webstreaming service to offload processing to our laptops.

Built With
