Inspiration

Most games test how good you are.

Very few test how well you understand yourself.

In real life, performance and confidence don’t always match. We overestimate. We underestimate. And that gap shapes decisions far beyond games.

We wanted to build something simple but revealing — a daily challenge that measures not just memory, but self-calibration.

Not “Are you smart?” But “Do you know how you’ll perform?”

That question became Wrong Turn.

What it does

Maps: Wrong Turn is a daily global spatial memory challenge.

Every player receives the same 30x30 grid route.

Before starting, you answer one question:

How confident are you? (0–100%)

You then get 5 seconds to memorize the path — including turns and symbol markers.

After it disappears, you reconstruct it step by step.

When you finish, you see:

Where you made your first wrong turn

A visual comparison of your route vs the correct one

Your Confidence Error

We calculate:

$$\text{Confidence Error} = \text{Predicted Confidence} - \text{Actual Performance}$$

Players are ranked not only by accuracy, but also by how closely their confidence matched reality.

The headline leaderboard is:

Who Knows Their Limits

It reframes competition from raw performance to calibrated performance.
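
In code, the scoring and ranking reduce to a few lines. A minimal TypeScript sketch, assuming confidence and performance are both on a 0–100 scale (the names here are illustrative, not the exact implementation):

```typescript
// Sketch of the calibration score, assuming predicted confidence and
// actual performance are both on a 0-100 scale. Names are illustrative.

interface Attempt {
  predictedConfidence: number; // 0-100, entered before the run
  actualPerformance: number;   // 0-100, % of the path reproduced correctly
}

// Confidence Error = Predicted Confidence - Actual Performance.
// Positive -> overconfident, negative -> underconfident, 0 -> perfectly calibrated.
function confidenceError(a: Attempt): number {
  return a.predictedConfidence - a.actualPerformance;
}

// For the "Who Knows Their Limits" leaderboard, rank by the absolute
// gap: the closer to zero, the better calibrated the player.
function calibrationRank(attempts: Attempt[]): Attempt[] {
  return [...attempts].sort(
    (x, y) => Math.abs(confidenceError(x)) - Math.abs(confidenceError(y)),
  );
}
```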

How we built it

We focused on fairness, clarity, and scalability.

Frontend

React

Tailwind CSS

Devvit UI components

Backend

Reddit Devvit platform

Redis (see the sketch after this list) for:

Daily deterministic map storage

Player attempts

Sorted-set leaderboards

Calibration ranking
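
As a rough sketch, assuming Devvit's Redis client and illustrative key names, the calibration leaderboard is a per-day sorted set where smaller confidence gaps rank higher:

```typescript
// Rough sketch of the calibration leaderboard on Devvit's Redis plugin.
// Key names and the context wiring are illustrative, not the exact code.
import { Devvit } from '@devvit/public-api';

Devvit.configure({ redis: true });

// Store each player's absolute confidence gap in a per-day sorted set;
// lower scores (smaller gaps) rank higher.
async function recordCalibration(
  redis: Devvit.Context['redis'],
  day: string,        // e.g. '2024-06-01'
  userId: string,
  confidenceGap: number,
): Promise<void> {
  await redis.zAdd(`leaderboard:calibration:${day}`, {
    member: userId,
    score: Math.abs(confidenceGap),
  });
}

// Read the top 10 best-calibrated players for the day.
async function topCalibrated(redis: Devvit.Context['redis'], day: string) {
  return redis.zRange(`leaderboard:calibration:${day}`, 0, 9);
}
```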

Core systems

Scheduled global daily reset

Deterministic map generation (same for everyone; sketched after this list)

Validation engine for user-generated maps (5–15 turns, boundary-safe; see the sketch below)

Community filtering (Trending, Hardest, Calibrated, New)
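
Deterministic generation boils down to seeding a small PRNG with the UTC date, so every client and server derives the same route. A sketch under that assumption (mulberry32 and the helper names are illustrative):

```typescript
// Sketch of deterministic daily generation: seed a small PRNG with the
// UTC date so every server and every player derives the same route.

function hashSeed(s: string): number {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) | 0;
  return h >>> 0;
}

// mulberry32: a tiny, fast, seedable PRNG returning floats in [0, 1).
function mulberry32(seed: number): () => number {
  let a = seed;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same UTC day -> same seed -> same 30x30 route for everyone.
function dailyRng(date: Date = new Date()): () => number {
  const day = date.toISOString().slice(0, 10); // 'YYYY-MM-DD'
  return mulberry32(hashSeed(`wrong-turn:${day}`));
}
```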

The architecture supports both daily global competition and a growing UGC ecosystem.
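
On the UGC side, the validation rules reduce to a small pure function. A sketch with illustrative types, using the 5–15 turn and 30x30 boundary rules above:

```typescript
// Sketch of the community-map checks: 5-15 turns and every step inside
// the 30x30 grid. Types and constant names are illustrative.

type Point = { x: number; y: number };

const GRID = 30;
const MIN_TURNS = 5;
const MAX_TURNS = 15;

// A turn is any step where the direction of travel changes.
function countTurns(path: Point[]): number {
  let turns = 0;
  for (let i = 2; i < path.length; i++) {
    const prev = { x: path[i - 1].x - path[i - 2].x, y: path[i - 1].y - path[i - 2].y };
    const cur = { x: path[i].x - path[i - 1].x, y: path[i].y - path[i - 1].y };
    if (prev.x !== cur.x || prev.y !== cur.y) turns++;
  }
  return turns;
}

function validateMap(path: Point[]): boolean {
  const inBounds = path.every(
    (p) => p.x >= 0 && p.x < GRID && p.y >= 0 && p.y < GRID,
  );
  const turns = countTurns(path);
  return inBounds && turns >= MIN_TURNS && turns <= MAX_TURNS;
}
```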

Challenges we ran into

Designing a calibration metric that feels intuitive and fair

Preventing trivial or exploitative user-generated maps

Balancing difficulty across a global player base

Ensuring confidence prediction feels meaningful — not cosmetic

Delivering feedback that’s immediate and clear without overwhelming the player

The hardest part wasn’t building the grid logic.

It was making self-awareness measurable.

Accomplishments that we're proud of

Turning metacognition into a core competitive mechanic

Launching a calibration-first leaderboard

Building a fully validated community map system

Creating transparent global stats (confidence gaps, common mistakes)

Designing a daily shared cognitive benchmark

We’re especially proud that performance isn’t the only thing that matters — understanding your performance does too.

What we learned

Many players are consistently overconfident — and are surprised by it.

Calibration creates deeper engagement than raw success rate.

Shared daily challenges drive repeat participation.

Simplicity in design increases cognitive focus.

A small rule change (predict first, then perform) completely changes player psychology.

We learned that awareness adds a second layer to competition.

What's next for Maps: Wrong Turn

Elo-style confidence rating

Adaptive difficulty based on calibration history

Personal confidence trend tracking

Seasonal calibration leagues

AI-generated maps tuned to individual blind spots

Expanded community analytics dashboard

Long term, we see Wrong Turn evolving into a daily cognitive benchmark — one that measures not just performance, but judgment.
