# AI-Powered Alternative Credit Scoring Platform

Hybrid ML + Rule-Based Scoring · Explainable AI (SHAP) · Dockerized Microservices
AltCred is a production-grade fintech platform that uses machine learning to generate credit scores for individuals with little or no traditional credit history. It analyzes alternative data points, such as employment stability, income patterns, and financial discipline, and combines a rule-based engine with a Random Forest ML model in a hybrid scoring architecture. Every prediction is accompanied by a SHAP-based explanation that tells users why they received their score.
## Architecture

```mermaid
graph TD
    A[User] --> B[Next.js Frontend]
    B --> C[Express Backend]
    C --> D{Scoring Engine}
    D --> E[Rule-Based Engine<br/>Weight: 40%]
    D --> F[FastAPI ML Service<br/>Weight: 60%]
    F --> G[Random Forest Model]
    F --> H[SHAP Explainer]
    F --> I[Model Registry]
    F --> J[Feature Service]
    C --> K[(Supabase PostgreSQL)]
    C --> L[Analytics API]
    L --> M[Monitoring Dashboard]
```
## ML Pipeline

```mermaid
graph LR
    A[Kaggle Dataset] --> B[Feature Pipeline]
    B --> C[Training Script]
    C --> D[Model Evaluation]
    D --> E[Model Registry]
    E --> F[FastAPI Inference]
    F --> G[SHAP Explanations]
```
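The pipeline stages above can be sketched end to end. This is a minimal illustration only: the synthetic data, file names, and registry schema below are assumptions for the sketch, not the repository's actual training code.

```python
import json
import pickle

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Feature pipeline: the real project loads the Kaggle dataset; here we
# synthesize 15 numeric features with a 3-class target as a stand-in.
X, y = make_classification(
    n_samples=2000, n_features=15, n_informative=8, n_classes=3, random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Training script: Random Forest with 100 estimators, as documented below.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Model evaluation
acc = accuracy_score(y_test, model.predict(X_test))

# Model registry: serialize the artifact and record it with its metrics.
with open("credit_model_v1.pkl", "wb") as f:
    pickle.dump(model, f)
registry = {"active_model": "credit_model_v1", "accuracy": acc}
with open("model_registry.json", "w") as f:
    json.dump(registry, f, indent=2)

print(f"accuracy={acc:.3f}")
```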
## Key Features

| Feature | Description |
|---|---|
| Hybrid Scoring | Combines rule-based (40%) and ML-based (60%) scoring for robust predictions |
| Explainable AI | SHAP integration returns the top 3 feature impacts per prediction |
| Model Registry | Version-controlled model management with hot-swapping support |
| Feature Service | Centralized preprocessing eliminates training-serving skew |
| Monitoring Dashboard | Real-time analytics for latency, risk distribution, and model health |
| Dockerized | Full-stack containerization with health checks and volume mounts |
| Graceful Fallback | If the ML service is down, the system defaults to rule-based scoring (100%) |
| Secure Auth | JWT-based authentication with bcrypt password hashing |
## Tech Stack

| Layer | Technology |
|---|---|
| Frontend | Next.js, React, Tailwind CSS, Recharts, Framer Motion |
| Backend | Node.js, Express.js, Axios |
| ML Service | Python, FastAPI, scikit-learn, SHAP, Pydantic |
| Database | Supabase (PostgreSQL) |
| Infrastructure | Docker, Docker Compose |
| Auth | JWT, bcrypt, Helmet, CORS |
## Prerequisites

- Node.js 18+
- Python 3.11+
- Docker & Docker Compose (for containerized setup)
- A Supabase project
## Setup

Clone the repository:

```bash
git clone https://github.com/Archisman-NC/AltCred.git
cd AltCred
```

Copy the example file and fill in your credentials:

```bash
cp .env.example backend/.env
```

Create `frontend/.env.local`:

```bash
NEXT_PUBLIC_API_URL=http://localhost:5000/api/v1
```

### Run Locally

```bash
# Backend
cd backend && npm install && npm run dev

# Frontend (new terminal)
cd frontend && npm install && npm run dev

# ML Service (new terminal)
cd ml && pip install -r requirements.txt
uvicorn ml.inference.app:app --host 0.0.0.0 --port 8000
```

### Run with Docker

The entire platform can be started with a single command:

```bash
make run
```

Or directly:

```bash
docker-compose up --build
```

| Service | Port | URL |
|---|---|---|
| Frontend | 3000 | http://localhost:3000 |
| Backend | 5000 | http://localhost:5000 |
| ML Service | 8000 | http://localhost:8000 |
The backend automatically waits for the ML service to be healthy before starting.
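This wait-for-healthy behavior is what Docker Compose healthchecks combined with `depends_on: condition: service_healthy` express. The snippet below is a hedged sketch of that pattern, not the project's actual `docker-compose.yml`:

```yaml
services:
  ml:
    build: ./ml
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 10s
      timeout: 5s
      retries: 5
  backend:
    build: ./backend
    depends_on:
      ml:
        condition: service_healthy
```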
## API Endpoints

### Backend (`/api/v1`)

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v1/auth/signup` | Register a new user |
| POST | `/api/v1/auth/login` | Authenticate a user |
| POST | `/api/v1/intake/submit` | Submit a financial assessment |
| GET | `/api/v1/analytics/predictions-summary` | Prediction statistics |
| GET | `/api/v1/analytics/risk-distribution` | Risk category breakdown |
| GET | `/api/v1/analytics/model-performance` | Model accuracy metrics |
| GET | `/api/v1/analytics/system-health` | System health status |
### ML Service

| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Service health check |
| POST | `/predict-credit-score` | Generate an ML prediction |
| POST | `/reload-model` | Hot-reload the active model |
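Since the stack lists Pydantic, the `/predict-credit-score` request body maps naturally onto a Pydantic model. This is a sketch only: the class name and field types are assumptions, while the field names mirror the example request:

```python
from pydantic import BaseModel

# Hypothetical request schema; field names match the example request,
# but this is not the service's actual model class.
class CreditScoreRequest(BaseModel):
    age: int
    annual_income: float
    monthly_inhand_salary: float
    num_bank_accounts: int
    num_credit_card: int
    interest_rate: float
    num_of_delayed_payment: int
    outstanding_debt: float
    credit_utilization_ratio: float
    total_emi_per_month: float
    monthly_balance: float
    occupation: str
    credit_mix: str
    payment_of_min_amount: str
    payment_behaviour: str

req = CreditScoreRequest(
    age=28, annual_income=65000, monthly_inhand_salary=5200,
    num_bank_accounts=2, num_credit_card=1, interest_rate=8.5,
    num_of_delayed_payment=0, outstanding_debt=2500,
    credit_utilization_ratio=15.5, total_emi_per_month=450,
    monthly_balance=2200, occupation="Software Engineer",
    credit_mix="Good", payment_of_min_amount="Yes",
    payment_behaviour="Low_spent_Small_value_payments",
)
print(req.age)
```

FastAPI validates incoming JSON against such a model automatically and returns a 422 response on mismatched fields.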
### Example Request

```bash
curl -X POST http://localhost:8000/predict-credit-score \
  -H "Content-Type: application/json" \
  -d '{
    "age": 28,
    "annual_income": 65000,
    "monthly_inhand_salary": 5200,
    "num_bank_accounts": 2,
    "num_credit_card": 1,
    "interest_rate": 8.5,
    "num_of_delayed_payment": 0,
    "outstanding_debt": 2500,
    "credit_utilization_ratio": 15.5,
    "total_emi_per_month": 450,
    "monthly_balance": 2200,
    "occupation": "Software Engineer",
    "credit_mix": "Good",
    "payment_of_min_amount": "Yes",
    "payment_behaviour": "Low_spent_Small_value_payments"
  }'
```

Example response:

```json
{
  "credit_score_category": "Good",
  "confidence": 0.87,
  "model_version": "credit_model_v1",
  "explanation": [
    {"feature": "outstanding_debt", "impact": -0.18},
    {"feature": "annual_income", "impact": 0.15},
    {"feature": "num_of_delayed_payment", "impact": -0.12}
  ],
  "latency_ms": 34.2
}
```

Every prediction includes a SHAP-based explanation showing the top 3 features that influenced the score.
## Explainability

```mermaid
graph LR
    A[User Input] --> B[Feature Service]
    B --> C[Random Forest]
    C --> D[Prediction]
    C --> E[SHAP TreeExplainer]
    E --> F[Top 3 Feature Impacts]
    D --> G[API Response]
    F --> G
```
This enables transparency and regulatory compliance for credit scoring decisions.
## Model Details

- Algorithm: Random Forest Classifier (100 estimators)
- Accuracy: ~78.3% (3-class classification)
- Target: `Credit_Score` → Poor (0), Standard (1), Good (2)
- Features: 15 financial attributes (income, debt, payment history, etc.)
- Registry: `ml/registry/model_registry.json` for version tracking
- Hot-swap: models can be reloaded via `POST /reload-model` without restarting
## Monitoring Dashboard

The `/analytics` page provides real-time visibility into:

- Prediction Volume: total and daily prediction counts
- Latency Tracking: average ML inference response time
- Risk Distribution: breakdown of Good / Standard / Poor scores
- System Health: active model version and service status
## Hybrid Scoring

AltCred uses a hybrid scoring architecture:

```
Final Score = (Rule Score × 0.4) + (ML Base Score × 0.6)
```
| ML Category | Base Score |
|---|---|
| Poor | 400 |
| Standard | 650 |
| Good | 800 |
If the ML service is unavailable, the system gracefully degrades to 100% rule-based scoring.
## Project Structure

```
AltCred/
├── frontend/              # Next.js application
│   ├── src/pages/         # Pages (Home, Dashboard, Analytics)
│   └── Dockerfile         # Multi-stage production build
├── backend/               # Express.js API server
│   ├── src/modules/       # auth, intake, credit-score, analytics
│   ├── src/services/      # mlService.js (ML client)
│   └── Dockerfile
├── ml/                    # Python ML workspace
│   ├── inference/         # FastAPI app, Feature Service
│   ├── registry/          # Model registry (JSON)
│   ├── models/            # Serialized models & encoders
│   └── Dockerfile
├── docker-compose.yml     # Orchestration with health checks
├── Makefile               # Dev CLI (make run, make stop)
├── .env.example           # Environment variable template
└── LICENSE                # MIT License
```
## Deployment

| Service | Recommended Platform |
|---|---|
| Frontend | Vercel |
| Backend | Render (via Blueprint) |
| ML Service | Render (via Blueprint) |
| Database | Supabase |

To deploy the backend and ML services on Render, connect your GitHub repository; Render detects the `render.yaml` Blueprint file and provisions both services automatically. Ensure all environment variables from `.env.example` are configured in your deployment platform.
## Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

## License

This project is licensed under the MIT License; see the LICENSE file for details.

Built with ❤️ by Archisman Nath Choudhury