✅ Full stack online: Backend + AI engine + MongoDB connected (~50 user capacity).

Platform Status

This deployment runs the full stack: frontend, backend API, AI engine, MongoDB, and Redis. The current status of each feature is summarized below.

Live & Functional

  • ✓ Backend API — Node.js/Express server (port 3001) with health checks, CORS, and rate limiting (setup sketch below)
  • ✓ AI Engine — Python Flask service (port 5000) with local ML models for code analysis and tutoring
  • ✓ MongoDB + Redis — Database and caching layer running (ports 27017 and 6379)
  • ✓ Challenge Generation — AI endpoint /challenges/generate creates adaptive coding tasks (request sketch below)
  • ✓ Code Execution (WebSocket) — Real-time code execution via the backend WebSocket, with a fallback to mocked results (client sketch below)
  • ✓ Judge0 Sandbox — Secure code execution environment for multiple languages (submission sketch below)
  • ✓ Monitoring — Prometheus + Grafana for metrics and observability (metrics sketch below)

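Of the items above, the backend middleware stack is the easiest to illustrate. The sketch below assumes the common cors and express-rate-limit packages; the /health route, the limit values, and the package choices are assumptions, not the actual configuration.

```typescript
// Sketch of an Express setup with a health check, CORS, and rate limiting.
// Route names and limit values are illustrative assumptions.
import express from "express";
import cors from "cors";
import rateLimit from "express-rate-limit";

const app = express();

app.use(cors());                                    // allow cross-origin requests from the frontend
app.use(rateLimit({ windowMs: 60_000, max: 100 })); // cap each IP at 100 requests per minute
app.use(express.json());

// Simple liveness endpoint for uptime checks and load balancers.
app.get("/health", (_req, res) => {
  res.json({ status: "ok", uptime: process.uptime() });
});

app.listen(3001, () => console.log("Backend API listening on :3001"));
```
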
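Challenge generation can be exercised directly against the AI engine. The endpoint path and port come from the list above; the JSON fields in the request and response are assumptions.

```typescript
// Sketch of a request to the AI engine's /challenges/generate endpoint.
// Payload and response field names are assumptions, not the real schema.
async function generateChallenge(topic: string, difficulty: string) {
  const res = await fetch("http://localhost:5000/challenges/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ topic, difficulty }),
  });
  if (!res.ok) throw new Error(`AI engine returned HTTP ${res.status}`);
  return res.json(); // expected to contain the generated coding task
}

generateChallenge("arrays", "beginner")
  .then((challenge) => console.log(challenge))
  .catch(console.error);
```
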
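For real-time execution, a client connects to the backend WebSocket, sends code, and listens for results. The /ws path and message shapes below are illustrative assumptions, not the actual protocol.

```typescript
// Browser-side sketch of a code-execution client over the backend WebSocket.
// The /ws path and message format are assumptions.
const socket = new WebSocket("ws://localhost:3001/ws");

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({
    type: "execute",
    language: "python",
    code: "print('hello world')",
  }));
});

socket.addEventListener("message", (event) => {
  // The backend returns either real sandbox output or a mocked
  // result when it falls back.
  const result = JSON.parse(event.data);
  console.log("execution result:", result);
});
```
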
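Judge0 itself exposes a REST API, so a synchronous submission looks roughly like this. The base URL assumes a self-hosted instance on Judge0's default port 2358, and language_id 71 maps to Python 3 in Judge0 CE.

```typescript
// Sketch of a direct Judge0 submission in synchronous (wait=true) mode.
// The base URL assumes the default self-hosted port 2358.
async function runInSandbox(sourceCode: string, stdin = "") {
  const res = await fetch(
    "http://localhost:2358/submissions?base64_encoded=false&wait=true",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        source_code: sourceCode,
        language_id: 71, // Python 3 in Judge0 CE
        stdin,
      }),
    },
  );
  const result = await res.json();
  // stdout/stderr hold program output; status describes the verdict.
  console.log(result.stdout, result.stderr, result.status);
  return result;
}

runInSandbox("print(int(input()) * 2)", "21").catch(console.error);
```

In this deployment, code submissions presumably flow through the backend rather than hitting Judge0 directly; calling the sandbox like this is mainly useful for local testing.
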
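Grafana reads from Prometheus, and Prometheus scrapes a /metrics endpoint. Below is a minimal sketch of how the backend might expose one using the prom-client package; the package choice, metric name, and port are assumptions.

```typescript
// Sketch of exposing Prometheus metrics from the Express backend
// using prom-client. /metrics is the conventional scrape target.
import express from "express";
import client from "prom-client";

const app = express();

// Collect default Node.js process metrics (CPU, memory, event loop lag).
client.collectDefaultMetrics();

// Example custom counter; the metric name is an illustrative assumption.
const httpRequests = new client.Counter({
  name: "http_requests_total",
  help: "Total HTTP requests handled",
});
app.use((_req, _res, next) => {
  httpRequests.inc();
  next();
});

app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", client.register.contentType);
  res.end(await client.register.metrics());
});

// In practice this would live in the same Express app as the API sketch above.
app.listen(3001, () => console.log("Metrics exposed on :3001/metrics"));
```
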
Not Yet Implemented

  • ⚠ Authentication — Sign-up/login, JWT tokens, user sessions (routes exist but are not yet connected)
  • ⚠ User Progress Persistence — Completed challenges, XP, and achievements are not yet saved to the database
  • ⚠ Real-time Collaboration — Pair programming and shared sessions (WebSocket handlers ready, UI not integrated)
  • ⚠ AI Tutor Chat — Full conversational AI (endpoint exists, frontend integration pending)
  • ⚠ Learning Path Recommendations — Personalized curriculum (backend logic stubbed, needs ML training)

Demo Capacity

The current deployment is configured for ~50 concurrent users due to local AI model constraints (no GPU acceleration). For production use, the AI engine should be scaled with cloud GPU instances or switched to API-based services.

Want to test the live features? Try the coding challenge →