Platform Status
This deployment runs the full stack: frontend, backend API, AI engine, MongoDB, and Redis. Below is the current status of each feature.
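As a quick smoke test of the two HTTP services, the sketch below probes the backend API (port 3001) and the AI engine (port 5000). This is only a minimal sketch: the `/health` paths and the use of Node 18+'s global `fetch` are assumptions for illustration, so adjust them to match the actual routes.

```ts
// Hypothetical smoke test for the two HTTP services in this stack.
// Assumes Node 18+ (global fetch) and that each service exposes GET /health;
// the exact health-check paths are an assumption, not confirmed by this page.
const services = [
  { name: "Backend API (Express)", url: "http://localhost:3001/health" },
  { name: "AI Engine (Flask)", url: "http://localhost:5000/health" },
];

async function checkAll(): Promise<void> {
  for (const { name, url } of services) {
    try {
      // Abort each probe after 3 seconds so a hung service doesn't block the loop.
      const res = await fetch(url, { signal: AbortSignal.timeout(3000) });
      console.log(`${name}: ${res.ok ? "up" : `unhealthy (HTTP ${res.status})`}`);
    } catch (err) {
      console.log(`${name}: unreachable (${(err as Error).message})`);
    }
  }
}

checkAll();
```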
Live & Functional
- ✓ Backend API — Node.js/Express server (port 3001) with health checks, CORS, rate limiting
- ✓ AI Engine — Python Flask service (port 5000) with local ML models for code analysis and tutoring
- ✓ MongoDB + Redis — Database and caching layer running (ports 27017, 6379)
- ✓ Challenge Generation — AI endpoint `/challenges/generate` creates adaptive coding tasks (see the request sketch after this list)
- ✓ Code Execution (WebSocket) — Real-time code execution via the backend WebSocket, with a fallback to mock execution
- ✓ Judge0 Sandbox — Secure code execution environment for multiple languages
- ✓ Monitoring — Prometheus + Grafana for metrics and observability
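For a sense of how the challenge-generation endpoint above might be exercised, here is a minimal TypeScript sketch. Only the path `/challenges/generate` comes from this page; the host and port (whether the route is served by the backend on 3001 or the AI engine on 5000, and whether it carries a prefix such as `/api`), the request fields (`topic`, `difficulty`), and the response shape are assumptions.

```ts
// Illustrative call to the challenge-generation endpoint listed above.
// The path /challenges/generate comes from this status page; the base URL,
// request fields (topic, difficulty), and response shape are assumptions.
interface GenerateChallengeRequest {
  topic: string;
  difficulty: "easy" | "medium" | "hard";
}

async function generateChallenge(req: GenerateChallengeRequest): Promise<unknown> {
  const res = await fetch("http://localhost:3001/challenges/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) {
    throw new Error(`Challenge generation failed: HTTP ${res.status}`);
  }
  // The shape of the generated challenge is not documented here, so it is
  // returned as unknown for the caller to inspect.
  return res.json();
}

// Example usage:
generateChallenge({ topic: "arrays", difficulty: "easy" })
  .then((challenge) => console.log(challenge))
  .catch((err) => console.error(err));
```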
Not Yet Implemented
- ⚠ Authentication — Sign up/login, JWT tokens, user sessions (routes exist but not connected)
- ⚠ User Progress Persistence — Completed challenges, XP, achievements not saved to DB yet
- ⚠ Real-time Collaboration — Pair programming, shared sessions (WebSocket handlers ready, UI not integrated)
- ⚠ AI Tutor Chat — Full conversational AI (endpoint exists, frontend integration pending)
- ⚠ Learning Path Recommendations — Personalized curriculum (backend logic stubbed, needs ML training)
Demo Capacity
The current deployment is configured for ~50 concurrent users due to local AI model constraints (no GPU acceleration). For production use, the AI engine should be scaled with cloud GPU instances or switched to API-based services.
Want to test the live features? Try the coding challenge →