🦉 FocusFlow - AI Productivity Accountability Agent
Your Duolingo-style AI buddy that keeps you focused while coding
🎯 The Problem
Developers with ADHD and procrastination tendencies struggle with:
- Task paralysis: Breaking projects into manageable pieces
- Context switching: Getting distracted by unrelated files/tasks
- Accountability: No one keeping them on track during work sessions
- Progress tracking: Not knowing if they're making real progress
✨ The Solution
FocusFlow is an AI-powered productivity partner that:
- Breaks down projects into 5-8 micro-tasks (15-30 min each)
- Monitors your workspace in real-time (file changes or text input)
- Provides Duolingo-style nudges when you get distracted or idle
- Tracks focus metrics with streaks, scores, and visualizations
- Integrates with LLMs via MCP (Model Context Protocol) for natural language task management
🚀 Key Features
🎯 AI-Powered Project Onboarding
- Describe your project in plain English
- Get actionable micro-tasks instantly
- Smart task generation based on project type (web, API, etc.)
👁️ Real-Time Focus Monitoring
- Local Mode: Watches your project directory for file changes
- Demo Mode: Simulates workspace with text area (perfect for HuggingFace Spaces)
- Content-aware analysis (reads code changes, not just filenames)
🦉 Duolingo-Style Personality
- On Track: "Great work! You're making solid progress! 🎯"
- Distracted: "Wait, what are you working on? That doesn't look like the task! 🤨"
- Idle: "Files won't write themselves. Hoot hoot. 🦉"
📊 Productivity Dashboard
- Focus Score: 0-100 rating based on on-track percentage
- Streaks: Consecutive "On Track" checks 🔥
- Weekly Trends: Visualize your focus patterns
- State Distribution: See where your time goes
🍅 Built-in Pomodoro Timer
- 25-minute work sessions
- 5-minute break reminders
- Audio alerts + browser notifications
- Auto-switching between work and break modes
🔌 MCP Integration (Game Changer!)
Connect FocusFlow to Claude Desktop, Cursor, or any MCP-compatible client!
Available MCP Tools:
- `add_task(title, description, duration)` - Create tasks via conversation
- `get_current_task()` - Check what you should be working on
- `start_task(task_id)` - Begin a focus session
- `mark_task_done(task_id)` - Complete tasks
- `get_all_tasks()` - List all tasks
- `get_productivity_stats()` - View your metrics
MCP Resources:
- `focusflow://tasks/all` - Full task list
- `focusflow://tasks/active` - Current active task
- `focusflow://stats` - Productivity statistics
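These tools are served by Gradio's built-in MCP support. As a rough, hedged sketch of how a plain Python function becomes an MCP tool (the function body and UI wiring here are illustrative, not FocusFlow's actual `mcp_tools.py`), the key step is launching with `mcp_server=True`:

```python
# Hedged sketch - not FocusFlow's actual code. With Gradio's built-in MCP
# support, a function with type hints and a docstring is exposed as an MCP
# tool when the app is launched with mcp_server=True.
import gradio as gr

def add_task(title: str, description: str, duration: float = 25) -> str:
    """Create a new micro-task and return a confirmation message."""
    # FocusFlow would persist this via its SQLite task manager instead.
    return f"Task created: {title} ({int(duration)} min) - {description}"

demo = gr.Interface(
    fn=add_task,
    inputs=["text", "text", "number"],
    outputs="text",
    title="add_task",
)

if __name__ == "__main__":
    # Tools end up under /gradio_api/mcp/ as described above.
    demo.launch(server_port=5000, mcp_server=True)
```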
📦 Installation
Quick Start (Demo Mode)
# Clone the repository
git clone https://github.com/Rebell-Leader/FocusFlow.git
cd FocusFlow
# Install dependencies
pip install -r requirements.txt
# Run in demo mode (no API keys needed!)
python app.py
Open http://localhost:5000 in your browser.
With AI Provider (Optional)
FocusFlow supports multiple AI providers:
# Option 1: OpenAI
export AI_PROVIDER=openai
export OPENAI_API_KEY=your_key_here
# Option 2: Anthropic Claude
export AI_PROVIDER=anthropic
export ANTHROPIC_API_KEY=your_key_here
# Option 3: Google Gemini
export AI_PROVIDER=gemini
export GEMINI_API_KEY=your_key_here
# Option 4: vLLM (local inference)
export AI_PROVIDER=vllm
export VLLM_BASE_URL=http://localhost:8000/v1
export VLLM_MODEL=ibm-granite/granite-4.0-h-1b
# Then run
python app.py
No API keys? No problem! FocusFlow automatically uses a Mock AI agent with predefined responses for testing.
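The Mock agent's internals aren't shown in this README; as a rough idea of what "predefined responses" means, a hypothetical stand-in (class, method names, and responses are illustrative, not FocusFlow's actual `agent.py`) could look like this:

```python
# Hypothetical sketch of a no-key fallback agent; names and responses are
# illustrative, not FocusFlow's actual implementation.
import random

class MockFocusAgent:
    """Returns canned responses so every feature works without API keys."""

    def generate_tasks(self, project_description: str) -> list[dict]:
        # Fixed 5-task breakdown regardless of the description.
        return [
            {"title": f"Step {i} of: {project_description[:40]}", "duration": 25}
            for i in range(1, 6)
        ]

    def check_focus(self, workspace_activity: str, active_task: str) -> str:
        if not workspace_activity.strip():
            return "Idle: Files won't write themselves. Hoot hoot."
        # Pretend analysis: randomly report on-track or distracted.
        return random.choice([
            "On Track: Great work, keep going!",
            "Distracted: That doesn't look like the task!",
        ])
```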
For Hackathon Organizers (HuggingFace Spaces)
To enable AI features on demo deployments, set demo API keys as environment variables:
DEMO_ANTHROPIC_API_KEY=sk-ant-xxx # Checked first, falls back to user keys
DEMO_OPENAI_API_KEY=sk-xxx # Same fallback logic
DEMO_GEMINI_API_KEY=xxx # Same fallback logic
If demo keys run out of credits, FocusFlow gracefully falls back to Mock AI mode automatically.
🔗 Connecting to Claude Desktop (MCP)
Step 1: Start FocusFlow
python app.py
You'll see:
🔌 MCP Server enabled! Connect via Claude Desktop or other MCP clients.
* Streamable HTTP URL: http://localhost:5000/gradio_api/mcp/
🔗 Connecting via MCP (Claude Desktop / Windows)
FocusFlow runs an MCP server that you can connect to from external tools like Claude Desktop or LM Studio.
If running on WSL and connecting from Windows:
1. Ensure the app is running: `python app.py` (it listens on `0.0.0.0` by default).
2. Find your WSL IP address: run `wsl hostname -I` in your terminal.
3. Configure your MCP client:
   - Type: SSE (Server-Sent Events)
   - URL: `http://<YOUR_WSL_IP>:5000/gradio_api/mcp/sse`
   - (Or try `http://localhost:5000/gradio_api/mcp/sse` if localhost forwarding is working.)
Available MCP Tools:
- `get_active_task`: Get the currently active task.
- `add_task`: Create a new task.
- `update_task`: Update task status or details.
- `get_productivity_stats`: Get focus scores and metrics.
Configuration (mcp.json / claude_desktop_config.json):
{
"mcpServers": {
"focusflow": {
"url": "http://<YOUR_WSL_IP>:5000/gradio_api/mcp/sse"
}
}
}
Replace <YOUR_WSL_IP> with the IP address from step 2 (e.g., 172.x.x.x).
Step 2: Configure Claude Desktop
macOS
Edit ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"focusflow": {
"command": "python",
"args": ["/absolute/path/to/your/focusflow/app.py"]
}
}
}
Windows
Edit %APPDATA%\Claude\claude_desktop_config.json with the same JSON format.
Step 3: Restart Claude Desktop
Close and reopen Claude Desktop. You should see FocusFlow tools available!
Step 4: Test It Out
In Claude Desktop, try:
"Add a task to build a REST API with authentication"
"What task should I work on now?"
"Mark task 1 as done"
"Show me my productivity stats"
📝 Recent Updates
Version 1.0 (Hackathon Release)
- ✅ MCP Integration - Full Model Context Protocol support with 8 tools and 3 resources
- ✅ Voice Feedback - ElevenLabs integration for Duolingo-style audio nudges
- ✅ Demo API Keys - Support for `DEMO_ANTHROPIC_API_KEY`, `DEMO_OPENAI_API_KEY`, and `DEMO_ELEVEN_API_KEY` for hackathon deployments
- ✅ Productivity Dashboard - Focus scores, streaks, weekly trends, and state distribution charts
- ✅ Mock AI Mode - Works without API keys for testing and demos
- ✅ Graceful Degradation - Automatically falls back to Mock AI if API keys are invalid or out of credits
- ✅ Dual Launch Modes - Demo mode (text area) and Local mode (file monitoring)
- ✅ Comprehensive Testing - Full testing checklist in `TESTING_CHECKLIST.md`
- ✅ Error Handling - Invalid task IDs return helpful error messages in MCP tools
- ✅ Metrics Integration - MCP `get_productivity_stats()` includes focus scores and streaks
⚙️ Configuration & Environment Variables
Core Settings
| Variable | Default | Options | Description |
|---|---|---|---|
| `LAUNCH_MODE` | `demo` | `demo`, `local` | Workspace monitoring mode (see Launch Modes below) |
| `AI_PROVIDER` | `anthropic` | `openai`, `anthropic`, `gemini`, `vllm`, `mock` | AI provider to use |
| `MONITOR_INTERVAL` | `30` | Any integer | Seconds between automatic focus checks |
| `ENABLE_MCP` | `true` | `true`, `false` | Enable/disable MCP server |
AI Provider API Keys
Priority Order: Demo keys are checked first, then user keys; if neither is set (or valid), FocusFlow falls back to Mock AI.
User API Keys
| Variable | Description | Get Key From |
|---|---|---|
| `OPENAI_API_KEY` | Your personal OpenAI API key | https://platform.openai.com/api-keys |
| `ANTHROPIC_API_KEY` | Your personal Anthropic API key | https://console.anthropic.com/ |
| `ELEVEN_API_KEY` | Your personal ElevenLabs API key (optional) | https://elevenlabs.io/api |
| `LINEAR_API_KEY` | Your personal Linear API key (optional) | https://linear.app/settings/api |
Demo API Keys (For Hackathon Organizers)
| Variable | Description | Use Case |
|---|---|---|
| `DEMO_ANTHROPIC_API_KEY` | Shared Anthropic key for demos | Set on HuggingFace Spaces for judges/testers |
| `DEMO_OPENAI_API_KEY` | Shared OpenAI key for demos | Set on HuggingFace Spaces for judges/testers |
| `DEMO_ELEVEN_API_KEY` | Shared ElevenLabs key for voice | Set on HuggingFace Spaces for voice feedback |
How It Works:
# Priority chain (Anthropic example):
1. Check DEMO_ANTHROPIC_API_KEY (hackathon demo key)
2. If not found, check ANTHROPIC_API_KEY (user's personal key)
3. If not found or invalid, fall back to Mock AI (no errors!)
# Voice integration (optional):
1. Check DEMO_ELEVEN_API_KEY (hackathon demo key)
2. If not found, check ELEVEN_API_KEY (user's personal key)
3. If not found, gracefully disable voice (text-only mode)
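In code, that priority chain reduces to a couple of `os.getenv` fallbacks. A sketch (the helper names are hypothetical; the environment variables are the documented ones):

```python
# Hypothetical sketch of the documented key-priority chain.
import os

def pick_anthropic_key() -> str | None:
    """Demo key first, then the user's personal key, else None (Mock AI)."""
    return os.getenv("DEMO_ANTHROPIC_API_KEY") or os.getenv("ANTHROPIC_API_KEY")

def pick_eleven_key() -> str | None:
    """Voice is optional: no key simply means text-only feedback."""
    return os.getenv("DEMO_ELEVEN_API_KEY") or os.getenv("ELEVEN_API_KEY")

if pick_anthropic_key() is None:
    print("No Anthropic key found - falling back to Mock AI")
```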
vLLM Settings (Local Inference)
| Variable | Default | Description |
|---|---|---|
| `VLLM_BASE_URL` | `http://localhost:8000/v1` | vLLM server endpoint |
| `VLLM_MODEL` | `ibm-granite/granite-4.0-h-1b` | Model name |
| `VLLM_API_KEY` | `EMPTY` | API key (usually not needed for local) |
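Because vLLM exposes an OpenAI-compatible API, the settings above can be used directly with the standard `openai` Python client. A sketch using the documented defaults (the prompt itself is just an example):

```python
# Sketch: calling a local vLLM server through its OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("VLLM_BASE_URL", "http://localhost:8000/v1"),
    api_key=os.getenv("VLLM_API_KEY", "EMPTY"),  # vLLM usually ignores the key
)

response = client.chat.completions.create(
    model=os.getenv("VLLM_MODEL", "ibm-granite/granite-4.0-h-1b"),
    messages=[{"role": "user",
               "content": "Break 'build a REST API with auth' into 5 micro-tasks."}],
)
print(response.choices[0].message.content)
```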
API Key Management Best Practices
For Local Development:
# Copy the example file
cp .env.example .env
# Edit .env and add your personal keys
nano .env # or your preferred editor
For HuggingFace Spaces Deployment:
# In Space Settings > Variables, add:
LAUNCH_MODE=demo
AI_PROVIDER=anthropic
DEMO_ANTHROPIC_API_KEY=sk-ant-your-hackathon-key
For Testing Without API Keys:
# Just run - Mock AI activates automatically!
python app.py
# Status: "ℹ️ Running in DEMO MODE with Mock AI (no API keys needed). Perfect for testing! 🎭"
Graceful Degradation
FocusFlow never crashes due to missing or invalid API keys:
| Scenario | Behavior | User Experience |
|---|---|---|
| No API keys set | Uses Mock AI | ✅ Full demo functionality |
| Invalid API key | Falls back to Mock AI | ✅ App continues working |
| API out of credits | Falls back to Mock AI | ✅ Seamless transition |
| API rate limited | Retries, then Mock AI | ✅ No interruption |
Status Messages:
- `✅ Anthropic Claude initialized successfully (demo key)` - Demo key working
- `✅ OpenAI GPT-4 initialized successfully (user key)` - User key working
- `ℹ️ Running in DEMO MODE with Mock AI (no API keys needed)` - Fallback active
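At runtime, the behavior in the table above amounts to wrapping each provider call in a retry-then-fallback guard. A simplified, hypothetical sketch (not FocusFlow's actual error handling):

```python
# Hypothetical sketch of runtime graceful degradation around a provider call.
def check_focus_safely(agent, mock_agent, workspace_text, active_task, retries=1):
    for _ in range(retries + 1):
        try:
            return agent.check_focus(workspace_text, active_task)
        except Exception:
            # Invalid key, exhausted credits, rate limit, network error, ...
            continue
    return mock_agent.check_focus(workspace_text, active_task)
```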
🎮 Launch Modes Explained
FocusFlow supports two workspace monitoring modes:
Demo Mode (LAUNCH_MODE=demo)
Best for:
- HuggingFace Spaces deployments
- Replit deployments
- Testing without file system access
- Hackathon demos for judges
How it works:
- Provides a text area for simulating workspace activity
- Users type what they're working on
- AI analyzes text content for focus checks
- No file system permissions needed
Example:
export LAUNCH_MODE=demo
python app.py
# Monitor tab shows: "Demo Workspace" text area
User Workflow:
- Type: "Working on authentication API, creating login endpoint"
- Click "Check Focus Now"
- Result: "On Track! Great work! 🎯"
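A demo-mode focus check is essentially a comparison between the text you typed and the active task. A hedged sketch of the kind of prompt involved (the wording is illustrative, not FocusFlow's exact prompt):

```python
# Illustrative prompt construction for a demo-mode focus check.
def build_focus_check_prompt(active_task: str, workspace_text: str) -> str:
    return (
        "You are a Duolingo-style accountability buddy.\n"
        f"Active task: {active_task}\n"
        f"What the user says they are doing: {workspace_text}\n"
        "Reply with ON_TRACK, DISTRACTED, or IDLE plus one short, playful nudge."
    )

print(build_focus_check_prompt(
    "Build login endpoint",
    "Working on authentication API, creating login endpoint",
))
```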
Local Mode (LAUNCH_MODE=local)
Best for:
- Local development environments
- Real-time file monitoring
- Production use cases
- Personal productivity tracking
How it works:
- Uses the `watchdog` library to monitor file system changes (see the sketch below)
- Automatically detects file modifications in the project directory
- Reads actual file diffs for intelligent analysis
- Triggers focus checks automatically when files change
Example:
export LAUNCH_MODE=local
python app.py
# Monitor tab shows: "Watching directory: /your/project/path"
User Workflow:
- Start a task in Task Manager
- Edit files in your project
- FocusFlow automatically detects changes and runs focus checks
- Receive real-time feedback
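For reference, the watchdog sketch mentioned above: a minimal local-mode monitor. The handler and callback wiring are illustrative; FocusFlow's `monitor.py` may differ.

```python
# Sketch: reacting to file changes with watchdog and triggering a focus check.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class FocusCheckHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return
        # FocusFlow would read the diff here and ask the AI agent whether
        # the change matches the active task.
        print(f"Change detected: {event.src_path} -> running focus check")

observer = Observer()
observer.schedule(FocusCheckHandler(), path="/your/project/path", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```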
Choosing the Right Mode
| Use Case | Recommended Mode | Reason |
|---|---|---|
| HuggingFace Spaces | `demo` | No file system access in web deployments |
| Hackathon demo | `demo` | Easy for judges to test without setup |
| Local development | `local` | Real-time file monitoring is more natural |
| Replit | `demo` | Simpler, no file permissions issues |
| Personal productivity | `local` | Authentic workspace monitoring |
📁 Project Structure
focusflow/
├── app.py              # Main Gradio application
├── agent.py            # AI focus agent (OpenAI/Anthropic/Mock)
├── storage.py          # Task manager with SQLite
├── monitor.py          # File monitoring with watchdog
├── metrics.py          # Productivity tracking
├── mcp_tools.py        # MCP tools and resources
├── requirements.txt    # Python dependencies
├── .env.example        # Environment template
└── README.md           # This file
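The exact schema in `storage.py` isn't documented here; as a rough idea of the zero-config SQLite persistence, a hypothetical tasks table might look like this:

```python
# Hypothetical schema sketch - FocusFlow's actual storage.py may differ.
import sqlite3

conn = sqlite3.connect("focusflow.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tasks (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        title TEXT NOT NULL,
        description TEXT,
        duration_minutes INTEGER DEFAULT 25,
        status TEXT DEFAULT 'todo'  -- todo / active / done
    )
""")
conn.commit()
conn.close()
```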
🎥 Demo & Screenshots
Home Screen
Clean interface with feature overview and configuration status.
Onboarding
AI generates micro-tasks from project descriptions.
Task Manager
Kanban-style task board with drag-and-drop (coming soon).
Dashboard
Visualize focus scores, streaks, and productivity trends.
Monitor
Real-time focus checks with Duolingo-style feedback.
🏆 Why This is Perfect for the Gradio MCP Hackathon
- Novel MCP Use Case: First MCP-powered productivity/accountability tool
- Deep Integration: Natural language task management through Claude Desktop
- Real Problem: Solves actual pain points for developers with ADHD
- Gradio Showcase: Uses tabs, plots, timers, state management, custom JS
- Demo-Friendly: Works without API keys, deployable to HF Spaces
- Production-Ready: SQLite persistence, metrics tracking, error handling
🧪 Testing
Quick Feature Test (5 minutes)
Use the comprehensive TESTING_CHECKLIST.md file for detailed testing instructions. Here's a quick verification:
# 1. Start the app
python app.py
# 2. Open browser
open http://localhost:5000
# 3. Test each tab:
# ✅ Home: Check status message shows AI provider
# ✅ Onboarding: Generate tasks from project description
# ✅ Task Manager: Create/edit/delete/start tasks
# ✅ Monitor: Perform focus checks (demo workspace or file changes)
# ✅ Dashboard: View metrics after focus checks
# ✅ Pomodoro: Start/pause/reset timer
Test Scenarios
Scenario 1: Demo Mode (No API Keys)
# No .env file needed
python app.py
# Expected: "ℹ️ Running in DEMO MODE with Mock AI"
# Test: All features work with predefined responses
Scenario 2: With Anthropic API Key
export AI_PROVIDER=anthropic
export ANTHROPIC_API_KEY=sk-ant-your-key
python app.py
# Expected: "✅ Anthropic Claude initialized successfully (user key)"
# Test: Intelligent task generation and focus analysis
Scenario 3: Demo Key (Hackathon Deployment)
export AI_PROVIDER=anthropic
export DEMO_ANTHROPIC_API_KEY=sk-ant-demo-key
python app.py
# Expected: "✅ Anthropic Claude initialized successfully (demo key)"
# Test: Uses demo key, falls back to Mock if exhausted
Scenario 4: MCP Integration
# 1. Configure Claude Desktop (see MCP section above)
# 2. Start FocusFlow
python app.py
# 3. In Claude Desktop, test tools:
# - "Add a task to implement OAuth2"
# - "What's my current task?"
# - "Show my productivity stats"
Automated Testing
Run the full test suite:
# Follow TESTING_CHECKLIST.md step-by-step
# Expected: All features pass without errors
# Time: ~15 minutes for comprehensive test
Common Issues & Solutions
| Issue | Solution |
|---|---|
| "No API key" error | Set AI_PROVIDER=mock or leave keys empty - Mock AI activates automatically |
| Charts show "Infinite extent" warning | Normal on first load with no data - warnings disappear after focus checks |
| MCP tools not visible in Claude | Restart Claude Desktop after config changes |
| File monitoring not working | Check LAUNCH_MODE=local and file permissions |
| Tasks not persisting | Check focusflow.db file exists and is writable |
🚀 Deployment
Deployment Option 1: HuggingFace Spaces (Recommended for Hackathon)
Step 1: Create Space
- Go to https://huggingface.co/spaces
- Click "Create new Space"
- Select "Gradio" as SDK
- Choose a name (e.g., `focusflow-demo`)
Step 2: Upload Files
Upload these files to your Space:
- `app.py`
- `agent.py`
- `storage.py`
- `monitor.py`
- `metrics.py`
- `mcp_tools.py`
- `requirements.txt`
- `README.md`
- `.env.example` (optional, for documentation)
Step 3: Configure Environment Variables
In Space Settings → Variables, add:
Option A: With Demo AI (Recommended for Judges)
LAUNCH_MODE=demo
AI_PROVIDER=anthropic
DEMO_ANTHROPIC_API_KEY=sk-ant-your-hackathon-shared-key
MONITOR_INTERVAL=30
ENABLE_MCP=true
Option B: Mock AI Only (No API Keys Needed)
LAUNCH_MODE=demo
AI_PROVIDER=mock
MONITOR_INTERVAL=30
ENABLE_MCP=true
Step 4: Deploy
- Click "Save" - Space will automatically rebuild and deploy
- Your app will be live at:
https://huggingface.co/spaces/yourusername/focusflow-demo - MCP server endpoint:
https://yourusername-focusflow-demo.hf.space/gradio_api/mcp/
Step 5: Test Deployment
- Open the Space URL
- Test onboarding → Generate tasks
- Test task manager → CRUD operations
- Test monitor → Focus checks
- Test dashboard → View metrics
- Test MCP (optional) → Connect from Claude Desktop
Deployment Option 2: Replit
Step 1: Import Repository
- Go to https://replit.com
- Click "Create Repl" โ "Import from GitHub"
- Paste your FocusFlow repository URL
Step 2: Configure Secrets
In Replit Secrets (Tools → Secrets):
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=your_key_here
LAUNCH_MODE=demo
Step 3: Run
python app.py
Step 4: Share
- Click "Share" button
- Copy the public URL
- MCP endpoint: `https://yourrepl.repl.co/gradio_api/mcp/`
Deployment Option 3: Local Development
Step 1: Clone Repository
git clone https://github.com/yourusername/focusflow.git
cd focusflow
Step 2: Install Dependencies
pip install -r requirements.txt
Step 3: Configure Environment
# Copy example
cp .env.example .env
# Edit .env with your settings
nano .env # or code .env, vim .env, etc.
Step 4: Run
# For local file monitoring
export LAUNCH_MODE=local
python app.py
# For demo mode (text area)
export LAUNCH_MODE=demo
python app.py
Step 5: Access
- Web UI: http://localhost:5000
- MCP endpoint: http://localhost:5000/gradio_api/mcp/
Deployment Option 4: Docker (Advanced)
Create Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV LAUNCH_MODE=demo
ENV AI_PROVIDER=mock
ENV ENABLE_MCP=true
EXPOSE 5000
CMD ["python", "app.py"]
Build and run:
docker build -t focusflow .
docker run -p 5000:5000 -e ANTHROPIC_API_KEY=your_key focusflow
Post-Deployment Checklist
After deploying to any platform:
- App loads without errors
- Home tab shows correct AI provider status
- Onboarding generates tasks
- Task Manager CRUD operations work
- Monitor tab performs focus checks
- Dashboard displays metrics (after checks)
- Pomodoro timer functions
- MCP endpoint accessible (optional)
- No sensitive data exposed in logs
- Database file (`focusflow.db`) created successfully
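A quick scripted version of the "app loads" and "MCP endpoint accessible" checks (endpoint paths taken from this README; point `BASE` at your deployment):

```python
# Smoke test: confirm the web UI and MCP route respond after deployment.
import requests

BASE = "http://localhost:5000"  # or your Space / Replit URL

# Web UI should load
print("UI:", requests.get(BASE + "/", timeout=10).status_code)

# MCP route reachability - a plain GET may not return 200 depending on the
# Gradio version; a connection error is the real failure signal here.
print("MCP:", requests.get(BASE + "/gradio_api/mcp/", timeout=10).status_code)
```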
Environment Variables Reference (Deployment)
Required:
- `LAUNCH_MODE` - Always set to `demo` for web deployments
Optional:
- `AI_PROVIDER` - `anthropic`, `openai`, `gemini`, `vllm`, or `mock` (default: `anthropic`)
- `DEMO_ANTHROPIC_API_KEY` - For hackathon/shared deployments
- `DEMO_OPENAI_API_KEY` - Alternative demo provider
- `ANTHROPIC_API_KEY` - User's personal key
- `OPENAI_API_KEY` - User's personal key
- `MONITOR_INTERVAL` - Seconds between checks (default: 30)
- `ENABLE_MCP` - Enable MCP server (default: true)
Deployment Troubleshooting
| Issue | Platform | Solution |
|---|---|---|
| Import errors | HF Spaces | Check requirements.txt includes all dependencies |
| "Port already in use" | Local | Change port in app.py or kill process using port 5000 |
| MCP not accessible | All | Ensure ENABLE_MCP=true and check firewall settings |
| Database errors | HF Spaces | Ensure space has write permissions (SQLite needs filesystem) |
| Mock AI always active | All | Check environment variables are set correctly |
| Slow performance | HF Spaces | Free tier has limited resources - consider upgrading |
🛠️ Tech Stack
- Frontend: Gradio 5.0+ (Python UI framework)
- Backend: Python 3.11+
- Database: SQLite (zero-config persistence)
- AI Providers: OpenAI GPT-4, Anthropic Claude, Google Gemini, vLLM, or Mock
- Voice: ElevenLabs text-to-speech (optional)
- File Monitoring: Watchdog (real-time filesystem events)
- MCP Integration: Model Context Protocol for LLM interoperability
- Charts: Gradio native plots (pandas DataFrames)
📈 Roadmap
- Mobile app (React Native + Gradio backend)
- GitHub integration (auto-detect tasks from issues)
- Slack/Discord notifications
- Team mode (shared accountability)
- Voice commands (Whisper integration)
- VS Code extension
🤝 Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
📄 License
MIT License - feel free to use this for personal or commercial projects!
🙏 Acknowledgments
- Built for the Gradio MCP Hackathon - competing for Track 1: Building MCP
- Voice integration powered by ElevenLabs - competing for $2,000 Best Use of ElevenLabs sponsor award
- Inspired by Duolingo's encouraging UX
- Uses Model Context Protocol for LLM integration
Made with ❤️ for developers with ADHD who just need a little nudge to stay focused.
Hoot hoot! 🦉