Deployment¶
Overview¶
The application runs entirely in Docker and is exposed to the internet via a Cloudflare Tunnel. No open ports or public IP required on the host machine.
Prerequisites¶
- Docker and Docker Compose
- A Cloudflare account with a domain
- `cloudflared` CLI (optional, for initial tunnel setup)
Docker services¶
All services are defined in docker-compose.yml:
- postgres — PostgreSQL 16 database with a health check
- api — Express API server (Node 22 Alpine, multi-stage build, port 4000). Runs as the non-root `node` user via an entrypoint script that fixes volume permissions with `su-exec`
- web — React SPA served by nginx (`stable-alpine-slim`, port 80 internally, 3000 externally)
- cloudflared — Cloudflare Tunnel client (enabled via the `tunnel` profile)
Step-by-step¶
1. Prepare environment¶
Edit .env and set production values for:
- POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB
- DATABASE_URL (must match the postgres credentials above)
- JWT_SECRET — a strong random secret for signing auth tokens (required)
- GIT_TOKEN_SECRET — a strong random secret for encrypting git PATs at rest (required)
- WORKSPACE_PATH — root directory for cloned project repos (default: /workspace)
- LOG_DIR — directory for log files (default: ./logs, which resolves to /app/packages/api/logs/ in Docker)
- LOG_RETENTION_DAYS — days to keep log files before automatic deletion (default: 30)
- APP_URL — public URL of the app (e.g. https://coqu.aimost.pl). Required for Google OAuth redirect URIs to work correctly behind a reverse proxy or Cloudflare Tunnel.
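The two required secrets can be generated with `openssl` — a common approach, shown here as a suggestion rather than something mandated by the source:

```shell
# Generate strong random secrets for the .env file.
# `openssl rand -hex 32` emits 32 random bytes as a 64-character hex string.
JWT_SECRET=$(openssl rand -hex 32)
GIT_TOKEN_SECRET=$(openssl rand -hex 32)

# Print in .env format (copy these lines into .env).
echo "JWT_SECRET=${JWT_SECRET}"
echo "GIT_TOKEN_SECRET=${GIT_TOKEN_SECRET}"
```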
2. Set up Cloudflare Tunnel¶
- Go to Cloudflare Zero Trust Dashboard → Networks → Tunnels
- Create a new tunnel (e.g. `coqu`)
- Copy the tunnel token
- Add it to `.env`
- In the tunnel's Public Hostname settings, add:
    - Hostname: `your-domain.com` (or a subdomain)
    - Service: `http://web:80`
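For reference, a minimal sketch of what the `cloudflared` service in docker-compose.yml might look like. The variable name `TUNNEL_TOKEN` and the exact image/command are assumptions based on common `cloudflared` usage, not taken from the source:

```yaml
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}  # assumed variable name in .env
    profiles:
      - tunnel                        # matches the tunnel profile described above
    restart: unless-stopped
```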
3. Deploy¶
Database migrations are applied automatically on every container start (via prisma migrate deploy in the API entrypoint script). The entrypoint starts as root to fix volume ownership, then drops privileges to the node user via su-exec before running migrations and the server. No manual migration step is needed.
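The deploy step itself is presumably the compose invocation that also appears under "Useful commands" (an assumption; the original command block is not shown here):

```sh
docker compose --profile tunnel up --build -d
```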
Useful commands¶
```sh
# View logs
docker compose logs -f

# View logs for a specific service
docker compose logs -f api

# Restart a service
docker compose restart api

# Stop everything
docker compose --profile tunnel down

# Rebuild and restart
docker compose --profile tunnel up --build -d
```
Without Cloudflare Tunnel¶
If you want to expose the app differently (e.g. behind a reverse proxy):
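The command here was presumably the plain compose invocation without the tunnel profile (an assumption inferred from the profile usage elsewhere on this page):

```sh
docker compose up --build -d
```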
This starts postgres, api, and web. Web is available at http://localhost:3000.
Data persistence¶
Three named Docker volumes persist data across container restarts and rebuilds:
- `postgres_data` — PostgreSQL database files
- `workspace_data` — cloned project repositories (mounted at `/workspace`)
- `home_data` — mounted at `/home/node` on the API container. Persists agent data (`~/.claude/`), the global environment file (`~/.coqu/.env`), npm global packages (`~/.npm-global/`), and other home-directory state. This is required because agent SDKs are installed globally via `npm install -g` at runtime and their configuration lives in `$HOME`.
To reset all data:
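The original reset command is not shown; based on the `-v` flag mentioned in the warning that follows, it is presumably:

```sh
docker compose --profile tunnel down -v
```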
Warning: -v deletes all volumes and all data (database, agent config, env file).
CI/CD¶
Production deployment¶
The workflow in .github/workflows/deploy-prod.yml automatically deploys on every push to main:
- Checks out the repository on a self-hosted runner
- Creates `.env` from the `ENV_FILE` repository secret
- Stops the previous deployment
- Builds and starts all services (including Cloudflare Tunnel)
- Logs service status for debugging
Documentation deployment¶
The workflow in .github/workflows/docs.yml deploys documentation to GitHub Pages when files in docs/ or mkdocs.yml change on main. It can also be triggered manually via workflow_dispatch. The site is built with MkDocs Material.
Updating (manual)¶
Migrations are applied automatically on container start.
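The update commands themselves are not shown in the source; a manual update presumably follows the standard pull-and-rebuild pattern (a sketch, under that assumption):

```sh
git pull
docker compose --profile tunnel up --build -d
```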