Why I Chose Cloudflare Workers Over Firebase for My iOS App Backend
- Mar 27
- 3 min read
When I needed a backend for a cycling club iOS app, the obvious choices were Firebase, AWS, or Supabase. I chose none of them. Here is why Cloudflare Workers was the right call — and what the architecture looks like in production.
The Requirements
The DCC Weekly Activities app needed a backend that could: fetch data from the Strava API, aggregate per-member stats for the current week, cache results for fast mobile responses, handle OAuth token refresh server-side (keeping secrets out of the iOS binary), and serve both JSON API and an interactive HTML feature dashboard. Traffic: ~50 active users, but response time matters — nobody wants to wait 3 seconds for a leaderboard.
Why Not Firebase, AWS, or Supabase?
Firebase — Firestore pricing is unpredictable. Vendor lock-in is heavy. For an aggregation pipeline (not CRUD), Firestore adds complexity without benefit.
AWS Lambda — Overkill. IAM policies, API Gateway config, cold starts. For a club app, this is like hiring a crane to hang a picture.
Supabase — Excellent for CRUD apps with PostgreSQL. But this is a data aggregation pipeline, not a database-backed app. The Worker fetches, transforms, and caches — no persistent storage needed beyond KV.
Cloudflare Workers won on every dimension: runs at the edge (sub-50ms cached responses), KV storage included, generous free tier (100K requests/day), deploys in under 10 seconds, and the entire backend fits in one JavaScript file.
The Architecture
One file — cloudflare-club-data-worker.js — handles everything. Four endpoints:
GET /club-data — fetches Strava club activities, aggregates per-member stats (distance, elevation, speed, ride count), returns JSON. Supports weekOffset for historical weeks.
POST /feature-request — receives in-app feature requests and creates Jira issues in the SCRUM project via the Jira REST API.
GET /features — queries Jira for all SCRUM issues, maps statuses to Live/Planned/Shelved, and renders a full interactive HTML dashboard.
GET /features-api — same data as /features but returns raw JSON for programmatic access.
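The four endpoints above can all be dispatched from one fetch handler, which is what keeps the whole backend in a single file. Here is a minimal routing sketch; the handler names (`handleClubData`, etc.) are assumptions for illustration, not the actual function names in `cloudflare-club-data-worker.js`:

```javascript
// Map (method, pathname) to one of the four endpoints.
// Returns null for anything unmatched, which becomes a 404.
function route(method, pathname) {
  if (method === "GET" && pathname === "/club-data") return "handleClubData";
  if (method === "POST" && pathname === "/feature-request") return "handleFeatureRequest";
  if (method === "GET" && pathname === "/features") return "handleFeaturesHtml";
  if (method === "GET" && pathname === "/features-api") return "handleFeaturesJson";
  return null;
}

// In the real Worker this object is the module's `export default`.
const worker = {
  async fetch(request, env) {
    const { pathname } = new URL(request.url);
    const handler = route(request.method, pathname);
    if (!handler) return new Response("Not found", { status: 404 });
    // ...dispatch to the matching handler here...
    return new Response(JSON.stringify({ handler }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```

A flat if-chain like this is all the routing a four-endpoint Worker needs; a router library would be pure overhead here.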
Token Proxy Pattern
The Strava client secret and refresh token live as Cloudflare environment secrets — they never touch the iOS binary. The Worker handles the full OAuth token refresh lifecycle: check KV for a cached access token, refresh it if expired (with a 5-minute buffer), store the new tokens back to KV, then proceed with the Strava request. The iOS app simply calls /club-data with no auth headers. This is critical for App Store compliance and security.
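The refresh lifecycle can be sketched as follows. The KV key, binding names, and secret names (`STRAVA_KV`, `STRAVA_CLIENT_ID`, and so on) are assumptions for illustration; the Strava token endpoint and its `{ access_token, refresh_token, expires_at }` response shape are from Strava's OAuth documentation:

```javascript
// Strava tokens carry an expires_at Unix timestamp. Refresh within a
// 5-minute buffer of expiry so an in-flight request never hits a 401.
const REFRESH_BUFFER_SECS = 5 * 60;

function tokenNeedsRefresh(expiresAt, nowSecs) {
  return nowSecs >= expiresAt - REFRESH_BUFFER_SECS;
}

// Hypothetical flow inside the Worker (binding/secret names assumed).
async function getAccessToken(env, nowSecs = Math.floor(Date.now() / 1000)) {
  const cached = await env.STRAVA_KV.get("strava_token", { type: "json" });
  if (cached && !tokenNeedsRefresh(cached.expires_at, nowSecs)) {
    return cached.access_token; // still valid, no network round-trip
  }
  const resp = await fetch("https://www.strava.com/oauth/token", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      client_id: env.STRAVA_CLIENT_ID,
      client_secret: env.STRAVA_CLIENT_SECRET,
      grant_type: "refresh_token",
      // Strava rotates refresh tokens, so prefer the cached one.
      refresh_token: cached ? cached.refresh_token : env.STRAVA_REFRESH_TOKEN,
    }),
  });
  const fresh = await resp.json(); // { access_token, refresh_token, expires_at }
  await env.STRAVA_KV.put("strava_token", JSON.stringify(fresh));
  return fresh.access_token;
}
```

The important detail is persisting the rotated refresh token back to KV: Strava issues a new one on each refresh, and losing it would force a manual re-authorization.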
KV Caching Strategy
Cache key: ISO week date string (e.g., "2026-03-16" for the Monday). TTL: 1 hour. A cron trigger runs every hour to pre-warm the cache for the current week. Result: the first request after a cache miss takes ~2 seconds (Strava API round-trip), but all subsequent requests return in under 50ms from the nearest Cloudflare edge location. For a club checking their stats throughout the week, this means instant responses.
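Deriving that cache key is a small pure function: find the Monday of the ISO week containing a given date, shifted by weekOffset. A sketch (the function name is an assumption, but the key format matches the "2026-03-16" example above):

```javascript
// Cache key: the Monday of the ISO week containing `date`, as YYYY-MM-DD.
// weekOffset = -1 gives last week's key, matching the API's weekOffset param.
function weekCacheKey(date, weekOffset = 0) {
  const d = new Date(date.getTime());
  // getUTCDay(): 0 = Sunday … 6 = Saturday; shift so Monday becomes day 0.
  const daysSinceMonday = (d.getUTCDay() + 6) % 7;
  d.setUTCDate(d.getUTCDate() - daysSinceMonday + weekOffset * 7);
  return d.toISOString().slice(0, 10);
}
```

Because the key is purely date-derived, the hourly cron pre-warm and any on-demand request compute the same key independently and always agree on which cache entry to read or write.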
The Feature Dashboard Trick
The most satisfying endpoint is /features. A single Worker serves both the iOS app (JSON) and the portfolio website (HTML). It queries Jira, maps issue labels to categories (auth, ride, leaderboard, etc.), maps statuses to Live/Planned/Shelved, merges with 35 hardcoded app features as a baseline, and renders a searchable, filterable HTML dashboard — all at the edge. The Wix portfolio site embeds it via iframe, so it is always up to date with zero manual maintenance.
See it live: DCC Feature Dashboard
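The status mapping at the heart of that endpoint is just a lookup table. A sketch, with the caveat that the Jira status names below are assumptions about the SCRUM project's workflow, not confirmed values:

```javascript
// Map a Jira issue status to one of the dashboard's three buckets.
// (Status names on the left are hypothetical workflow states.)
const STATUS_BUCKETS = {
  "Done": "Live",
  "In Progress": "Planned",
  "To Do": "Planned",
  "Won't Do": "Shelved",
};

function bucketForStatus(statusName) {
  // Unknown statuses default to Planned rather than disappearing
  // from the dashboard.
  return STATUS_BUCKETS[statusName] ?? "Planned";
}
```

Collapsing an arbitrary Jira workflow into three reader-facing buckets is what lets the same data serve both the iOS app and a public dashboard without exposing internal process states.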
What I Would Change
If I started over, I would add Durable Objects for real-time WebSocket push notifications — so the app updates instantly when new rides are posted instead of polling. I would also split the Worker into separate modules (auth, data, features) for unit testability. But for a club-sized app, the single-file approach is refreshingly simple. It ships fast, deploys in seconds, and the entire backend costs exactly zero dollars on the Cloudflare free tier.