Case Study · Real Money Fantasy Sports

My11Cricket — Fantasy Cricket.
Built to Beat Dream11.

A full-stack real money fantasy cricket platform — real-time live score ingestion, a point engine that pushes updates to 2,00,000 concurrent users mid-match, mega contest prize pool distribution, and a Dream11-grade experience built from scratch on React Native CLI.

Industry
Real Money Fantasy Sports
Platform
Android
Timeline
TBD
Our Role
End-to-end
Year
TBD

What is My11Cricket?

My11Cricket is a real money fantasy cricket platform — a full Dream11-grade product built from scratch. Users pick an 11-player cricket team before a match, join mega contests with large prize pools, and watch their team's points update in real time as the actual match unfolds ball by ball.

The engineering challenge isn't the team builder — it's what happens after the match starts. Live score data ingested from a third-party API, transformed into fantasy points, broadcast to 2,00,000 concurrent users with leaderboards updating in real time, and settled into prize payouts the moment the match ends. We built every layer of it.

Client Type
B2C India — Real Money Fantasy Sports
Target Users
18–40, Cricket-first Android users
Business Model
Platform cut from contest entry fees
Scope
Full product — design to deployment

The hard part isn't the team builder. It's match day.

Fantasy cricket apps look identical in screenshots. The difference between a platform that survives and one that collapses is what happens the moment 2,00,000 users open the app simultaneously when India vs Pakistan starts. Most platforms find out too late.

The Problems
  • Real-time point updates for 2,00,000 concurrent users during live matches — most backends weren't designed for this write and broadcast load
  • Third-party score API latency and downtime created stale leaderboards — users saw wrong point totals mid-match, destroying trust
  • Mega contest prize distribution required splitting large pools across thousands of ranked winners with no payout errors
  • Team lock enforcement — users attempting to edit teams after match toss required server-side cutoff, not UI-level blocking
  • Leaderboard ranking at 2,00,000 user scale needed to update in near real-time without full table recomputation on every score event
Our Solution
  • Score ingestion pipeline with API polling + webhook fallback — stale data detection triggers secondary source switch automatically
  • Point computation engine decoupled from broadcast layer — scores computed once, fan-out to all subscribed users via Redis pub/sub
  • Mega contest prize distribution engine with tiered payout rules — pool settled and payouts credited within 10 minutes of match end
  • Team lock enforced at API layer on match toss event — edit attempts rejected server-side regardless of client state
  • Incremental leaderboard updates using Redis sorted sets — rank recomputed in O(log N) per score event, not full table scan

Built to feel premium at every screen.

Every screen was designed with the end-user in mind — someone on the move, often distracted, who needs information fast and decisions to be obvious.

Everything Dream11 has. Built by us.

Six core systems — each one had to be production-grade before a single match went live. There's no graceful degradation in real money fantasy sports.

01
Fantasy Team Builder

11-player team selection with credit-based player pricing, captain/vice-captain multipliers (2x/1.5x), and role constraints (batsmen, bowlers, all-rounders, wicket-keepers). Multiple teams per contest supported.
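The constraints above — 11 players, a credit budget, role bounds, distinct captain and vice-captain — reduce to a validation function the builder runs before a team is saved. A minimal sketch, where the 100-credit cap and the exact role bounds are assumptions for illustration, not the platform's published rules:

```typescript
// Illustrative team validation. Credit cap and role bounds are assumed
// values, not the exact My11Cricket configuration.
type Role = "WK" | "BAT" | "AR" | "BOWL";

interface Player {
  id: string;
  role: Role;
  credits: number;
}

interface TeamSelection {
  players: Player[];
  captainId: string;
  viceCaptainId: string;
}

const CREDIT_CAP = 100; // assumed total credit budget
const ROLE_BOUNDS: Record<Role, [number, number]> = {
  WK: [1, 4],   // wicket-keepers
  BAT: [3, 6],  // batsmen
  AR: [1, 4],   // all-rounders
  BOWL: [3, 6], // bowlers
};

function validateTeam(team: TeamSelection): string[] {
  const errors: string[] = [];
  if (team.players.length !== 11) errors.push("team must have exactly 11 players");
  const spent = team.players.reduce((sum, p) => sum + p.credits, 0);
  if (spent > CREDIT_CAP) errors.push(`credit cap exceeded: ${spent} > ${CREDIT_CAP}`);
  for (const role of Object.keys(ROLE_BOUNDS) as Role[]) {
    const [min, max] = ROLE_BOUNDS[role];
    const count = team.players.filter((p) => p.role === role).length;
    if (count < min || count > max) errors.push(`${role} count ${count} outside ${min}-${max}`);
  }
  const ids = new Set(team.players.map((p) => p.id));
  if (!ids.has(team.captainId)) errors.push("captain must be in the team");
  if (!ids.has(team.viceCaptainId)) errors.push("vice-captain must be in the team");
  if (team.captainId === team.viceCaptainId) errors.push("captain and vice-captain must differ");
  return errors;
}
```

The same function runs client-side for instant feedback and server-side as the source of truth — the 2x/1.5x captain multipliers apply later, at point computation, not here.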

02
Live Score Ingestion Engine

Real-time score data pulled from SportRadar/CricAPI with sub-30-second latency. Automatic failover between sources on API downtime. Raw score events transformed into fantasy points by a custom rules engine.
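The failover decision itself is small: if the primary push feed goes quiet past a threshold, serve from the polling backup; when pushes resume, switch back for the lower latency. A sketch of that decision, using the 45-second gap threshold described later in this case study — the surrounding state machine is simplified for illustration:

```typescript
// Stale-feed detector: decides which score source to trust based on how
// long the primary webhook feed has been silent.
type Source = "primary-webhook" | "backup-polling";

const STALE_THRESHOLD_MS = 45_000; // webhook gap that triggers failover

function nextSource(lastPrimaryEventAtMs: number, nowMs: number): Source {
  // Primary silent past the threshold -> serve from the polling backup.
  // Primary delivering again -> switch back for push latency.
  return nowMs - lastPrimaryEventAtMs > STALE_THRESHOLD_MS
    ? "backup-polling"
    : "primary-webhook";
}
```

Running this check on a timer (rather than only on incoming events) is what catches the failure mode where the primary stops sending anything at all.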

03
Real-Time Point & Rank Engine

Fantasy points computed on every ball event and broadcast to 2,00,000 concurrent users via WebSocket. Leaderboard ranks updated using Redis sorted sets — O(log N) per update, never a full recompute.

04
Mega Contest Engine

Large-pool contests with thousands of participants and tiered prize structures. Entry fee collection, pool formation, and prize distribution all handled atomically — payouts credited within 10 minutes of match completion.
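Tiered distribution has one hard invariant: the tiers must exactly exhaust the pool after the platform cut, or settlement produces payout errors. A sketch of the pool math and the per-winner lookup — the tier boundaries, fee, and cut percentage in the test are illustrative assumptions, not the platform's actual structure:

```typescript
// Tiered prize distribution sketch. Amounts are in whole rupees here;
// production code should use integer paise to avoid float drift.
interface PrizeTier {
  fromRank: number;        // inclusive
  toRank: number;          // inclusive
  amountPerWinner: number; // same prize for every rank in the tier
}

// Pool = total entry fees minus the platform cut, per the business model.
function buildPool(entryFee: number, participants: number, platformCutPct: number): number {
  return Math.floor(entryFee * participants * (1 - platformCutPct / 100));
}

// Map each ranked user to their tier's prize; ranks outside all tiers win nothing.
function settle(tiers: PrizeTier[], ranks: Map<string, number>): Map<string, number> {
  const payouts = new Map<string, number>();
  for (const [userId, rank] of ranks) {
    const tier = tiers.find((t) => rank >= t.fromRank && rank <= t.toRank);
    if (tier) payouts.set(userId, tier.amountPerWinner);
  }
  return payouts;
}
```

Before a contest opens, a sanity check sums `(toRank - fromRank + 1) * amountPerWinner` across tiers and rejects any configuration that doesn't equal the pool exactly.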

05
Wallet & Prize Settlement

In-app wallet with UPI and card deposits. Winnings credited instantly post-match. Withdrawal processing with KYC verification, TDS deduction compliance, and fraud velocity checks.

06
Operator Admin Panel

Match scheduling, contest creation, prize pool configuration, live match monitoring, user KYC management, payout approvals, and platform revenue dashboard — built for day-to-day operations.

The numbers.

Real across the board.

300K+
Total app downloads
200K+
Peak concurrent users during live matches
45%
30-day user retention
$100K+
Revenue generated
<30sec
Live score-to-point update latency
<10min
Prize settlement after match end

Stack chosen for match-day load.

React Native CLI over Expo — live score animations and real-time leaderboard updates needed raw native performance. Everything else was chosen to handle the traffic spike when a major match starts.

Frontend
React Native (CLI)
Language
TypeScript
State
Redux Toolkit + RTK Query
Animations
Reanimated 3 + Lottie
Backend
Node.js + Express
Real-Time
Socket.IO (clustered)
Score API
SportRadar + CricAPI (failover)
Database
PostgreSQL + Redis
Queue
BullMQ (point computation jobs)
Auth
JWT + OTP + KYC (Digilocker)
Payments
Razorpay
Hosting
AWS EC2 (Auto Scaling) + S3

Architecture built for match-day traffic spikes.

Fantasy sports has the most brutal traffic pattern in consumer apps — near-zero load between matches, then 2,00,000 simultaneous connections the moment a match starts. The architecture had to handle both states: no paying for peak capacity around the clock, and no falling over when the spike hits.

Score Ingestion & Point Engine
  • Primary score feed from SportRadar via webhook push — ball-by-ball events received, validated, and queued in BullMQ for point computation
  • CricAPI polling as secondary source — activated automatically if SportRadar webhook gap exceeds 45 seconds, ensuring no stale leaderboard during API outages
  • Point computation workers process score events from queue — fantasy points calculated per player per event using configurable scoring rules (a six = +6 pts, a wicket = +25 pts, and so on)
  • Computed points written to Redis player score hash — single source of truth for all leaderboard queries, never read from PostgreSQL during live match
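The rules engine at the heart of this pipeline maps raw ball events to per-player point deltas, then accumulates them into the totals that back every leaderboard query. A minimal sketch: the six and wicket values mirror the examples above; the catch value and event shapes are illustrative assumptions.

```typescript
// Configurable scoring rules engine sketch: ball event in, point delta out.
type BallEvent =
  | { type: "runs"; batterId: string; runs: number }
  | { type: "wicket"; bowlerId: string }
  | { type: "catch"; fielderId: string };

const RULES = {
  runPoints: 1,     // +1 per run, so a six = +6 pts (matches the text)
  wicketPoints: 25, // a wicket = +25 pts (matches the text)
  catchPoints: 8,   // illustrative assumption
};

function pointsFor(event: BallEvent): { playerId: string; points: number } {
  switch (event.type) {
    case "runs":
      return { playerId: event.batterId, points: event.runs * RULES.runPoints };
    case "wicket":
      return { playerId: event.bowlerId, points: RULES.wicketPoints };
    case "catch":
      return { playerId: event.fielderId, points: RULES.catchPoints };
  }
}

// Accumulate into per-player totals — the in-memory analogue of the Redis
// player score hash that serves all live leaderboard reads.
function applyEvent(totals: Map<string, number>, event: BallEvent): void {
  const { playerId, points } = pointsFor(event);
  totals.set(playerId, (totals.get(playerId) ?? 0) + points);
}
```

Keeping the rule values in configuration, not code, is what lets operators adjust scoring between match formats without a deploy.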
Real-Time Leaderboard at 2L Concurrent Users
  • Each contest leaderboard backed by a Redis sorted set — score update is a single ZADD operation, rank query is ZRANK, both O(log N)
  • User WebSocket connections subscribed to contest-scoped channels — point updates fan-out via Redis pub/sub, not individual pushes
  • Leaderboard pagination served from Redis with cursor-based navigation — top 100 always live, user's own rank always live, middle pages on-demand
  • Auto Scaling group triggers at 60% CPU — new EC2 instances join Socket.IO cluster via Redis adapter within 3 minutes of spike detection
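The sorted-set operations above are easiest to see in a toy model. This sketch reproduces the ZADD / rank / top-N semantics in memory — Redis itself implements the sorted set with a skip list to get O(log N) updates; the linear scan here is for clarity only, not a performance claim.

```typescript
// In-memory model of the Redis sorted-set leaderboard, showing the
// semantics the architecture relies on (ZADD to update, rank lookup,
// top-N page). Not a performance model: Redis uses a skip list.
class Leaderboard {
  private scores = new Map<string, number>();

  // Analogue of: ZADD contest:<id> <score> <userId>
  zadd(userId: string, score: number): void {
    this.scores.set(userId, score);
  }

  // 1-based rank, highest score first; ties share a rank.
  rank(userId: string): number | undefined {
    const score = this.scores.get(userId);
    if (score === undefined) return undefined;
    let better = 0;
    for (const s of this.scores.values()) if (s > score) better++;
    return better + 1;
  }

  // Analogue of the "top 100 always live" page.
  top(n: number): [string, number][] {
    return [...this.scores.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
  }
}
```

The key property is that a score update touches one entry, not the table — which is exactly why moving this layer out of SQL removed the full-recompute problem.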
Contest & Prize Settlement
  • Match lock event (toss) triggers server-side team freeze — all subsequent edit API calls return 423 Locked regardless of client state
  • Prize pool formation: entry fees collected into escrow wallet, platform cut deducted at contest close, remaining pool distributed per tiered prize structure
  • Settlement job triggered by match COMPLETED event — ranks finalized from Redis sorted set, prize amounts computed, wallet credits executed in batched atomic transactions
  • TDS deduction (30% on net winnings above ₹10,000) computed and recorded per settlement — compliance ledger maintained separately for tax reporting
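The TDS line item reduces to a small pure function. A sketch under the rule as stated above — read here as 30% of the full net winnings once they exceed ₹10,000 (the pre-2023 Indian TDS interpretation); the exact reading and rounding policy belong to the compliance ledger, and production code should work in integer paise:

```typescript
// TDS computation sketch, per the rule stated in the text:
// 30% on net winnings above ₹10,000. Amounts in whole rupees.
const TDS_RATE = 0.3;
const TDS_THRESHOLD = 10_000;

interface SettlementLine {
  gross: number;    // prize won
  tds: number;      // tax deducted at source, recorded in the compliance ledger
  credited: number; // amount actually credited to the wallet
}

function settleWinnings(netWinnings: number): SettlementLine {
  const tds = netWinnings > TDS_THRESHOLD ? Math.round(netWinnings * TDS_RATE) : 0;
  return { gross: netWinnings, tds, credited: netWinnings - tds };
}
```

Recording `gross`, `tds`, and `credited` as separate fields per settlement is what keeps the tax-reporting ledger reconstructible without re-deriving anything.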

Four problems that only appear when 2,00,000 people watch the same match.

Fantasy sports infrastructure looks fine in staging. It breaks in production during an India match. These are the four failures we anticipated and engineered around before go-live.

Challenges We Faced
  • 01. Score API downtime during live matches — primary source going dark mid-game left leaderboards frozen, users assumed the platform was broken
  • 02. Leaderboard computation at 2,00,000 concurrent users — naive SQL rank queries caused database CPU to spike to 100% on every score event during load testing
  • 03. Prize settlement disputes — users with identical fantasy points needed deterministic tiebreaker logic, otherwise same-rank users received different prize amounts
  • 04. Match-day traffic spike from near-zero to 2,00,000 concurrent users in under 5 minutes — fixed infrastructure couldn't absorb the ramp rate
How We Solved Them
  • Dual-source score ingestion with automatic failover — platform detects primary API silence and switches to polling backup in under 45 seconds. Users never see a frozen leaderboard
  • Entire leaderboard layer moved to Redis sorted sets — SQL never queried during live match. ZADD on score update, ZRANK for user position. Database load during match-day reduced by 94%
  • Tiebreaker resolved by team submission timestamp — earlier submission wins on equal points. Rule disclosed in contest terms, computed deterministically at settlement, no disputes
  • AWS Auto Scaling with pre-warm trigger — 30 minutes before scheduled match start, scaling policy activates proactively. Instances ready before the spike, not after
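The tiebreaker above fits in a single comparator: points descending, then team submission time ascending, so earlier submission wins on equal points. A sketch, with field names chosen for illustration:

```typescript
// Deterministic settlement ordering: points desc, then earlier
// submission wins on ties — same rule every run, so no disputes.
interface Entry {
  userId: string;
  points: number;
  submittedAt: number; // epoch ms of team submission
}

function rankEntries(entries: Entry[]): Entry[] {
  return [...entries].sort(
    (a, b) => b.points - a.points || a.submittedAt - b.submittedAt
  );
}
```

Because the comparator is a total order over (points, submittedAt), re-running settlement always yields the same ranking — which is the property that makes the disclosed rule enforceable.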

Dream11 has 200 engineers. We built the same core product with a small team — because we made the right architecture decisions upfront. Redis sorted sets for leaderboards, dual-source score ingestion, pre-warmed auto scaling. None of these are clever tricks. They're just the right decisions made before the first match went live.

Abhinav
Founder, 1000xcodes · Architect, My11Cricket
Building a fantasy sports platform?

Live scoring, real-time leaderboards, prize settlement, and match-day traffic spikes — this is a category where the wrong infrastructure decisions surface on day one of your first live match. Let's get them right.

Book a Free Call →