MySQL · Performance · Backend

How I Optimized MySQL Queries Serving 1M+ Users

Lessons from redesigning N+1 query patterns, adding proper indexing, and batch-loading strategies that reduced API response times by 10-30% at EzyCourse.

March 15, 2025
6 min read

When you're serving 1M+ students across 7,000+ institutions on a SaaS platform, database performance stops being a luxury — it becomes the difference between a product that scales and one that buckles under its own weight. At EzyCourse, I spent weeks hunting down slow queries, and this post shares the patterns I repeatedly found and fixed.

The N+1 Query Problem

The single biggest offender I encountered was the classic N+1 query pattern. Imagine loading a list of courses, then for each course fetching the instructor, the enrolled-student count, and the latest review. With 50 courses, that's 1 + (50 × 3) = 151 database round trips. Each round trip costs network latency (typically 1-5ms), so those 151 queries alone can add 150-750ms before you even touch application logic.

The Fix: Batch Loading

Instead of fetching related data inside a loop, I restructured the queries to batch-load everything upfront. In AdonisJS, this means using preload() with proper relationship definitions. The result: 151 queries collapse into 4. One for the courses, one for all instructors via an IN clause, one for enrollment counts via GROUP BY, and one for the latest review per course via a window function.

// Before: N+1 pattern — one query for the list,
// then two more queries per course inside the loop
const courses = await Course.all()
for (const course of courses) {
  course.instructor = await User.find(course.instructorId)
  const [row] = await Enrollment.query()
    .where('courseId', course.id)
    .count('* as total')
  course.studentCount = row.$extras.total
}

// After: batch loading
const courses = await Course.query()
  .preload('instructor')
  .withCount('enrollments')
  .preload('latestReview')
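For intuition, here is roughly the SQL the batched version boils down to. Table and column names below are illustrative, not our exact schema:

```sql
-- 1. One query for the page of courses
SELECT * FROM courses;

-- 2. All instructors in a single round trip
SELECT * FROM users WHERE id IN (/* instructor ids from step 1 */);

-- 3. Enrollment counts for every course at once
SELECT course_id, COUNT(*) AS student_count
FROM enrollments
WHERE course_id IN (/* course ids from step 1 */)
GROUP BY course_id;

-- 4. Latest review per course via a window function (MySQL 8+)
SELECT * FROM (
  SELECT r.*,
         ROW_NUMBER() OVER (
           PARTITION BY course_id
           ORDER BY created_at DESC
         ) AS rn
  FROM reviews r
  WHERE course_id IN (/* course ids from step 1 */)
) ranked
WHERE rn = 1;
```

The application then stitches the three result sets onto the course list in memory, which is cheap compared to 150 extra network round trips.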

Indexing: The Silent Multiplier

The second win came from proper indexing. I profiled our slowest queries using EXPLAIN and found several that were doing full table scans on tables with millions of rows. Adding composite indexes on (tenant_id, created_at) for multi-tenant filtering, and covering indexes for frequent SELECT patterns, reduced some queries from 800ms down to under 50ms.
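As a sketch of what this looked like in practice (the enrollments table and exact columns here are stand-ins for our real schema), the composite index and the EXPLAIN check go hand in hand:

```sql
-- Composite index: tenant filter first, then the range/sort column
ALTER TABLE enrollments
  ADD INDEX idx_tenant_created (tenant_id, created_at);

-- Re-run the slow query under EXPLAIN and confirm:
--   key  = idx_tenant_created (index is actually chosen)
--   type = ref or range (no full table scan, i.e. not type = ALL)
EXPLAIN
SELECT id, course_id, created_at
FROM enrollments
WHERE tenant_id = 42
  AND created_at >= '2025-01-01'
ORDER BY created_at DESC
LIMIT 20;
```

Column order matters: equality columns (tenant_id) come before range columns (created_at), so MySQL can seek straight to the tenant's rows and read them in index order.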

Pagination with Cursor Instead of OFFSET

LIMIT/OFFSET pagination seems innocent until you paginate deeply. OFFSET 100000 forces MySQL to scan and discard 100,000 rows just to return the next 20. I switched large listing endpoints to cursor-based pagination, using WHERE id > last_seen_id with ORDER BY id, which seeks directly into the primary-key index and stays fast regardless of page depth.
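Side by side, the two approaches look like this (`:last_seen_id` is the id of the final row from the previous page, carried back by the client as the cursor):

```sql
-- OFFSET pagination: MySQL walks and throws away 100,000 rows
SELECT * FROM courses
ORDER BY id
LIMIT 20 OFFSET 100000;

-- Cursor (keyset) pagination: a single index seek, then 20 rows
SELECT * FROM courses
WHERE id > :last_seen_id
ORDER BY id
LIMIT 20;
```

The trade-off is that cursors only support "next page" style navigation over a stable sort key; you give up jumping to an arbitrary page number, which most infinite-scroll listings never needed anyway.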

The Result

These three changes — batch loading, proper indexing, and cursor pagination — reduced API response times by 10-30% across the board. More importantly, they made our performance predictable. A request that took 200ms today won't take 2 seconds next year when the dataset grows 10x.

The lesson I took away: the database is almost always where your scaling problems live, and profiling beats guessing every single time. EXPLAIN is your friend. Learn to read it.

Thanks for reading!

Got thoughts or questions? I'd love to hear from you.
