Optimising MongoDB Indexes: The Hidden Cost of Poor Index Design

We recently encountered a MongoDB performance issue that I wanted to share as a cautionary tale for database architects.

When designing MongoDB collections, index selection is critical – yet often overlooked until problems emerge. Here’s what our team discovered:

A collection with 10 million documents was experiencing query slowdowns. Investigation revealed two major issues:

Over-indexing: We had created 12 separate indexes, consuming 15GB of RAM. Reducing to 5 strategic compound indexes cut memory usage by 60% and improved write performance by 2.8x.
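As a sketch of what that consolidation looks like in mongosh (the collection and field names here are hypothetical, not our actual schema):

```javascript
// Hypothetical example: three redundant single-field indexes replaced by one
// compound index. Field names (customerId, status, createdAt) are illustrative.

// Drop the single-field indexes that the compound index will subsume.
db.orders.dropIndex({ customerId: 1 });
db.orders.dropIndex({ status: 1 });
db.orders.dropIndex({ createdAt: 1 });

// One compound index can serve queries on customerId alone,
// on customerId + status, and on all three fields (the index-prefix rule),
// ordered equality-first per the equality/sort/range guideline.
db.orders.createIndex({ customerId: 1, status: 1, createdAt: -1 });
```

The prefix rule is what makes the consolidation pay off: each compound index replaces several single-field ones without losing query coverage.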

Indexing frequently-updated fields: One index included a “lastUpdated” timestamp modified in 70% of operations. Removing this field from the index reduced write latency from 85ms to 16ms – a 5.3x improvement!
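A minimal sketch of the fix, again with illustrative names (only “lastUpdated” is from the real incident):

```javascript
// Hypothetical sketch: the volatile lastUpdated field is dropped from the
// index definition so routine updates no longer rewrite index entries.

// Before: every write touching lastUpdated forced an index-entry rewrite.
// db.orders.createIndex({ accountId: 1, lastUpdated: -1 });

// After: index only the stable query fields.
db.orders.createIndex({ accountId: 1, createdAt: -1 });

// Queries that need recency ordering can sort in memory over the
// already-narrowed result set instead of relying on an indexed lastUpdated.
db.orders.find({ accountId: 42 }).sort({ lastUpdated: -1 }).limit(20);
```

The trade-off is an in-memory sort on a small result set in exchange for cheaper writes on 70% of operations, which is what drove the latency drop above.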

Numbers don’t lie:

  • Queries: Response time improved from 450ms to 38ms
  • Write operations: Throughput increased from 2,100 to 5,800 ops/sec
  • Index maintenance overhead: Reduced from 22% to 6% of total CPU
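Per-query improvements like these can be verified with explain("executionStats"); a minimal mongosh sketch (collection and filter are illustrative):

```javascript
// Confirm the query uses an index scan (IXSCAN) rather than a full
// collection scan (COLLSCAN), and see how long it actually takes.
const result = db.orders
  .find({ customerId: 42, status: "shipped" })
  .explain("executionStats");

printjson({
  stage: result.queryPlanner.winningPlan.stage,           // FETCH/IXSCAN expected, not COLLSCAN
  millis: result.executionStats.executionTimeMillis,
  docsExamined: result.executionStats.totalDocsExamined,  // should be close to nReturned
  nReturned: result.executionStats.nReturned
});
```

A large gap between totalDocsExamined and nReturned is the usual sign that the winning plan is scanning far more documents than the query returns.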

Key lessons:

  • Monitor index size-to-collection ratio (aim for <30%)
  • Avoid indexing frequently-updated fields, especially high-cardinality ones
  • Use compound indexes strategically rather than many single-field indexes
  • Regularly audit index usage stats with db.collection.aggregate([{ $indexStats: {} }])
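The size-to-collection ratio in the first lesson comes straight from db.collection.stats(): totalIndexSize over size. A small runnable sketch of the arithmetic, with made-up byte counts standing in for real stats output:

```javascript
// Hypothetical numbers standing in for db.collection.stats() output:
// stats.size is the uncompressed data size in bytes,
// stats.totalIndexSize the combined size of all indexes in bytes.
function indexRatio(stats) {
  return stats.totalIndexSize / stats.size;
}

// 50GB of data, 6GB of indexes.
const stats = { size: 50 * 1024 ** 3, totalIndexSize: 6 * 1024 ** 3 };
const ratio = indexRatio(stats);
console.log(`index/collection ratio: ${(ratio * 100).toFixed(1)}%`); // 12.0%, under the 30% target
```

Checking this ratio on a schedule turns the audit from a one-off rescue into routine hygiene.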
