Our Product · Case Study
Avluz is a price intelligence platform built by TrueLeaf Tech, indexing nearly three million products across India's largest online retailers and surfacing structured price history, deal alerts, and category-level trend data to shoppers and analysts.
The name itself is a deliberate construction: a blend of value and visibility, the two things the product is designed to deliver. Shoppers see the real value of a deal — not just today's price, but the trajectory of that price over weeks and months. Analysts see the visibility into category-wide pricing dynamics that no individual retailer surfaces.
This case study describes our own product. Avluz is built, operated, and owned by TrueLeaf Tech. Every architectural decision, every technical trade-off, and every operational lesson described here is one we have lived through directly.
India's e-commerce market is enormous and pricing is fluid. The same product can vary by twenty percent or more across retailers on the same day, and that variation moves in patterns that are not obvious to individual shoppers comparing two open browser tabs. Sellers and analysts face an even harder version of the same problem: visibility across the long tail of the catalog is essentially impossible without continuous monitoring.
The product hypothesis was straightforward. If we could ingest, normalise, and store price observations for a large enough catalog with high enough fidelity, we could answer questions that nobody else was answering well: is this a good deal right now? Is this product about to be discounted? Which retailer typically has the best price for this category?
Avluz's foundation is a continuous catalog crawler that touches close to three million products on a rolling basis. The crawler is not a single monolithic process; it is a fleet of stateless workers managed through BullMQ, each responsible for a small slice of the catalog and reporting back into a central job queue. The architecture lets us scale horizontally on demand — when a retailer adds inventory or runs a promotional event, we scale the worker pool, finish the burst, and scale back down.
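As a rough sketch of this partitioning pattern (the slice size, names, and the BullMQ call shown in the comment are illustrative, not Avluz's actual configuration):

```typescript
// Partition a catalog of product IDs into fixed-size slices, one per crawl job.
// Each worker then owns a small slice of the catalog, as described above.
function sliceCatalog(productIds: string[], sliceSize: number): string[][] {
  const slices: string[][] = [];
  for (let i = 0; i < productIds.length; i += sliceSize) {
    slices.push(productIds.slice(i, i + sliceSize));
  }
  return slices;
}

// Each slice would then be enqueued as one BullMQ job, e.g.:
// await crawlQueue.addBulk(
//   sliceCatalog(ids, 500).map(s => ({ name: "crawl-slice", data: { ids: s } }))
// );
```

Because each job carries only its slice, the worker pool can be grown or shrunk freely during promotional bursts without any worker holding catalog-wide state.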
Each price observation is timestamped, normalised against currency and unit conventions, and written to a time-series structure that lets us reconstruct any product's full price history in milliseconds. The deduplication discipline is important: a product that appears on three retailers with three different SKU patterns has to be matched to a single canonical entity, or the price history becomes incoherent.
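A minimal sketch of the normalisation step, with hypothetical field names and a deliberately simplified currency parse:

```typescript
// One normalised price observation. Field names are hypothetical.
interface PriceObservation {
  canonicalId: string; // single entity shared across retailers' SKU patterns
  retailer: string;
  pricePaise: number;  // integer paise avoids floating-point drift
  observedAt: string;  // ISO-8601 timestamp
}

function normaliseObservation(
  canonicalId: string,
  retailer: string,
  rawPrice: string, // e.g. "₹1,299.00" as scraped from a listing
  observedAt: Date,
): PriceObservation {
  // Strip the currency symbol and thousands separators; keep digits and dot.
  const rupees = parseFloat(rawPrice.replace(/[^0-9.]/g, ""));
  return {
    canonicalId,
    retailer,
    pricePaise: Math.round(rupees * 100),
    observedAt: observedAt.toISOString(),
  };
}
```

The key design point is that `canonicalId` is resolved before the write: three retailer SKUs map to one entity, so the time-series store only ever sees a coherent per-product history.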
We landed on a hybrid storage approach after meaningful experimentation. Product metadata and the canonical entity model live in MongoDB, which gives us the flexible schema needed to handle the variety of retailer data formats without painful migrations. Time-series price observations live in a separate, optimised store designed for write-heavy append patterns and fast windowed reads. The choice was driven by the access patterns: most reads need a window of price history for a specific product, and that workload has different performance characteristics from the metadata workload.
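The windowed-read access pattern can be sketched as follows, assuming observations come back sorted by timestamp, as a time-series store would return them; the names are illustrative:

```typescript
// One stored point in a product's price history.
interface Point { t: number; pricePaise: number; }

// Return all observations with fromT <= t < toT.
// Binary search finds the window start, then a forward scan collects
// the window: O(log n + k) on a sorted series.
function priceWindow(series: Point[], fromT: number, toT: number): Point[] {
  let lo = 0, hi = series.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (series[mid].t < fromT) lo = mid + 1;
    else hi = mid;
  }
  const out: Point[] = [];
  for (let i = lo; i < series.length && series[i].t < toT; i++) {
    out.push(series[i]);
  }
  return out;
}
```

This is the shape of query the time-series store is optimised for; the metadata workload in MongoDB never touches it.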
The lessons we drew from this architecture later informed our thinking on database selection more generally.
The public-facing site is built on Next.js, with a deliberate emphasis on server-side rendering and structured data for search engine discoverability. The catalog of nearly three million products represents an enormous SEO surface area, and every product page needs to render quickly, be indexable by search engines, and stay current as prices change.
We invested heavily in incremental static regeneration patterns, edge caching, and a sitemap strategy designed to keep the entire catalog discoverable without overwhelming our origin infrastructure. The render budget for any product page is under 300 milliseconds at the edge for cached pages, and under 1.2 seconds for cold renders.
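On the sitemap side, one common approach (sketched here with a hypothetical domain) is to shard the catalog into files of at most 50,000 URLs, the cap set by the sitemaps.org protocol, all referenced from a single sitemap index:

```typescript
// Build the list of sitemap file URLs needed to cover a catalog,
// at most 50,000 entries per file per the sitemaps.org protocol.
function sitemapIndex(
  baseUrl: string,
  totalUrls: number,
  perFile: number = 50_000,
): string[] {
  const files = Math.ceil(totalUrls / perFile);
  return Array.from({ length: files }, (_, i) => `${baseUrl}/sitemap-${i}.xml`);
}

// A catalog of ~3 million products resolves to 60 sitemap files.
```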
The architectural insight was that the catalog is too big to render eagerly and too valuable to render lazily. The answer was both, with careful boundaries between them.
Early prototypes attempted real-time price refresh on demand — when a user viewed a product, we'd re-crawl the price. This was unworkable at scale; it created unpredictable latency, hammered the retailers we crawled, and produced an unsustainable cost profile. We moved to a tiered refresh model: high-traffic products are refreshed more frequently, long-tail products less often, with the refresh tier itself being a function of recent observation patterns.
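A tiered refresh policy of this shape can be sketched as a simple mapping from recent traffic to a crawl interval; the tier boundaries and intervals below are invented for illustration, not Avluz's production values:

```typescript
// Map a product's recent view count to a refresh interval in hours.
// Tier boundaries and intervals are hypothetical examples.
function refreshIntervalHours(viewsLast7Days: number): number {
  if (viewsLast7Days >= 1000) return 1;  // hot: refresh hourly
  if (viewsLast7Days >= 50) return 6;    // warm: a few times a day
  if (viewsLast7Days >= 1) return 24;    // cool: daily
  return 72;                             // long tail: every three days
}
```

Because the tier is recomputed from recent observation patterns, a long-tail product that suddenly attracts traffic migrates into a faster tier on its own, with no manual curation.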
It would be technically possible to crawl more products than we do today. We have deliberately chosen not to: the marginal product added to the catalog brings lower data quality, higher operational cost, and less user value than spending the same effort improving the products we already cover. Catalog quality has consistently won over catalog quantity in our optimisation decisions.
The early versions of Avluz treated all products identically. The current version recognises that price dynamics in electronics are different from price dynamics in groceries, which are different again from fashion. Category-specific models for "what counts as a good deal" produce noticeably better recommendations than a single global model. The cost is more engineering complexity; the value is more accurate signal for users.
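A category-aware deal model can be sketched as per-category discount thresholds against a trailing median; the thresholds below are invented for illustration only:

```typescript
// Minimum discount (vs. trailing 90-day median) for a price to count
// as a "good deal" in each category. Values are hypothetical examples.
const dealThreshold: Record<string, number> = {
  electronics: 0.15, // prices move often; 15% off is already meaningful
  fashion: 0.40,     // deep markdowns are routine; demand a bigger cut
  groceries: 0.10,
};

function isGoodDeal(
  category: string,
  currentPaise: number,
  median90dPaise: number,
): boolean {
  const threshold = dealThreshold[category] ?? 0.20; // global fallback
  return currentPaise <= median90dPaise * (1 - threshold);
}
```

The same 15% markdown that flags a deal in electronics is deliberately ignored in fashion, which is exactly the signal-quality gain a single global model cannot provide.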
The most useful lessons from building Avluz have been operational rather than algorithmic.
Avluz is more than a product. It is also the proving ground for the engineering disciplines we apply on client engagements. The patterns we use for large-scale data ingestion, the architectural choices we recommend for catalog-style products, and the SEO infrastructure we set up for high-volume content sites — all of these are sharper because we have run them ourselves at production scale, every day, for years.
If you are building something with similar characteristics — a large catalog, continuous data ingestion, heavy SEO requirements, or a need for fast windowed analytics over event data — the lessons from Avluz are directly applicable. Get in touch if you'd like to discuss how they might apply to what you're building.
Work with us
If something in this case study resonates with what you're trying to build — or if you'd like to talk through a related problem — we'd be glad to spend a half-hour helping you think it through.
Start a conversation →
The platform indexes close to three million products across India's major online retailers, with continuous refresh cycles that scale based on traffic patterns and price volatility for each product.
Avluz runs on Node.js services for crawling and orchestration via BullMQ, MongoDB for product metadata, a specialised time-series store for price observations, and Next.js for the public-facing application. The deployment runs on a combination of self-managed infrastructure and edge CDN for global performance.
The underlying patterns — continuous catalog ingestion, time-series observation storage, SEO-aware rendering of large catalogs, and category-aware analytics — apply directly to any product that needs to monitor and surface trends over a large data surface. We have used variations of this architecture on several client engagements.