Pagination
Offset/limit pagination, bounded, consistent across every list endpoint.
Purpose
Every list endpoint is bookmarkable and predictable. A user who copies a URL with ?offset=500&limit=30 gets the same results on another machine. Cursor pagination is rejected for this reason (see AgnosticUI.md §12).
Contract reference: API.md §2.
Implementation
PaginationQueryDto
// src/common/dto/pagination-query.dto.ts
import { Type } from 'class-transformer';
import { IsInt, Max, Min } from 'class-validator';
export class PaginationQueryDto {
  @Type(() => Number) @IsInt() @Min(0) @Max(10000)
  offset: number = 0;

  @Type(() => Number) @IsInt() @Min(1) @Max(200)
  limit: number = 30;
}
Other query DTOs extend this:
export class ContentSearchQueryDto extends PaginationQueryDto {
  @IsOptional() @IsString() @MinLength(2) @MaxLength(200) q?: string;
  @IsOptional() @IsString() group?: string;
  // ...
}
The bounds are compile-time constants for now; if overrides are ever needed, they can move to config (PAGINATION_MAX_LIMIT, PAGINATION_MAX_OFFSET).
Service helper
// src/common/dto/paginated.dto.ts
import { PaginationQueryDto } from './pagination-query.dto';

export interface Paginated<T> {
  items: T[];
  pagination: { offset: number; limit: number; total: number; hasMore: boolean };
}

export const paginated = <T>(items: T[], total: number, q: PaginationQueryDto): Paginated<T> => ({
  items,
  pagination: {
    offset: q.offset,
    limit: q.limit,
    total,
    hasMore: q.offset + items.length < total,
  },
});
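To see the hasMore arithmetic concretely, here is the helper applied to a middle page and a final short page. The types are restated minimally so the snippet stands alone; the item values are placeholders.

```typescript
// Self-contained restatement of the helper for illustration only.
interface Paginated<T> {
  items: T[];
  pagination: { offset: number; limit: number; total: number; hasMore: boolean };
}

const paginated = <T>(items: T[], total: number, q: { offset: number; limit: number }): Paginated<T> => ({
  items,
  pagination: { offset: q.offset, limit: q.limit, total, hasMore: q.offset + items.length < total },
});

// Middle page: 30 items starting at 30 of 100. 30 + 30 < 100, so hasMore is true.
const middle = paginated([...Array(30).keys()], 100, { offset: 30, limit: 30 });

// Last page: only 10 items come back starting at 90. 90 + 10 === 100, so hasMore is false.
const last = paginated([...Array(10).keys()], 100, { offset: 90, limit: 30 });
```

Using `items.length` rather than `limit` in the hasMore check is what makes the short final page report `hasMore: false` correctly.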
Every list service:
async search(q: ContentSearchQueryDto): Promise<Paginated<ContentDto>> {
  const filter = buildFilter(q);
  const [items, total] = await Promise.all([
    this.model.find(filter).sort(this.sort(q)).skip(q.offset).limit(q.limit).lean(),
    this.model.countDocuments(filter),
  ]);
  return paginated(items.map(toDto), total, q);
}
Promise.all runs the page query and the count in parallel, so the combined wait is the slower of the two rather than their sum; in practice the count usually dominates.
Sort stability
_id is always the final tiebreaker so offset pagination is stable across concurrent writes. The service layer enforces this:
private sort(q: ContentSearchQueryDto): Record<string, 1 | -1> {
  const base = SORT_MAP[q.sort ?? 'newest'];
  return { ...base, _id: -1 };
}
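A sketch of how the tiebreaker composes with a named sort. The SORT_MAP entries below are hypothetical examples (the real map lives alongside the service and its keys come from the API contract); the point is that the spread keeps the named sort's keys first and `_id` lands last.

```typescript
// Hypothetical SORT_MAP for illustration; real keys/fields may differ.
const SORT_MAP: Record<string, Record<string, 1 | -1>> = {
  newest: { approvedAt: -1 },
  oldest: { approvedAt: 1 },
};

function sortSpec(sort?: string): Record<string, 1 | -1> {
  // Spread preserves insertion order: named sort keys first, _id appended
  // as the final tiebreaker.
  return { ...SORT_MAP[sort ?? 'newest'], _id: -1 };
}
```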
Without the tiebreaker, two documents with equal sort keys could swap between pages.
Count strategy
- Default: `countDocuments(filter)`: accurate; scans the indexed filter.
- Large unfiltered lists: an optional `?estimate=true` uses `estimatedDocumentCount()` (constant-time but approximate).
- `total: -1` is reserved for "count unavailable"; avoid returning it unless `estimate=true`.
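The selection between the two counters can be sketched as follows. The model interface mirrors the Mongoose method names used above; the wiring (`totalFor`, the estimate guard) is hypothetical.

```typescript
// Sketch only: a Mongoose-like model exposing the two count methods.
interface CountableModel {
  countDocuments(filter: object): Promise<number>;
  estimatedDocumentCount(): Promise<number>;
}

async function totalFor(model: CountableModel, filter: object, estimate: boolean): Promise<number> {
  // estimatedDocumentCount ignores the filter entirely, so it is only safe
  // when the list is unfiltered and an approximate total is acceptable.
  if (estimate && Object.keys(filter).length === 0) {
    return model.estimatedDocumentCount();
  }
  return model.countDocuments(filter);
}
```

Guarding on an empty filter keeps `?estimate=true` from silently returning a whole-collection count for a filtered search.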
Required variables and services
- None directly. Uses the global `ValidationPipe` configured in `main.ts` (already in place).
Optional config for future overrides:
| Env | Default | Purpose |
|---|---|---|
| PAGINATION_DEFAULT_LIMIT | 30 | Default `limit` when omitted |
| PAGINATION_MAX_LIMIT | 200 | Upper bound on `limit` |
| PAGINATION_MAX_OFFSET | 10000 | Upper bound on `offset` |
Gotchas
- `@Type(() => Number)` is required for class-transformer to coerce the query string `'30'` into the number `30`. Without it, `@IsInt()` fails on string inputs.
- Count is not free. For filters not covered by an index, `countDocuments` is a collection scan. Every filter field used in search must be indexed (see TAXONOMY.md §7).
- Deep offsets are slow even with an index: Mongo still walks N index entries to reach offset N. The 10K cap deliberately bounds this cost; beyond it, we refuse the query.
- Jumpy pagination under load. A new document matching the filter can shift everything by one position, making the last item on page K reappear as the first item on page K+1. That is the tradeoff of offset/limit; accept it for v1.
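The page-boundary duplicate described in the last gotcha can be simulated with plain arrays; the sort keys here are invented.

```typescript
// Simulation of the gotcha: a concurrent insert between two page fetches
// shifts the offset window, so one item straddles the boundary.
let feed = [5, 4, 3, 2, 1];       // sort keys, newest first

const pageOne = feed.slice(0, 2); // [5, 4]
feed = [6, ...feed];              // a newer doc arrives before page 2 is fetched
const pageTwo = feed.slice(2, 4); // [4, 3]: item 4 is served twice
```

A cursor (keyset) scheme would avoid the duplicate, but at the cost of the bookmarkable URLs the Purpose section requires.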
Testing
- Unit: `paginated`: given `items`, `total`, and `q`, assert the output shape, including the `hasMore` math.
- Unit: `sort()` helper: ensure every mapped sort includes `_id`.
- Integration: seed 100 docs; request `offset=0&limit=30`, then `offset=30&limit=30`, then `offset=99&limit=30`; assert the last page has `items.length === 1` and `hasMore === false`.
- Integration: request `offset=-1`; assert 400 `pagination.invalid`.
- Integration: request `limit=999`; assert 400.
- Integration (stability): seed 5 docs with identical `approvedAt`; walk all pages of size 2; assert each document appears exactly once across pages.
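The stability check can be sketched in plain TypeScript, independent of Mongo: five docs share the same `approvedAt`, the sort falls through to the `_id` tiebreaker, and walking pages of 2 must visit each doc exactly once. The ids and timestamp are invented.

```typescript
// Sketch of the stability test's core assertion, no database involved.
const docs = ['a', 'b', 'c', 'd', 'e'].map((id) => ({ _id: id, approvedAt: 1700000000 }));

// approvedAt desc, then _id desc: mirrors { approvedAt: -1, _id: -1 }.
const sorted = [...docs].sort(
  (x, y) => y.approvedAt - x.approvedAt || y._id.localeCompare(x._id),
);

// Walk pages of 2 and record every _id seen.
const seen = new Set<string>();
for (let offset = 0; offset < sorted.length; offset += 2) {
  for (const doc of sorted.slice(offset, offset + 2)) seen.add(doc._id);
}
// With the tiebreaker, the 5 docs appear exactly once across the 3 pages.
```

Without the `_id` fallback the comparator would return 0 for every pair, and the engine's sort order between equal keys would be unspecified between "queries", which is exactly the duplication the integration test guards against.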