Cursor pagination keeps lists stable when data changes. Learn why offset paging breaks with inserts and deletes, and how to implement clean cursors.

You open a feed, scroll a bit, and everything feels normal until it doesn’t. You see the same item twice. Something you swear was there is missing. A row you were about to tap shifts down, and you land on the wrong detail page.
These are user-visible bugs, even if your API responses look “correct” in isolation. The usual symptoms are easy to spot:
- the same item shows up twice as you scroll
- items you already saw quietly disappear
- rows shift position mid-scroll, so taps land on the wrong item
This gets worse on mobile. People pause, switch apps, lose connectivity, then continue later. During that time, new items arrive, old ones are deleted, and some get edited. If your app keeps asking for “page 3” using an offset, page boundaries can shift while the user is mid-scroll. The result is a feed that feels unstable and untrustworthy.
The goal is simple: once a user starts scrolling forward, the list should behave like a snapshot. New items can exist, but they shouldn’t reshuffle what the user is already paging through. The user should get a smooth, predictable sequence.
No pagination method is perfect. Real systems have concurrent writes, edits, and multiple sort options. But cursor pagination is usually safer than offset pagination because it pages from a specific position in a stable order, instead of from a moving row count.
Offset pagination is the “skip N, take M” way to page through a list. You tell the API how many items to skip (offset) and how many to return (limit). With limit=20, you get 20 items per page.
Conceptually:
GET /items?limit=20&offset=0   (first page)
GET /items?limit=20&offset=20  (second page)
GET /items?limit=20&offset=40  (third page)

The response usually includes the items plus enough info to request the next page.
{
  "items": [
    {"id": 101, "title": "..."},
    {"id": 100, "title": "..."}
  ],
  "limit": 20,
  "offset": 20,
  "total": 523
}
It’s popular because it maps neatly to tables, admin lists, search results, and simple feeds. It’s also easy to implement with SQL using LIMIT and OFFSET.
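As a sketch (assuming node-postgres and a hypothetical items table), the query is one LIMIT/OFFSET away:

import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

// Offset paging: skip `offset` rows, take `limit` rows.
async function fetchOffsetPage(limit: number, offset: number) {
  const { rows } = await pool.query(
    `SELECT id, title, created_at
       FROM items
      ORDER BY created_at DESC, id DESC
      LIMIT $1 OFFSET $2`,
    [limit, offset]
  );
  return rows;
}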
The catch is the hidden assumption: the dataset stays still while the user is paging. In real apps, new rows are inserted, rows are deleted, and sort keys change. That’s where the “mystery bugs” start.
Offset pagination assumes the list stays still between requests. But real lists move. When the list shifts, an offset like “skip 20” no longer points to the same items.
Imagine a feed sorted by created_at desc (newest first), page size 3.
You load page 1 with offset=0, limit=3 and get [A, B, C].
Now a new item X is created and appears at the top. The list is now [X, A, B, C, D, E, F, ...]. You load page 2 with offset=3, limit=3. The server skips [X, A, B] and returns [C, D, E].
You just saw C again (a duplicate), and later you’ll miss an item because everything shifted down.
Deletes cause the opposite failure. Start with [A, B, C, D, E, F, ...]. You load page 1 and see [A, B, C]. Before page 2, B is deleted, so the list becomes [A, C, D, E, F, ...]. Page 2 with offset=3 skips [A, C, D] and returns [E, F, G]. D becomes a gap you never fetch.
In newest-first feeds, inserts happen at the top, which is exactly what shifts every later offset.
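You can reproduce the whole failure in a few lines with a toy in-memory list (newest first):

// Toy newest-first feed; offset paging while an insert lands at the top.
let feed = ["A", "B", "C", "D", "E", "F"];
const page = (offset: number, limit: number) => feed.slice(offset, offset + limit);

console.log(page(0, 3)); // page 1: [ 'A', 'B', 'C' ]
feed = ["X", ...feed];   // a new item arrives at the top
console.log(page(3, 3)); // page 2: [ 'C', 'D', 'E' ] -- C repeats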
A “stable list” is what users expect: as they scroll forward, items don’t jump around, repeat, or vanish for no clear reason. It’s less about freezing time and more about making paging predictable.
Two ideas often get mixed together:
- A frozen snapshot: the data literally doesn’t change while the user pages, which real systems rarely guarantee.
- Deterministic ordering: a stable sort (created_at with a tie-breaker like id) so two requests with the same inputs return the same order.

Refresh and scroll-forward are different actions. Refresh means “show me what’s new right now,” so the top can change. Scroll-forward means “keep going from where I was,” so you should not see repeats or unexpected gaps caused by shifting page boundaries.
A simple rule that prevents most pagination bugs: scrolling forward should never show repeats.
Cursor pagination moves through a list using a bookmark instead of a page number. Rather than “give me page 3,” the client says “continue from here.”
The contract is straightforward:
- the client sends a limit and an optional cursor (no cursor means “first page”)
- the server returns the items plus a next_cursor marking where the list left off
- to continue, the client sends that next_cursor back unchanged
This tolerates inserts and deletes better because the cursor anchors to a position in the sorted list, not to a row count.
The non-negotiable requirement is a deterministic sort order. You need a stable ordering rule and a consistent tie-breaker, otherwise the cursor isn’t a reliable bookmark.
Start by picking one sort order that matches how people read the list. Feeds, messages, and activity logs are usually newest first. Histories like invoices and audit logs are often easier oldest first.
A cursor must uniquely identify a position in that order. If two items can share the same cursor value, you will eventually get duplicates or gaps.
Common choices and what to watch for:
- created_at only: simple, but unsafe if many rows share the same timestamp.
- id only: safe if IDs are monotonic, but it might not match the product order you want.
- created_at + id: usually the best mix (timestamp for product order, id as a tie-breaker).
- updated_at as the primary sort: risky for infinite scroll because edits can move items between pages.

If you offer multiple sort options, treat each sort mode as a different list with its own cursor rules. A cursor only makes sense for one exact ordering.
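A quick way to see why the tie-breaker matters: a comparator over (created_at DESC, id DESC) never treats two different rows as equal, even when timestamps collide (a toy sketch; IDs compared as strings):

type Item = { id: string; created_at: string };

// Sort newest first; fall back to id so ties still have one fixed order.
const byNewest = (a: Item, b: Item) =>
  b.created_at.localeCompare(a.created_at) || b.id.localeCompare(a.id);

const items: Item[] = [
  { id: "2", created_at: "2026-01-16T10:00:00Z" },
  { id: "3", created_at: "2026-01-16T10:00:00Z" }, // same timestamp as id 2
  { id: "1", created_at: "2026-01-16T09:59:00Z" },
];
console.log(items.sort(byNewest).map((i) => i.id)); // always [ '3', '2', '1' ]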
You can keep the API surface small: two inputs, two outputs.
Send a limit (how many items you want) and an optional cursor (where to continue from). If the cursor is missing, the server returns the first page.
Example request:
GET /api/messages?limit=30&cursor=eyJjcmVhdGVkX2F0IjoiMjAyNi0wMS0xNlQxMDowMDowMFoiLCJpZCI6Ijk4NzYifQ==
Return the items and a next_cursor. If there is no next page, return next_cursor: null. Clients should treat the cursor as a token, not something to edit.
{
  "items": [
    {"id": "9876", "created_at": "2026-01-16T10:00:00Z", "subject": "..."}
  ],
  "next_cursor": "...",
  "has_more": true
}
Server-side logic in plain words: sort in a stable order, filter using the cursor, then apply the limit.
If you sort newest first by (created_at DESC, id DESC), decode the cursor into (created_at, id), then fetch rows where (created_at, id) is strictly less than the cursor pair, apply the same order, and take limit rows.
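In Postgres that maps to a row-value comparison (a sketch reusing the pool above; the messages table is illustrative):

type Cursor = { created_at: string; id: string };

// Keyset query: rows strictly after the cursor in (created_at DESC, id DESC).
async function fetchAfter(cursor: Cursor, limit: number) {
  const { rows } = await pool.query(
    `SELECT id, created_at, subject
       FROM messages
      WHERE (created_at, id) < ($1, $2)
      ORDER BY created_at DESC, id DESC
      LIMIT $3`,
    [cursor.created_at, cursor.id, limit]
  );
  return rows;
}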
You can encode the cursor as a base64 JSON blob (easy) or as a signed/encrypted token (more work). Opaque is safer because it lets you change internals later without breaking clients.
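A minimal encode/decode pair for the base64 option (Node’s Buffer; treat the token as opaque on the client):

// Encode the position of the last returned row; clients echo it back verbatim.
const encodeCursor = (c: Cursor): string =>
  Buffer.from(JSON.stringify(c)).toString("base64");

const decodeCursor = (token: string): Cursor =>
  JSON.parse(Buffer.from(token, "base64").toString("utf8"));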
Also set sane defaults: a reasonable mobile default (often 20-30), a web default (often 50), and a hard server max so one buggy client can’t request 10,000 rows.
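Clamping is one line; the defaults below follow the ranges above, and the hard max is an assumed value to tune per endpoint:

const DEFAULT_LIMIT = 30; // mobile-friendly default
const MAX_LIMIT = 100;    // hard server cap (assumption)

const clampLimit = (requested?: number): number =>
  Math.min(Math.max(1, Math.floor(requested ?? DEFAULT_LIMIT)), MAX_LIMIT);

console.log(clampLimit(10_000)); // 100, not 10,000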
A stable feed is mostly about one promise: once the user starts scrolling forward, the items they haven’t seen yet shouldn’t bounce around because someone else created, deleted, or edited records.
With cursor pagination, inserts are the easiest case to handle. New records should show up on refresh, not in the middle of already-loaded pages. If you order by created_at DESC, id DESC, new items naturally sort before the first page, so your existing cursor continues into older items.
Deletes shouldn’t reshuffle the list. If an item is deleted, it simply won’t be returned when you would have fetched it. If you need to keep page sizes consistent, keep fetching until you collect limit visible items.
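If visibility is filtered after the query (soft deletes, blocked users), a short loop backfills the page; fetchAfter and encodeCursor are the sketches above, and isVisible is a hypothetical check:

type Row = { id: string; created_at: string; subject: string };
const isVisible = (row: Row): boolean => true; // hypothetical visibility rule

async function fetchVisiblePage(cursor: Cursor, limit: number) {
  const items: Row[] = [];
  let position = cursor;
  let exhausted = false;
  while (items.length < limit && !exhausted) {
    const rows: Row[] = await fetchAfter(position, limit - items.length);
    if (rows.length === 0) { exhausted = true; break; }
    items.push(...rows.filter(isVisible)); // drop rows hidden since insert
    const last = rows[rows.length - 1];    // advance past every fetched row
    position = { created_at: last.created_at, id: last.id };
  }
  return { items, next_cursor: exhausted ? null : encodeCursor(position) };
}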
Edits are where teams accidentally reintroduce bugs. The key question is: can an edit change the sort position?
Snapshot-style behavior is usually best for scrolling lists: page by an immutable key like created_at. Edits can change the content, but the item doesn’t jump to a new position.
Live-feed behavior sorts by something like edited_at. That can cause jumps (an old item gets edited and moves near the top). If you choose this, treat the list as constantly changing and design the UX around refresh.
Don’t make the cursor depend on “find this exact row.” Encode the position instead, like {created_at, id} of the last returned item. Then the next query is based on values, not on row existence:
- filter with WHERE (created_at, id) < (:created_at, :id)
- keep the comparison strict and include the tie-breaker (id) to avoid duplicates

Forward paging is the easy part. The trickier UX questions are backward paging, refresh, and random access.
For backward paging, two approaches tend to work:
- Keep pages the client already loaded in memory, so scrolling back replays local data instead of re-querying.
- Issue cursors in both directions (a next_cursor for older items and a prev_cursor for newer items) while keeping one on-screen sort order; a sketch follows below.

Random jumping is harder with cursors because “page 20” doesn’t have a stable meaning when the list changes. If you truly need jumping, jump to an anchor like “around this timestamp” or “starting from this message id,” not a page index.
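Here is the prev_cursor direction from the second option (same pool and messages table as the sketches above): flip both the comparison and the sort, then reverse in memory so the page still renders newest first.

// Walk toward newer items, then restore the on-screen DESC order.
async function fetchBefore(cursor: Cursor, limit: number) {
  const { rows } = await pool.query(
    `SELECT id, created_at, subject
       FROM messages
      WHERE (created_at, id) > ($1, $2)
      ORDER BY created_at ASC, id ASC
      LIMIT $3`,
    [cursor.created_at, cursor.id, limit]
  );
  return rows.reverse();
}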
On mobile, caching matters. Store cursors per list state (query + filters + sort), and treat each tab/view as its own list. That prevents “switch tabs and everything scrambles” behavior.
Most cursor pagination issues aren’t about the database. They come from small inconsistencies between requests that only show up under real traffic.
The biggest offenders:
- a non-deterministic sort (for example, created_at alone), so ties produce duplicates or missing items
- an inclusive comparison (<= instead of <) that repeats the boundary row
- a next_cursor that doesn’t match the last item actually returned

If you build apps on platforms like Koder.ai, these edge cases show up fast because web and mobile clients often share the same endpoint. Having one explicit cursor contract and one deterministic ordering rule keeps both clients consistent.
Before calling pagination “done,” verify the behavior under inserts, deletes, and retries.
- the sort order is deterministic, with a tie-breaker (created_at + id, not created_at alone)
- the cursor comparison is strict (< or >), never inclusive
- next_cursor is taken from the last returned row
- limit has a safe max and a documented default

For refresh, pick one clear rule: either users pull to refresh to fetch newer items at the top, or you periodically check “anything newer than my first item?” and show a “New items” button. Consistency is what makes the list feel stable instead of haunted.
Picture a support inbox that agents use on the web, while a manager checks the same inbox on mobile. The list is sorted by newest first. People expect one thing: when they scroll forward, items don’t jump around, repeat, or disappear.
With offset paging, an agent loads page 1 (items 1-20), then scrolls to page 2 (offset=20). While they’re reading, two new messages arrive at the top. Now offset=20 points to a different place than it did a second ago. The user sees duplicates or misses messages.
With cursor pagination, the app asks for “the next 20 items after this cursor,” where the cursor is based on the last item the user actually saw (commonly (created_at, id)). New messages can arrive all day, but the next page still starts right after the last message the user saw.
A simple way to test before shipping:
- load page 1, insert a new item at the top, then fetch page 2 with the saved cursor and check for repeats
- delete a row you haven’t reached yet, keep paging, and check that nothing else goes missing
- retry the same cursor request and confirm it returns the same items
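The first check fits in a few lines; fetchPage and createItem are hypothetical test helpers wrapping the endpoint:

import assert from "node:assert";

const ids = (items: { id: string }[]) => items.map((i) => i.id);

const page1 = await fetchPage({ limit: 20 });       // hypothetical helper
await createItem({ subject: "lands at the top" });  // hypothetical helper
const page2 = await fetchPage({ limit: 20, cursor: page1.next_cursor });

// Scroll-forward must never repeat: pages should not overlap.
const overlap = ids(page2.items).filter((id) => ids(page1.items).includes(id));
assert.deepStrictEqual(overlap, [], "duplicate items across pages");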
If you’re prototyping fast, Koder.ai can help you scaffold the endpoint and client flows from a chat prompt, then iterate safely using Planning Mode plus snapshots and rollback when a pagination change surprises you in testing.
Offset pagination points to “skip N rows,” so when new rows are inserted or old rows are deleted, the row count shifts. The same offset can suddenly refer to different items than it did a moment ago, which creates duplicates and gaps for users mid-scroll.
Cursor pagination uses a bookmark that represents “the position after the last item I saw.” The next request continues from that position in a deterministic order, so inserts at the top and deletes in the middle don’t move your page boundary the way offsets do.
Use a deterministic sort with a tie-breaker, most commonly (created_at, id) in the same direction. created_at gives the product-friendly order, and id makes each position unique so you don’t repeat or skip items when timestamps collide.
Sorting by updated_at can cause items to jump between pages when they’re edited, which breaks the “stable scroll forward” expectation. If you need a live “most recently updated” view, design the UI to refresh and accept reordering instead of promising a steady infinite scroll.
Return an opaque token as next_cursor and have the client send it back unchanged. A simple approach is encoding the last item’s (created_at, id) into a base64 JSON blob, but treating it as an opaque value is the important part so you can change internals later.
Build the next query from the cursor values, not from “find this exact row.” If the last item was deleted, the stored (created_at, id) still defines a position, so you can safely continue with a “strictly less than” (or “greater than”) filter in the same order.
Use a strict comparison and a unique tie-breaker, and always take the cursor from the last item you actually returned. Most repeat bugs come from using <= instead of <, omitting the tie-breaker, or generating next_cursor from the wrong row.
Pick one clear rule: refresh loads newer items at the top, while scroll-forward continues into older items from the existing cursor. Don’t mix “refresh semantics” into the same cursor flow, or users will see reordering and think the list is unreliable.
A cursor is only valid for one exact ordering and one set of filters. If the client changes sort mode, search query, or filters, it must start a new pagination session with no cursor and store cursors separately per list state.
Cursor pagination is great for sequential browsing but not for stable “page 20” jumps because the dataset can change. If you need jumping, jump to an anchor like “around this timestamp” or “starting after this id,” and then paginate with cursors from there.