UniLink uses cursor-based pagination so you can reliably fetch thousands of records across multiple requests without missing or duplicating data.
- List endpoints return a meta object with next_cursor — pass it as the cursor query parameter to get the next page.
- Default page size is 20 items; set limit up to 100 per request for faster bulk fetches.
- When next_cursor is null, you have fetched all available records.
When you call a UniLink list endpoint — such as GET /contacts or GET /orders — the API does not return all records at once. Large data sets are split into pages to keep response times fast and prevent timeouts. UniLink uses cursor-based pagination rather than traditional page-number pagination, which means the results are stable and consistent even if new records are created while you are mid-way through fetching a large data set. Understanding how cursors work saves you from common bugs like duplicated or skipped records in bulk exports.
What Pagination Does
Cursor-based pagination works by encoding a position in the data set into an opaque string called a cursor. When you request the first page of a list endpoint, you do not pass a cursor — the API returns the first batch of results and, if more records exist, includes a next_cursor value in the meta object. On your next request, you pass that cursor value as the cursor query parameter. The API decodes the cursor, finds the exact position in the data set where the last page ended, and returns the next batch from that point forward.
The primary advantage of cursor pagination over offset pagination is stability. With offset pagination (?page=2&per_page=20), if a new record is inserted while you are fetching page 3, every subsequent record shifts down by one and you either miss or duplicate records at the boundary. Cursor pagination avoids this because the cursor encodes a stable position — usually based on a unique ID or timestamp — so new records never disrupt an in-progress fetch. This matters especially for export workflows that run over minutes or hours.
Every list response from the UniLink API includes a meta object at the top level alongside the data array. The meta object contains total (the total number of matching records), count (the number of records in the current page), next_cursor (the cursor for the next page, or null if this is the last page), and prev_cursor (the cursor for the previous page, or null if this is the first page). You can use total to show progress in a UI or estimate completion time in a long-running export script.
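To make this concrete, here is a minimal Python sketch that requests the first page and reads those meta fields. The Bearer-style Authorization header is an assumption of this sketch; the endpoint and the meta field names are the ones documented above.

```python
import requests

BASE_URL = "https://unilink.us/api/v1"
HEADERS = {"Authorization": "Bearer your_api_key"}  # auth scheme is an assumption

# First page: no cursor parameter at all.
resp = requests.get(f"{BASE_URL}/contacts", headers=HEADERS)
resp.raise_for_status()
body = resp.json()

# Every list response carries a meta object alongside the data array, e.g.:
#   {"data": [...], "meta": {"total": 4821, "count": 20,
#                            "next_cursor": "eyJpZCI6MTIzfQ", "prev_cursor": null}}
meta = body["meta"]
print(f"{meta['count']} of {meta['total']} records; next_cursor={meta['next_cursor']}")
```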
How to Get Started
- Make a basic list request without a cursor: GET https://unilink.us/api/v1/contacts. Inspect the response and locate the meta object — it contains total, count, next_cursor, and prev_cursor.
- Check whether next_cursor is null. If it is, the first response contains all available records. If it contains a string value, there are more pages to fetch.
- Make a second request passing the cursor: GET https://unilink.us/api/v1/contacts?cursor=eyJpZCI6MTIzfQ. The API returns the next batch starting immediately after where the previous page ended.
- Repeat until next_cursor is null. This is the termination condition for your pagination loop.
- Increase the page size with ?limit=100 to reduce the number of round trips needed for large data sets. The maximum is 100 records per request. The full sequence is condensed in the sketch below.
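As a sketch of those steps in Python (the auth header is assumed, as before):

```python
import requests

BASE_URL = "https://unilink.us/api/v1"
HEADERS = {"Authorization": "Bearer your_api_key"}  # auth scheme is an assumption

# Step 1: first page, no cursor parameter.
page = requests.get(f"{BASE_URL}/contacts", headers=HEADERS, params={"limit": 100}).json()

# Steps 2-4: keep passing next_cursor back until it is null (None in Python).
while page["meta"]["next_cursor"] is not None:
    page = requests.get(
        f"{BASE_URL}/contacts",
        headers=HEADERS,
        params={"cursor": page["meta"]["next_cursor"], "limit": 100},
    ).json()
    # records from each page (page["data"]) would be collected here
```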
How to Fetch All Pages in a Loop
- JavaScript example: Initialize let cursor = null and let allRecords = []. In a do...while loop, fetch /contacts?limit=100, appending &cursor=${cursor} only after a cursor has been returned — template the URL carefully so the first request omits the cursor parameter rather than sending the literal string "null". Append response.data to allRecords and set cursor = response.meta.next_cursor. Continue while cursor !== null.
- Python example: Initialize cursor = None and all_records = []. In a while True loop, pass params={'cursor': cursor, 'limit': 100} (the requests library drops parameters whose value is None, so the first request omits the cursor automatically). Append response['data'] to all_records. Set cursor = response['meta']['next_cursor'] and break if cursor is None.
- Rate limit awareness: Check X-RateLimit-Remaining after each page request. If it approaches zero, pause until X-RateLimit-Reset before fetching the next page.
- Error handling: Wrap each page request in a try/catch. On a network error or 5xx response, retry with exponential backoff using the same cursor — do not advance to the next cursor until a page succeeds.
- Progress tracking: Use meta.total from the first response to calculate and log progress: fetched / total * 100. This is especially useful for long-running exports with tens of thousands of records. All five points are combined in the runnable sketch below.
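Here is one way the full loop might look in Python. The Bearer auth header, the fetch_all_contacts helper name, and the reading of X-RateLimit-Reset as a Unix timestamp are assumptions of this sketch; the endpoint, meta fields, and rate-limit header names come from this guide.

```python
import time
import requests

BASE_URL = "https://unilink.us/api/v1"
HEADERS = {"Authorization": "Bearer your_api_key"}  # auth scheme is an assumption


def fetch_all_contacts(limit=100, max_retries=5):
    """Fetch every contact page by page, with retries, rate-limit pauses, and progress logs."""
    all_records = []
    cursor = None
    total = None

    while True:
        params = {"limit": limit}
        if cursor is not None:
            params["cursor"] = cursor  # omitted entirely on the first request

        # Retry the *same* cursor with exponential backoff on transient failures;
        # never advance the cursor until a page succeeds.
        for attempt in range(max_retries):
            try:
                resp = requests.get(f"{BASE_URL}/contacts", headers=HEADERS,
                                    params=params, timeout=30)
                if resp.status_code >= 500:
                    raise requests.HTTPError(f"server error {resp.status_code}")
                resp.raise_for_status()
                break
            except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...

        body = resp.json()
        all_records.extend(body["data"])

        if total is None:
            total = body["meta"]["total"]  # progress estimate from the first response
        print(f"fetched {len(all_records)}/{total} ({len(all_records) / max(total, 1):.0%})")

        # Rate-limit awareness: pause when the current window is exhausted.
        if int(resp.headers.get("X-RateLimit-Remaining", 1)) == 0:
            reset_at = float(resp.headers.get("X-RateLimit-Reset", time.time() + 60))
            time.sleep(max(0.0, reset_at - time.time()))

        cursor = body["meta"]["next_cursor"]
        if cursor is None:  # null next_cursor: last page reached
            return all_records
```

Adapting this to JavaScript follows the same shape: a do...while loop that omits the cursor parameter on the first iteration and terminates when meta.next_cursor comes back null.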
Key Settings
| Setting | What It Does | Recommended |
|---|---|---|
| limit parameter | Number of records per page (default 20, max 100) | Use 100 for bulk exports; use 20 for UI-driven pagination |
| cursor parameter | Opaque string encoding the position in the data set | Pass exactly as received — do not modify, encode, or decode |
| meta.next_cursor | Cursor for the next page; null when on the last page | Use null check as the loop termination condition |
| meta.total | Total number of records matching the query | Use to show progress percentage in export UIs |
| Filtering + Cursor | Filters (e.g., ?tag=vip) persist across pages when combined with cursor | Always include the same filter parameters on every paginated request |
Pass the cursor value exactly as it appears in meta.next_cursor, including any special characters. Modifying a cursor produces a 400 Bad Request error.
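In Python's requests library, for example, passing the cursor through the params argument lets the library percent-encode it exactly once; hand-building the query string is where accidental double encoding usually creeps in. A small illustration:

```python
import requests

cursor = "eyJpZCI6MTIzfQ"  # copied verbatim from meta.next_cursor

# Let the HTTP library encode the value exactly once. Avoid calling
# urllib.parse.quote() on the cursor yourself before building the URL,
# since double encoding is a common cause of 400 "invalid cursor" errors.
resp = requests.get(
    "https://unilink.us/api/v1/contacts",
    params={"cursor": cursor},
)
```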
Get the Most Out of Pagination
Use limit=100 for all bulk export workflows. At the default of 20 records per page, fetching 10,000 contacts requires 500 requests. At limit=100 it requires only 100 — a 5x reduction in API calls and request time. On a Free plan with 100 requests per hour, that difference determines whether a full export completes in one hour or five.
Always include your filter parameters on every page request in a paginated loop. Filters like ?tag=newsletter&created_after=2025-01-01 must be repeated on every cursor-based request. The cursor encodes position within the filtered result set, not the filter itself. Omitting filters on page 2 and beyond returns a different, unfiltered result set and breaks your export.
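As a sketch of the pattern (auth omitted for brevity), using the filter parameters from the example above:

```python
import requests

BASE_URL = "https://unilink.us/api/v1"
FILTERS = {"tag": "newsletter", "created_after": "2025-01-01"}  # from the example above

cursor = None
while True:
    # The filters ride along on every single request; the cursor is added once available.
    params = {**FILTERS, "limit": 100}
    if cursor is not None:
        params["cursor"] = cursor
    body = requests.get(f"{BASE_URL}/contacts", params=params).json()
    # ... process body["data"] ...
    cursor = body["meta"]["next_cursor"]
    if cursor is None:
        break
```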
Build resumable export pipelines by persisting the last-seen cursor to storage between runs. If a long-running export fails halfway through, you can resume from the saved cursor instead of starting over. Store the cursor in a database, file, or key-value store after each successful page, and read it at script startup to resume. This pattern turns a fragile, all-or-nothing export into a fault-tolerant, incremental sync.
Cursor values are not permanent — they expire after 24 hours. If your export is paused for more than 24 hours, the cursor becomes invalid and you will receive a 400 error when trying to resume. For exports that may span multiple days, design your data pipeline around filtering by created_at or updated_at timestamps rather than relying on a cursor that may expire before you resume.
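One possible shape for such a resumable pipeline, combining a persisted cursor for short pauses with a timestamp fallback for long ones. The checkpoint file name, the created_at field on each record, and the omitted auth are assumptions of this sketch; the created_after filter and the 24-hour expiry are documented above.

```python
import json
import os
import requests

BASE_URL = "https://unilink.us/api/v1"
CHECKPOINT = "export_checkpoint.json"  # hypothetical checkpoint file


def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"cursor": None, "last_created_at": None}


def save_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)


state = load_checkpoint()

# Filters must be identical on every page, so fix them up front.
base_params = {"limit": 100}
if state["cursor"] is None and state["last_created_at"] is not None:
    # No usable cursor (first run, or paused past the 24-hour expiry):
    # restart from the last processed timestamp instead of from the beginning.
    base_params["created_after"] = state["last_created_at"]

cursor = state["cursor"]
while True:
    params = dict(base_params)
    if cursor is not None:
        params["cursor"] = cursor
    body = requests.get(f"{BASE_URL}/contacts", params=params).json()

    for record in body["data"]:
        # ... process the record, then remember how far we got. A created_at
        # field on each record is an assumption of this sketch.
        state["last_created_at"] = record.get("created_at", state["last_created_at"])

    cursor = body["meta"]["next_cursor"]
    state["cursor"] = cursor
    save_checkpoint(state)  # persist after every successful page
    if cursor is None:
        break
```

A production version would also catch the 400 invalid-cursor error and clear the saved cursor, so a stale checkpoint falls back to the timestamp filter automatically.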
Troubleshooting
| Problem | Cause | Fix |
|---|---|---|
| 400 Bad Request with "invalid cursor" | Cursor was modified, URL-encoded twice, or expired | Pass the cursor value exactly as returned in the response — check for accidental encoding or truncation |
| Duplicate records across pages | Filter parameters changed between page requests | Ensure all filter query parameters are identical on every page request in the loop |
| Loop never terminates | Code checks the wrong field — checking data length instead of next_cursor | Always terminate when meta.next_cursor === null — a page with 0 items in data is not the correct termination signal |
| meta.total seems inaccurate | Records created or deleted mid-export change the total | Use meta.total as an estimate, not an exact count — cursor pagination is consistent but the total reflects current state |
- Cursor pagination is stable — new records inserted mid-export do not cause duplicates or skips
- meta.total on the first response lets you estimate progress and expected request count
- Limit up to 100 records per page reduces request count significantly for large data sets
- Filters combine cleanly with cursors for consistent paginated queries on filtered data
- Cursors expire after 24 hours — exports that pause for too long cannot resume from where they left off
- Cannot jump to an arbitrary page number — must traverse sequentially from the beginning
- Filter parameters must be manually repeated on every request — easy to accidentally omit
How do I know when I have fetched all records?
When meta.next_cursor is null in the response, you have fetched the last page and there are no more records. Use this null check as the loop termination condition in your pagination code.
Can I go backwards through pages using prev_cursor?
Yes — meta.prev_cursor works the same way as next_cursor but traverses backwards. Pass it as the cursor parameter to retrieve the previous page. It is null on the first page.
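For example (auth omitted for brevity), stepping back one page is just another request with the cursor parameter:

```python
import requests

BASE_URL = "https://unilink.us/api/v1"

body = requests.get(f"{BASE_URL}/contacts", params={"cursor": "eyJpZCI6MTIzfQ"}).json()

# prev_cursor is null (None) on the first page; otherwise it fetches the page before this one.
prev = body["meta"]["prev_cursor"]
if prev is not None:
    previous_page = requests.get(f"{BASE_URL}/contacts", params={"cursor": prev}).json()
```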
Why does UniLink use cursor pagination instead of page numbers?
Cursor pagination is stable under concurrent writes. With page number pagination, a record inserted at position 15 while you are fetching page 2 shifts all subsequent records and causes page 3 to start at the wrong position. Cursors encode a stable position that does not shift when records are added or removed.
What is the maximum number of records I can fetch per request?
The maximum limit value is 100 records per request. For endpoints that support bulk operations, you can send larger payloads — but for GET list requests, 100 is the ceiling.
How long are cursors valid?
Cursors expire 24 hours after they are generated. If you pause a paginated export for more than 24 hours, restart from the beginning or use a timestamp-based filter to resume from your last successfully processed record.
- UniLink uses cursor-based pagination — pass meta.next_cursor as the cursor query parameter to fetch the next page.
- Set limit=100 on bulk exports to reduce the number of API requests needed by up to 5x.
- Loop terminates when meta.next_cursor is null — that is the signal you have fetched all records.
- Repeat all filter parameters on every page request — the cursor encodes position, not filters.
- Cursors expire after 24 hours — save your cursor to storage for resumable exports.
Start building with the UniLink API — generate your key at app.unilink.us under Settings → API and explore the full endpoint reference.
