UniLink API Pagination (How to Fetch Large Data Sets)

UniLink uses cursor-based pagination so you can reliably fetch thousands of records across multiple requests without missing or duplicating data.

  • List endpoints return a meta object with next_cursor — pass it as the cursor query parameter to get the next page.
  • Default page size is 20 items; set limit up to 100 per request for faster bulk fetches.
  • When next_cursor is null, you have fetched all available records.

When you call a UniLink list endpoint — such as GET /contacts or GET /orders — the API does not return all records at once. Large data sets are split into pages to keep response times fast and prevent timeouts. UniLink uses cursor-based pagination rather than traditional page-number pagination, which means the results are stable and consistent even if new records are created while you are mid-way through fetching a large data set. Understanding how cursors work saves you from common bugs like duplicated or skipped records in bulk exports.

What Pagination Does

Cursor-based pagination works by encoding a position in the data set into an opaque string called a cursor. When you request the first page of a list endpoint, you do not pass a cursor — the API returns the first batch of results and, if more records exist, includes a next_cursor value in the meta object. On your next request, you pass that cursor value as the cursor query parameter. The API decodes the cursor, finds the exact position in the data set where the last page ended, and returns the next batch from that point forward.

The primary advantage of cursor pagination over offset pagination is stability. With offset pagination (?page=2&per_page=20), if a new record is inserted while you are fetching page 3, every subsequent record shifts down by one and you either miss or duplicate records at the boundary. Cursor pagination avoids this because the cursor encodes a stable position — usually based on a unique ID or timestamp — so new records never disrupt an in-progress fetch. This matters especially for export workflows that run over minutes or hours.
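The boundary bug described above can be reproduced with a toy model. Plain Python lists stand in for the API here; this is not UniLink's implementation, just an illustration of why offsets break and cursors do not:

```python
# Toy data set: six records with ascending ids.
records = [{"id": i} for i in range(1, 7)]

def offset_page(data, page, per_page=2):
    """Offset pagination: position depends on where the slice starts."""
    start = (page - 1) * per_page
    return data[start:start + per_page]

def cursor_page(data, cursor=None, limit=2):
    """Cursor pagination: position is anchored to the last-seen id."""
    rows = [r for r in data if cursor is None or r["id"] > cursor]
    page = rows[:limit]
    next_cursor = page[-1]["id"] if len(rows) > limit else None
    return page, next_cursor

# Fetch page 1 by offset, then a record is inserted at the front mid-export.
first = offset_page(records, 1)       # ids 1, 2
records.insert(0, {"id": 0})          # concurrent insert shifts every offset
second = offset_page(records, 2)      # ids 2, 3 -- id 2 is now duplicated

# The cursor fetch is unaffected: "after id 2" still means after id 2.
page, cur = cursor_page(records, cursor=2)   # ids 3, 4 -- no duplicate
```

The same insert that corrupts the offset-based fetch leaves the cursor-based fetch consistent, which is exactly the stability property long-running exports rely on.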

Every list response from the UniLink API includes a meta object at the top level alongside the data array. The meta object contains total (the total number of matching records), count (the number of records in the current page), next_cursor (the cursor for the next page, or null if this is the last page), and prev_cursor (the cursor for the previous page, or null if this is the first page). You can use total to show progress in a UI or estimate completion time in a long-running export script.
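Based on the fields described above, a list response plausibly looks like the following sketch. The record fields and all values are invented for illustration; JSON null appears as Python None:

```python
# Hypothetical UniLink list response: a data array plus a top-level meta object.
response = {
    "data": [
        {"id": 101, "name": "Ada Example"},
        {"id": 102, "name": "Grace Example"},
    ],
    "meta": {
        "total": 245,                      # all records matching the query
        "count": 2,                        # records on this page
        "next_cursor": "eyJpZCI6MTAyfQ",   # pass as ?cursor=... on the next request
        "prev_cursor": None,               # null on the first page
    },
}

# Progress estimate for a long-running export, as suggested above.
fetched = response["meta"]["count"]
progress = fetched / response["meta"]["total"] * 100
```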

How to Get Started

  1. Make a basic list request without a cursor: GET https://unilink.us/api/v1/contacts. Inspect the response and locate the meta object — it contains total, count, next_cursor, and prev_cursor.
  2. Check whether next_cursor is null. If it is, the first response contains all available records. If it contains a string value, there are more pages to fetch.
  3. Make a second request passing the cursor: GET https://unilink.us/api/v1/contacts?cursor=eyJpZCI6MTIzfQ. The API returns the next batch starting immediately after where the previous page ended.
  4. Repeat until next_cursor is null. This is the termination condition for your pagination loop.
  5. Increase the page size with ?limit=100 to reduce the number of round trips needed for large data sets. The maximum is 100 records per request.
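The requests in these steps differ only in their query parameters. A small Python helper makes the "omit the cursor on the first call" rule explicit; the helper name is ours, not part of any UniLink SDK:

```python
# Endpoint from step 1.
BASE = "https://unilink.us/api/v1/contacts"

def build_params(cursor=None, limit=100):
    """Query parameters for one page. Omits `cursor` entirely on the first call."""
    params = {"limit": limit}
    if cursor is not None:
        params["cursor"] = cursor
    return params

first_call = build_params()                    # first page: no cursor at all
second_call = build_params("eyJpZCI6MTIzfQ")   # cursor copied from meta.next_cursor
```

With a client like requests, each page would then be fetched as `requests.get(BASE, params=build_params(cursor))`, which also handles URL-encoding the cursor correctly.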

How to Fetch All Pages in a Loop

  1. JavaScript example: Initialize let cursor = null and let allRecords = []. In a do...while loop, fetch /contacts?limit=100, appending &cursor=${cursor} only when cursor is not null (interpolating a null cursor into a template string sends the literal text "null", which is not a valid cursor). Append response.data to allRecords and set cursor = response.meta.next_cursor. Continue while cursor !== null.
  2. Python example: Initialize cursor = None and all_records = []. In a while True loop, pass params={'cursor': cursor, 'limit': 100}. Append response['data'] to all_records. Set cursor = response['meta']['next_cursor'] and break if cursor is None.
  3. Rate limit awareness: Check X-RateLimit-Remaining after each page request. If it approaches zero, pause until X-RateLimit-Reset before fetching the next page.
  4. Error handling: Wrap each page request in a try/catch. On a network error or 5xx response, retry with exponential backoff using the same cursor — do not advance to the next cursor until a page succeeds.
  5. Progress tracking: Use meta.total from the first response to calculate and log progress: fetched / total * 100. This is especially useful for long-running exports with tens of thousands of records.
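The loop in steps 1 and 2 can be sketched end to end in Python. Here fetch_page simulates the API locally so the example is self-contained and runnable; in real code it would be an HTTP call such as requests.get(...).json():

```python
# 250 fake contact records standing in for the server-side data set.
DATASET = [{"id": i} for i in range(1, 251)]

def fetch_page(cursor=None, limit=100):
    """Simulates GET /contacts?cursor=...&limit=... using id-based cursors."""
    start = 0 if cursor is None else next(
        i + 1 for i, r in enumerate(DATASET) if r["id"] == cursor
    )
    page = DATASET[start:start + limit]
    has_more = start + limit < len(DATASET)
    return {
        "data": page,
        "meta": {
            "total": len(DATASET),
            "count": len(page),
            "next_cursor": page[-1]["id"] if has_more else None,
        },
    }

def fetch_all(limit=100):
    """Accumulates every page until meta.next_cursor is None."""
    all_records, cursor = [], None
    while True:
        resp = fetch_page(cursor=cursor, limit=limit)
        all_records.extend(resp["data"])
        cursor = resp["meta"]["next_cursor"]
        if cursor is None:   # the documented termination condition
            break
    return all_records

records = fetch_all()
```

For production use, the fetch inside the loop is where the retry-with-backoff and rate-limit checks from steps 3 and 4 belong: retry the same cursor on failure, and only advance the cursor after a page succeeds.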

Key Settings

| Setting | What It Does | Recommended |
| --- | --- | --- |
| limit parameter | Number of records per page (default 20, max 100) | Use 100 for bulk exports; use 20 for UI-driven pagination |
| cursor parameter | Opaque string encoding the position in the data set | Pass exactly as received — do not modify, encode, or decode |
| meta.next_cursor | Cursor for the next page; null when on the last page | Use the null check as the loop termination condition |
| meta.total | Total number of records matching the query | Use to show progress percentage in export UIs |
| Filtering + cursor | Filters (e.g., ?tag=vip) persist across pages when combined with cursor | Always include the same filter parameters on every paginated request |
Tip: Treat cursors as opaque strings — do not attempt to decode, modify, or construct them manually. Cursor values are base64-encoded and may include expiry information. Pass them exactly as returned in meta.next_cursor, including any special characters. Modifying a cursor produces a 400 Bad Request error.

Get the Most Out of Pagination

Use limit=100 for all bulk export workflows. At the default of 20 records per page, fetching 10,000 contacts requires 500 requests. At limit=100 it requires only 100 — a 5x reduction in API calls and request time. On a Free plan with 100 requests per hour, that difference determines whether a full export completes in one hour or five.
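The arithmetic behind that comparison, computed directly:

```python
import math

total = 10_000
requests_default = math.ceil(total / 20)    # 500 requests at the default page size
requests_max = math.ceil(total / 100)       # 100 requests at limit=100
reduction = requests_default // requests_max  # 5x fewer API calls
```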

Always include your filter parameters on every page request in a paginated loop. Filters like ?tag=newsletter&created_after=2025-01-01 must be repeated on every cursor-based request. The cursor encodes position within the filtered result set, not the filter itself. Omitting filters on page 2 and beyond returns a different, unfiltered result set and breaks your export.
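One way to make the "same filters on every page" rule hard to get wrong is to keep the filters in a single constant and merge the cursor into a fresh copy per request. This is a pattern sketch, not UniLink SDK code:

```python
# Define the filters once; every page request starts from this dict.
FILTERS = {"tag": "newsletter", "created_after": "2025-01-01"}

def page_params(cursor=None, limit=100):
    """Per-request query parameters: filters always included, cursor optional."""
    params = {**FILTERS, "limit": limit}
    if cursor is not None:
        params["cursor"] = cursor
    return params
```

Because the filters are copied from one source on every call, page 2 and beyond cannot silently drop them and return the unfiltered result set described above.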

Build resumable export pipelines by persisting the last-seen cursor to storage between runs. If a long-running export fails halfway through, you can resume from the saved cursor instead of starting over. Store the cursor in a database, file, or key-value store after each successful page, and read it at script startup to resume. This pattern turns a fragile, all-or-nothing export into a fault-tolerant, incremental sync.
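A minimal sketch of that persistence pattern, using a local JSON file; the file name is illustrative, and any durable store (database, key-value store) works the same way:

```python
import json
import os

STATE_FILE = "export_state.json"  # illustrative name, not a UniLink convention

def load_cursor():
    """Read the last saved cursor, or None to start from the first page."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f).get("cursor")
    return None

def save_cursor(cursor):
    """Persist the cursor after each successfully processed page."""
    with open(STATE_FILE, "w") as f:
        json.dump({"cursor": cursor}, f)

# In the export loop: call save_cursor(resp["meta"]["next_cursor"]) after each
# page succeeds, and start each run with cursor = load_cursor() to resume.
```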

Cursor values are not permanent — they expire after 24 hours. If your export is paused for more than 24 hours, the cursor becomes invalid and you will receive a 400 error when trying to resume. For exports that may span multiple days, design your data pipeline around filtering by created_at or updated_at timestamps rather than relying on a cursor that may expire before you resume.
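For a multi-day pipeline, the resume state becomes a timestamp rather than a cursor. A sketch of the parameter-building side; note that the updated_after and sort parameter names are assumptions for illustration, not confirmed UniLink parameters:

```python
def resume_params(last_updated_at=None, limit=100):
    """Build query params that resume by timestamp instead of by cursor."""
    params = {"limit": limit, "sort": "updated_at"}   # sort param: assumption
    if last_updated_at is not None:
        params["updated_after"] = last_updated_at     # filter name: assumption
    return params
```

After each run, persist the largest updated_at you processed; the next run filters on it, so an expired cursor never blocks the pipeline.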

Troubleshooting

| Problem | Cause | Fix |
| --- | --- | --- |
| 400 Bad Request with "invalid cursor" | Cursor was modified, URL-encoded twice, or expired | Pass the cursor value exactly as returned in the response — check for accidental encoding or truncation |
| Duplicate records across pages | Filter parameters changed between page requests | Ensure all filter query parameters are identical on every page request in the loop |
| Loop never terminates | Code checks the wrong field — checking data length instead of next_cursor | Always terminate when meta.next_cursor === null — a page with 0 items in data is not the correct termination signal |
| meta.total seems inaccurate | Records created or deleted mid-export change the total | Use meta.total as an estimate, not an exact count — cursor pagination is consistent but the total reflects current state |
Strengths and Trade-offs

  • Cursor pagination is stable — new records inserted mid-export do not cause duplicates or skips
  • meta.total on the first response lets you estimate progress and expected request count
  • A limit of up to 100 records per page significantly reduces request count for large data sets
  • Filters combine cleanly with cursors for consistent paginated queries on filtered data
  • Cursors expire after 24 hours — exports that pause for too long cannot resume from where they left off
  • You cannot jump to an arbitrary page number — pages must be traversed sequentially from the beginning
  • Filter parameters must be manually repeated on every request — easy to accidentally omit
FAQ

How do I know when I have fetched all records?

When meta.next_cursor is null in the response, you have fetched the last page and there are no more records. Use this null check as the loop termination condition in your pagination code.

Can I go backwards through pages using prev_cursor?

Yes — meta.prev_cursor works the same way as next_cursor but traverses backwards. Pass it as the cursor parameter to retrieve the previous page. It is null on the first page.

Why does UniLink use cursor pagination instead of page numbers?

Cursor pagination is stable under concurrent writes. With page number pagination, a record inserted at position 15 while you are fetching page 2 shifts all subsequent records and causes page 3 to start at the wrong position. Cursors encode a stable position that does not shift when records are added or removed.

What is the maximum number of records I can fetch per request?

The maximum limit value is 100 records per request. For endpoints that support bulk operations, you can send larger payloads — but for GET list requests, 100 is the ceiling.

How long are cursors valid?

Cursors expire 24 hours after they are generated. If you pause a paginated export for more than 24 hours, restart from the beginning or use a timestamp-based filter to resume from your last successfully processed record.

Key Takeaways

  • UniLink uses cursor-based pagination — pass meta.next_cursor as the cursor query parameter to fetch the next page.
  • Set limit=100 on bulk exports to reduce the number of API requests needed by up to 5x.
  • Loop terminates when meta.next_cursor is null — that is the signal you have fetched all records.
  • Repeat all filter parameters on every page request — the cursor encodes position, not filters.
  • Cursors expire after 24 hours — save your cursor to storage for resumable exports.

Start building with the UniLink API — generate your key at app.unilink.us under Settings → API and explore the full endpoint reference.