4 changes: 2 additions & 2 deletions debugging/error-codes.mdx
@@ -6,7 +6,7 @@

This reference documents PowerSync error codes organized by component, with troubleshooting suggestions for developers. Use the search bar to look up specific error codes (e.g., `PSYNC_R0001`).

# PSYNC_Rxxxx: Sync Rules issues

- **PSYNC_R0001**:
Catch-all [Sync Rules](/sync/rules/overview) parsing error, if no more specific error is available
@@ -23,7 +23,7 @@

## PSYNC_R24xx: SQL security warnings

# PSYNC_Sxxxx: Service issues

- **PSYNC_S0001**:
Internal assertion.
@@ -121,7 +121,7 @@
Create a publication using `WITH (publish = "insert, update, delete, truncate")` (the default).

- **PSYNC_S1143**:
Publication uses `publish_via_partition_root`.

- **PSYNC_S1144**:
Invalid Postgres server configuration for replication and sync bucket storage.
@@ -200,7 +200,7 @@
The MongoDB Change Stream has been invalidated.

Possible causes:
- Some change stream documents do not have postImages.

- The `startAfter`/`resumeToken` is no longer valid.
- The replication connection has changed.
- The database has been dropped.
Expand Down Expand Up @@ -264,15 +264,15 @@

Common causes:
1. **JWT signing key mismatch** (Supabase): The client is using tokens signed with a different key type (legacy vs. new JWT signing keys) than PowerSync expects. If you've migrated to new JWT signing keys, ensure users sign out and back in to get fresh tokens. See [Migrating from Legacy to New JWT Signing Keys](/installation/authentication-setup/supabase-auth#migrating-from-legacy-to-new-jwt-signing-keys).
2. **Missing or invalid key ID (kid)**: The token's kid header doesn't match any keys in PowerSync's keystore.

3. **Incorrect JWT secret or JWKS endpoint**: Verify your authentication configuration matches your auth provider's settings.

- **PSYNC_S2102**:
Could not verify the auth token signature.

Typical causes include:
1. Token kid is not found in the keystore.

2. Signature does not match the kid in the keystore.

- **PSYNC_S2103**:
Token has expired. Check the expiry date on the token.
@@ -324,8 +324,8 @@

- **PSYNC_S2305**:
Too many buckets.
There is a limit on the number of buckets per active connection (default of 1,000). See [Limit on Number of Buckets Per Client](/sync/rules/organize-data-into-buckets#limit-on-number-of-buckets-per-client) and [Performance and Limits](/resources/performance-and-limits).

There is a limit on the number of buckets per active connection (default of 1,000). See [Too Many Buckets (Troubleshooting)](/debugging/troubleshooting#too-many-buckets-psync_s2305) for how to diagnose and resolve this, and [Performance and Limits](/resources/performance-and-limits) for the limit details.

## PSYNC_S23xx: Sync API errors - MongoDB Storage

198 changes: 198 additions & 0 deletions debugging/troubleshooting.mdx
@@ -53,6 +53,204 @@
});
```

### Too Many Buckets (`PSYNC_S2305`)

PowerSync uses internal partitions called [buckets](/architecture/powersync-service#bucket-system) to organize and sync data efficiently. There is a [default limit of 1,000 buckets](/resources/performance-and-limits) per user/client. When this limit is exceeded, you will see a `PSYNC_S2305` error in your PowerSync Service API logs.

**How buckets are created in Sync Streams**

The number of buckets a stream creates for a given user depends on how your query filters data. For subqueries and one-to-many JOINs, each row returned creates a bucket. For many-to-many JOINs, each row of the primary table (the one in `SELECT`) creates a bucket instead. The 1,000 limit applies to the total number of buckets across all active streams for a single user.

Examples below use a common schema: **regions** → **orgs** → **projects** → **tasks**, with **org_membership** (user_id, org_id) linking users to orgs. For many-to-many, **assets** ↔ **projects** via **project_assets** (asset_id, project_id).

| Query pattern | Buckets per user |
|---|---|
| No parameters: `SELECT * FROM regions` | 1 global bucket, shared by all users |
| Direct auth filter only: `WHERE user_id = auth.user_id()` | 1 per user |
| Subscription parameter: `WHERE project_id = subscription.parameter('project_id')` | 1 per unique parameter value the client subscribes with |
| Subquery returning N rows: `WHERE id IN (SELECT org_id FROM org_membership WHERE user_id = auth.user_id())` | N — one per result row of the subquery |
| INNER JOIN through an intermediate table: `SELECT tasks.* FROM tasks JOIN projects ON tasks.project_id = projects.id WHERE projects.org_id IN (...)` | N — one per row of the joined table (one per project) |
| Many-to-many JOIN: `SELECT assets.* FROM assets JOIN project_assets ON project_assets.asset_id = assets.id WHERE project_assets.project_id IN (...)` | N — one per primary table row (one per asset) |

The **subquery** and **one-to-many JOIN** cases follow the same principle: when a query filters through an intermediate table — whether via a subquery or a JOIN — each row of that intermediate table creates a separate bucket. The subquery returns `org_id`s, so you get one bucket per org; the tasks-projects JOIN yields one bucket per project.

**Many-to-many JOINs are different** from one-to-many JOINs. With a many-to-many relationship (e.g., assets ↔ projects via a join table like `project_assets`), the join table does *not* define the bucket space. PowerSync processes each primary-table row independently and cannot group by the join table's keys. So `SELECT assets.* FROM assets INNER JOIN project_assets ...` creates one bucket per asset row, even if you intended to partition by project. The JOIN controls which rows sync, not how they are grouped.

**Hierarchical or chained queries** are another source of bucket growth. When multiple queries in one stream depend on each other (e.g., query B filters by IDs from query A, query C filters by IDs from query B), each level creates buckets. For example, consider:

```yaml
streams:
org_projects_tasks:
auto_subscribe: true
with:
user_orgs: SELECT org_id FROM org_membership WHERE user_id = auth.user_id()
user_projects: SELECT id FROM projects WHERE org_id IN (SELECT org_id FROM org_membership WHERE user_id = auth.user_id())
queries:
- SELECT * FROM orgs WHERE id IN user_orgs
- SELECT * FROM projects WHERE id IN user_projects
- SELECT * FROM tasks WHERE project_id IN user_projects
```

A stream that fetches orgs by user membership, then projects by `org_id`, then tasks by `project_id` creates:

- One bucket per org
- One bucket per project
- One bucket per project for tasks (filtered by `project_id`)

A user with 10 orgs and 50 projects per org therefore generates 10 + 500 + 500 = 1,010 buckets — over the limit. Trying to reduce buckets by filtering all three tables with `org_id` only works if your schema has `org_id` on every table. If tasks only have `project_id` (and projects have `org_id`), add `org_id` to the tasks table so the flattened approach can work.

**Diagnosing which streams are contributing**

- The `PSYNC_S2305` error log includes a breakdown showing which stream definitions are contributing the most bucket instances (top 10 by count).
- PowerSync Service checkpoint logs record the total parameter result count per connection. You can find these in your [instance logs](/maintenance-ops/monitoring-and-alerting). For example:

```
New checkpoint: 800178 | write: null | buckets: 7 | param_results: 6 ["5#user_data|0[\"ef718ff3...\"]","5#user_data|1[\"1ddeddba...\"]","5#user_data|1[\"2ece823f...\"]", ...]
```
- `buckets` — total number of active buckets for this connection
- `param_results` — the total parameter result count across all stream definitions for this connection
- The array lists the active bucket names and the value in `[...]` is the evaluated parameter for that bucket

- The [Sync Diagnostics Client](/tools/diagnostics-client) lets you inspect the buckets for a specific user, but note that it will not load for users who have exceeded the bucket limit since their sync connection fails before data can be retrieved. Use the instance logs and error breakdown to diagnose those cases.

**Reducing bucket count in Sync Streams**

1. **Consolidate streams using [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream)**: Using `queries` instead of `query` groups related tables into a single stream. All queries in that stream share one bucket per unique evaluated parameter value.

**Before** — 5 separate streams, each with direct `auth.user_id()` filter → 5 buckets per user:

```yaml
streams:
user_settings:
query: SELECT * FROM settings WHERE user_id = auth.user_id()
user_prefs:
query: SELECT * FROM preferences WHERE user_id = auth.user_id()
user_org_list:
query: SELECT * FROM org_membership WHERE user_id = auth.user_id()
user_region:
query: SELECT * FROM region_members WHERE user_id = auth.user_id()
user_profile:
query: SELECT * FROM profiles WHERE user_id = auth.user_id()
```

**After** — 1 stream with 5 queries → 1 bucket per user:

```yaml
streams:
user_data:
queries:
- SELECT * FROM settings WHERE user_id = auth.user_id()
- SELECT * FROM preferences WHERE user_id = auth.user_id()
- SELECT * FROM org_membership WHERE user_id = auth.user_id()
- SELECT * FROM region_members WHERE user_id = auth.user_id()
- SELECT * FROM profiles WHERE user_id = auth.user_id()
```

2. **Query the membership table directly instead of through it**: When a subquery or JOIN through a membership table is causing N buckets, flip the query to target the membership table itself with a direct auth filter — no subquery, no JOIN. You will typically need fields from the related table (e.g., org name, address) alongside each membership row; denormalize those fields onto the membership table so everything is available without introducing a JOIN.

**Before** — N org memberships → N buckets:

```yaml
streams:
org_data:
query: SELECT * FROM orgs WHERE id IN (SELECT org_id FROM org_membership WHERE user_id = auth.user_id())
```

**After** — 1 bucket per user (with org fields denormalized onto `org_membership`):

```yaml
streams:
my_org_memberships:
query: SELECT * FROM org_membership WHERE user_id = auth.user_id()
```
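Keeping the denormalized fields current is typically done with database triggers. A minimal Postgres sketch, assuming a hypothetical `name` column on `orgs` and an added `org_name` column on `org_membership` (names are illustrative; adapt to your schema):

```sql
-- Hypothetical: copy the org name onto each membership row,
-- and keep it current when the org is renamed.
ALTER TABLE org_membership ADD COLUMN IF NOT EXISTS org_name text;

-- One-off backfill of existing rows.
UPDATE org_membership m
SET org_name = o.name
FROM orgs o
WHERE m.org_id = o.id;

CREATE OR REPLACE FUNCTION sync_org_name() RETURNS trigger AS $$
BEGIN
  UPDATE org_membership SET org_name = NEW.name WHERE org_id = NEW.id;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER org_name_sync
AFTER UPDATE OF name ON orgs
FOR EACH ROW EXECUTE FUNCTION sync_org_name();
```

New membership rows also need `org_name` populated on insert, via a similar trigger or application code.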

3. **Denormalize for hierarchical data**: When chained queries through parent-child relationships (e.g., org → project → task) create too many buckets, filter all tables with the same top-level parameter (e.g., `org_id`). This only works if child tables have that column — add `org_id` to tasks if they only have `project_id`.

**Before** — `org_projects_tasks` with 3 chained queries → 10 + 500 + 500 = 1,010 buckets for 10 orgs, 50 projects each:

```yaml
streams:
org_projects_tasks:
with:
user_orgs: SELECT org_id FROM org_membership WHERE user_id = auth.user_id()
user_projects: SELECT id FROM projects WHERE org_id IN (SELECT org_id FROM org_membership WHERE user_id = auth.user_id())
queries:
- SELECT * FROM orgs WHERE id IN user_orgs
- SELECT * FROM projects WHERE id IN user_projects
- SELECT * FROM tasks WHERE project_id IN user_projects
```

**After** — Add `org_id` to tasks, flatten to one bucket per org → 10 buckets:

```yaml
streams:
org_projects_tasks:
with:
user_orgs: SELECT org_id FROM org_membership WHERE user_id = auth.user_id()
queries:
- SELECT * FROM orgs WHERE id IN user_orgs
- SELECT * FROM projects WHERE org_id IN user_orgs
- SELECT * FROM tasks WHERE org_id IN user_orgs
```
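The schema change itself can be a one-off migration. A minimal Postgres sketch, assuming `tasks.project_id` references `projects.id` and `uuid` keys (hypothetical; adapt to your schema):

```sql
-- Hypothetical migration: add org_id to tasks and backfill it from projects.
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS org_id uuid;

UPDATE tasks t
SET org_id = p.org_id
FROM projects p
WHERE t.project_id = p.id;
```

New tasks then need `org_id` set on insert, either by application code or a trigger.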

4. **Many-to-many via denormalization**: For assets ↔ projects via `project_assets`, buckets follow the primary table — one per asset. Add a denormalized `project_ids` JSON array on `assets` and use `json_each()` to partition by project.

**Before** — One bucket per asset (e.g., 2,000 assets → 2,000 buckets):

```yaml
streams:
assets_in_projects:
with:
user_projects: SELECT id FROM projects WHERE org_id IN (SELECT org_id FROM org_membership WHERE user_id = auth.user_id())
query: |
SELECT assets.* FROM assets
JOIN project_assets ON project_assets.asset_id = assets.id
WHERE project_assets.project_id IN user_projects
```

**After** — Add `project_ids` to `assets` (via triggers), partition by project → 50 buckets for 50 projects:

```yaml
streams:
assets_in_projects:
with:
user_orgs: SELECT org_id FROM org_membership WHERE user_id = auth.user_id()
user_projects: SELECT id FROM projects WHERE org_id IN (SELECT org_id FROM org_membership WHERE user_id = auth.user_id())
query: |
SELECT assets.* FROM assets
INNER JOIN json_each(assets.project_ids) AS p
INNER JOIN user_projects ON p.value = user_projects.id
WHERE assets.org_id IN user_orgs
```
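The `project_ids` array must be maintained whenever the join table changes. A minimal Postgres trigger sketch (hypothetical names; assumes `uuid` keys and a `json` column on `assets`):

```sql
-- Hypothetical: rebuild assets.project_ids whenever project_assets changes.
CREATE OR REPLACE FUNCTION refresh_asset_project_ids() RETURNS trigger AS $$
DECLARE
  target_asset uuid;
BEGIN
  -- OLD is only assigned for DELETE/UPDATE, NEW for INSERT/UPDATE.
  IF TG_OP = 'DELETE' THEN
    target_asset := OLD.asset_id;
  ELSE
    target_asset := NEW.asset_id;
  END IF;

  UPDATE assets
  SET project_ids = COALESCE(
    (SELECT json_agg(project_id) FROM project_assets
     WHERE asset_id = target_asset),
    '[]'::json
  )
  WHERE id = target_asset;
  RETURN NULL;  -- AFTER trigger; return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER asset_project_ids_sync
AFTER INSERT OR UPDATE OR DELETE ON project_assets
FOR EACH ROW EXECUTE FUNCTION refresh_asset_project_ids();
```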

Alternatively, use two queries: one for `project_assets` filtered by project, one for `assets`; the client joins locally. The trade-off: the assets query may sync more rows than needed unless you can filter it further.

5. **Restructure to use subscription parameters**: Buckets are only created per active client subscription, not from all possible values. Use `subscription.parameter('project_id')` so the count is bounded by how many subscriptions the client has active.

**Before** — Subquery returns all user projects → 50 buckets for 50 projects:

```yaml
streams:
project_tasks:
with:
user_projects: SELECT id FROM projects WHERE org_id IN (SELECT org_id FROM org_membership WHERE user_id = auth.user_id())
query: SELECT * FROM tasks WHERE project_id IN user_projects
```

**After** — Client subscribes per project on demand → 1 bucket per active subscription (e.g., 3 projects open = 3 buckets):

```yaml
streams:
project_tasks:
query: SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id')
```

This requires client code to subscribe when the user opens a project and unsubscribe when they leave. Only practical when users don't need all related records available simultaneously.

**Increasing the limit**

The default of 1,000 can be increased upon request for [Team and Enterprise](https://www.powersync.com/pricing) customers. Note that performance degrades as bucket count increases beyond 1,000. See [Performance and Limits](/resources/performance-and-limits).

## Tools

Troubleshooting techniques depend on the type of issue:
2 changes: 1 addition & 1 deletion sync/streams/overview.mdx
@@ -172,8 +172,8 @@
<Tabs>
<Tab title="JavaScript/TypeScript">
```js
const sub = await db.syncStream('list_todos', { list_id: 'abc123' })
  .subscribe({ ttl: 3600 });

// Wait for this subscription to have synced
await sub.waitForFirstSync();
@@ -232,7 +232,7 @@

<Tab title="Swift">
```swift
let sub = try await db.syncStream(name: "list_todos", params: ["list_id": JsonValue.string("abc123")])
  .subscribe(ttl: 60 * 60, priority: nil) // 1 hour

// Wait for this subscription to have synced
@@ -276,7 +276,7 @@

- **Case Sensitivity**: To avoid issues across different databases and platforms, use **lowercase identifiers** for all table and column names in your Sync Streams. If your backend uses mixed case, see [Case Sensitivity](/sync/advanced/case-sensitivity) for how to handle it.

- **Bucket Limits**: PowerSync uses internal partitions called [buckets](/architecture/powersync-service#bucket-system) to efficiently sync data. There's a default [limit of 1,000 buckets](/resources/performance-and-limits) per user/client. Each unique combination of a stream and its parameters creates one bucket, so keep this in mind when designing streams that use subscription parameters. You can use [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to reduce bucket count.
- **Bucket Limits**: PowerSync uses internal partitions called [buckets](/architecture/powersync-service#bucket-system) to efficiently sync data. There's a default [limit of 1,000 buckets](/resources/performance-and-limits) per user/client. Each unique result returned by a stream's query creates one bucket instance — so a stream that filters through an intermediate table via a subquery or JOIN (e.g. N org memberships) creates N buckets for that user. You can use [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to reduce bucket count. See [Too Many Buckets](/debugging/troubleshooting#too-many-buckets-psync_s2305) in the troubleshooting guide for how to diagnose and resolve `PSYNC_S2305` errors.

- **Troubleshooting**: If data isn't syncing as expected, the [Sync Diagnostics Client](/tools/diagnostics-client) helps you inspect what's happening for a specific user — you can see which buckets the user has and what data is being synced.
