diff --git a/.changeset/cloudflare-do-sqlite-persistence.md b/.changeset/cloudflare-do-sqlite-persistence.md new file mode 100644 index 000000000..8d63bf9e7 --- /dev/null +++ b/.changeset/cloudflare-do-sqlite-persistence.md @@ -0,0 +1,5 @@ +--- +'@tanstack/db-cloudflare-do-sqlite-persisted-collection': patch +--- + +feat(persistence): add SQLite persistence support for Cloudflare Durable Objects runtime diff --git a/.changeset/paid-gems-sell.md b/.changeset/paid-gems-sell.md deleted file mode 100644 index d93b714a6..000000000 --- a/.changeset/paid-gems-sell.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -'@tanstack/db-electron-sqlite-persisted-collection': patch -'@tanstack/db-node-sqlite-persisted-collection': patch ---- - -feat(persistence): add Electron and Node.js SQLite persisted collection packages - -**Electron (`@tanstack/db-electron-sqlite-persisted-collection`)** - -- New package for Electron persistence via better-sqlite3 -- IPC bridge for secure main-process SQLite access from renderer -- `ElectronCollectionCoordinator` for coordinating persistence across Electron windows - -**Node.js (`@tanstack/db-node-sqlite-persisted-collection`)** - -- New package for Node.js persistence via the built-in `node:sqlite` module -- Lightweight driver and persistence layer for server-side and CLI use cases diff --git a/.github/workflows/e2e-tests.yml b/.github/workflows/e2e-tests.yml index 3b26c63a0..abb11c048 100644 --- a/.github/workflows/e2e-tests.yml +++ b/.github/workflows/e2e-tests.yml @@ -72,6 +72,11 @@ jobs: cd packages/db-electron-sqlite-persisted-collection TANSTACK_DB_ELECTRON_E2E_ALL=1 pnpm test:e2e + - name: Run Cloudflare Durable Object persisted collection E2E tests + run: | + cd packages/db-cloudflare-do-sqlite-persisted-collection + pnpm test:e2e + - name: Run React Native/Expo persisted collection E2E tests run: | cd packages/db-react-native-sqlite-persisted-collection diff --git a/PERSISTNCE-PLAN-SQLITE-ONLY.md b/PERSISTNCE-PLAN-SQLITE-ONLY.md new file mode 
100644 index 000000000..e2c264d74 --- /dev/null +++ b/PERSISTNCE-PLAN-SQLITE-ONLY.md @@ -0,0 +1,1424 @@ +# Persisted Collections + Multi-Tab Query-Driven Sync (SQLite-Only) + +## Summary + +This plan standardizes persistence on SQLite across runtimes and removes raw IndexedDB as a first-class persistence adapter. + +In the browser, persistence is OPFS-only via `wa-sqlite` + `OPFSCoopSyncVFS`, with no SharedWorker requirement. Multi-tab coordination uses Web Locks, Visibility API, and BroadcastChannel. + +Leadership is **per collection** (per table), not global per database. + +`persistedCollectionOptions(...)` infers behavior from the wrapped options: + +1. if wrapped options include `sync`, persistence augments that sync path +2. if wrapped options do not include `sync`, persistence runs sync-absent with SQLite as source of truth + +## Background + +TanStack DB on-demand sync uses `loadSubset(options)` as the choke point for query-driven loading and pagination. Persistence should plug into this same mechanism so: + +- any tab can load from local persistence immediately +- leader tabs handle remote coverage checks when sync is enabled +- tabs receive ordered updates and remain coherent +- persisted indexes mirror collection index creation in user space + +## Locked Decisions + +1. SQLite-only persistence architecture. +2. Browser storage is OPFS-only (`wa-sqlite` + `OPFSCoopSyncVFS`). +3. No SharedWorker requirement in the browser architecture. +4. Leadership is collection-scoped: single writer per collection/table. +5. `persistedCollectionOptions(...)` infers sync-present vs sync-absent behavior from presence of `sync`. +6. Cloudflare Durable Objects SQLite is a supported runtime target. +7. Delete tracking uses per-key tombstone state (one row per deleted key) with monotonic `row_version`. + +## Goals + +1. Local-first `loadSubset` in every runtime. +2. Correct multi-tab behavior with collection-scoped leadership. +3. Fast local reads from SQLite in every tab. 
+4. Reliable replay ordering via `(term, seq)`. +5. Persisted index parity with TanStack DB index lifecycle. +6. Sync-absent persisted collections with automatic mutation persistence. +7. Runtime coverage for browser, node, RN, Expo, Electron, and Cloudflare Durable Objects. + +## Non-Goals + +1. Raw IndexedDB persistence adapter. +2. SharedWorker-based mandatory architecture. +3. Full SQL pushdown for arbitrary unsupported expressions in v1. +4. Global single-writer guarantee for all tables in one DB file. +5. Perfect index GC/eviction policy in v1. + +## Runtime Scope + +| Runtime | Engine | Notes | +| -------------------------- | -------------------------------- | ------------------------------------------------- | +| Browser | `wa-sqlite` | OPFS + `OPFSCoopSyncVFS`, leader per collection | +| Node | `better-sqlite3` | Reference runtime + CI contract tests | +| React Native | `op-sqlite` | Thin driver over shared core | +| Expo | `op-sqlite` | Thin driver over shared core | +| Electron | `better-sqlite3` in main process | Renderer via IPC | +| Cloudflare Durable Objects | SQLite-backed DO storage | DB executes in-process inside DO; no tab election | + +## High-Level Design + +### 1) `persistedCollectionOptions(...)` Infers Behavior from Wrapped Options + +#### A) `sync` Present in Wrapped Options + +Wrap an existing sync collection and add local SQLite persistence. + +```ts +const tasks = createCollection( + persistedCollectionOptions({ + ...queryCollectionOptions({ + /* existing remote sync */ + }), + persistence: { + adapter: BrowserWASQLiteStorage({ dbName: 'app' }), + coordinator: BrowserCollectionCoordinator({ dbName: 'app' }), + }, + }), +) +``` + +#### B) No `sync` in Wrapped Options + +No wrapped remote sync is required. SQLite persistence is source of truth. 
+
+```ts
+const drafts = createCollection(
+  persistedCollectionOptions({
+    id: 'drafts',
+    getKey: (row) => row.id,
+    persistence: {
+      adapter: BrowserWASQLiteStorage({ dbName: 'app' }),
+      coordinator: BrowserCollectionCoordinator({ dbName: 'app' }),
+    },
+  }),
+)
+```
+
+When `sync` is absent, mutations are automatically persisted (like `localStorageCollectionOptions`) and do not require remote sync.
+
+### 1.1) TypeScript API Sketch (Inferred Overloads)
+
+The API should use overloads so mode is inferred at compile-time from whether `sync` exists on wrapped options.
+
+```ts
+type PersistedCollectionPersistence<
+  T extends object,
+  TKey extends string | number,
+> = {
+  adapter: PersistenceAdapter<T, TKey>
+  coordinator?: PersistedCollectionCoordinator
+}
+
+type PersistedSyncWrappedOptions<
+  T extends object,
+  TKey extends string | number,
+  TSchema extends StandardSchemaV1 = never,
+  TUtils extends UtilsRecord = UtilsRecord,
+> = CollectionConfig<T, TKey, TSchema, TUtils> & {
+  sync: SyncConfig<T, TKey>
+  persistence: PersistedCollectionPersistence<T, TKey>
+}
+
+type PersistedLocalOnlyOptions<
+  T extends object,
+  TKey extends string | number,
+  TSchema extends StandardSchemaV1 = never,
+  TUtils extends UtilsRecord = UtilsRecord,
+> = Omit<CollectionConfig<T, TKey, TSchema, TUtils>, 'sync'> & {
+  persistence: PersistedCollectionPersistence<T, TKey>
+}
+
+export function persistedCollectionOptions<
+  T extends object,
+  TKey extends string | number,
+  TSchema extends StandardSchemaV1 = never,
+  TUtils extends UtilsRecord = UtilsRecord,
+>(
+  options: PersistedSyncWrappedOptions<T, TKey, TSchema, TUtils>,
+): CollectionConfig<T, TKey, TSchema, TUtils>
+
+export function persistedCollectionOptions<
+  T extends object,
+  TKey extends string | number,
+  TSchema extends StandardSchemaV1 = never,
+  TUtils extends UtilsRecord = UtilsRecord,
+>(
+  options: PersistedLocalOnlyOptions<T, TKey, TSchema, TUtils>,
+): CollectionConfig<T, TKey, TSchema, TUtils>
+```
+
+Runtime rule:
+
+- `if (options.sync != null)` => sync-present path
+- else => sync-absent path
+- if `persistence.coordinator` is omitted, use `SingleProcessCoordinator` (intended for DO/node single-process
execution) + +Inference edge-case rules (fixed): + +- sync-present requires `options.sync` with callable `sync` function. +- `sync` key present but invalid (`null`, non-object, missing `sync` function) throws `InvalidSyncConfigError`. +- sync-absent path is selected only when `sync` key is not present. +- user-provided `onInsert/onUpdate/onDelete` remain supported in both paths; sync-absent wrappers compose and then persist. + +`PersistedCollectionUtils` should include: + +- `acceptMutations(transaction)` for manual transactions +- optional debug helpers (`getLeadershipState`, `forceReloadSubset`) for tests/devtools + +### 2) Index Lifecycle Mirrors Main-Thread Indexing + +Persisted indexes are created from the same collection index lifecycle as user-space query indexes: + +- manual `collection.createIndex(...)` +- auto indexing (`autoIndex`) + +Required events in `@tanstack/db`: + +- `index:added` +- `index:removed` + +Persistence listens and ensures/removes matching persisted indexes. + +### 3) Storage Backend Options (All SQLite) + +Every backend uses the same logical persistence model (table-per-collection + JSON payloads + expression indexes), but runtime wiring differs. + +#### Browser: `wa-sqlite` + `OPFSCoopSyncVFS` + +- storage: OPFS-only +- coordinator: `BrowserCollectionCoordinator` (Web Locks + Visibility + BroadcastChannel) +- leadership: collection-scoped in-tab election + +Browser capability baseline: + +- Phase 7 (single-tab) requires OPFS with `FileSystemSyncAccessHandle`. +- Phase 8 (multi-tab coordinator) additionally requires Web Locks. +- Target support is evergreen browsers from roughly the last 3 years that satisfy those capabilities. 
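The capability baseline above can be probed at startup. The sketch below is illustrative (the probe name and result shape are assumptions, not part of the planned API); it checks for `FileSystemSyncAccessHandle` support via OPFS (Phase 7) and Web Locks (Phase 8):

```ts
// Hypothetical capability probe for the Phase 7/8 browser baseline.
type BrowserPersistenceCapabilities = {
  opfsSyncAccess: boolean // Phase 7: OPFS + FileSystemSyncAccessHandle
  webLocks: boolean // Phase 8: multi-tab coordinator
}

function detectBrowserPersistenceCapabilities(
  g: Record<string, any> = globalThis as any,
): BrowserPersistenceCapabilities {
  return {
    // OPFS root access plus synchronous access handles on file handles
    opfsSyncAccess:
      typeof g.navigator?.storage?.getDirectory === 'function' &&
      typeof g.FileSystemFileHandle?.prototype?.createSyncAccessHandle ===
        'function',
    // Web Locks API used for leader election and write serialization
    webLocks: typeof g.navigator?.locks?.request === 'function',
  }
}
```

A wrapper could use this to fall back to a single-tab mode (skip the multi-tab coordinator) when Web Locks are unavailable.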
+
+##### Browser Coordination (No SharedWorker)
+
+Election and preference:
+
+- Web Locks key per collection:
+  - `tsdb:leader:<dbName>:<collectionId>`
+- Web Locks key for SQLite write serialization:
+  - `tsdb:writer:<dbName>`
+- Visibility API is a preference hint:
+  - visible tabs should be preferred leaders
+  - hidden leaders can step down cooperatively
+
+Visibility handoff protocol:
+
+- a leader entering hidden state starts `HIDDEN_STEPDOWN_DELAY_MS` (default 5000ms)
+- while hidden, it listens for `leader:candidate` announcements from visible tabs
+- if a visible contender is observed and delay elapses, current leader releases collection lock
+- after handoff, apply `LEADER_HANDOFF_COOLDOWN_MS` (default 3000ms) before trying to re-acquire to prevent thrash
+
+Messaging:
+
+- BroadcastChannel namespace per collection:
+  - `tx` messages with `(term, seq)` and commit metadata
+  - `rpc` messages for `ensureRemoteSubset`, `ensurePersistedIndex`, `applyLocalMutations`
+  - `leader` heartbeat/announcement
+
+Ordering and recovery:
+
+- each collection stream has ordered `(term, seq)`
+- followers track latest `(term, seq)` seen
+- followers:
+  - ignore old terms
+  - ignore duplicate seq
+  - trigger catch-up on seq gap via `rpc:pullSince`
+  - if catch-up fails, fallback to stale-mark + subset reload
+
+Leadership lifecycle algorithm:
+
+1. Tab starts collection interest:
+
+- subscribe to collection channel
+- attempt Web Lock acquisition for `tsdb:leader:<dbName>:<collectionId>`
+
+2. If lock acquired:
+
+- increment and persist `term` in SQLite metadata (transactional)
+- become leader for that collection
+- start heartbeat timer
+
+3. If tab becomes hidden and another visible contender exists:
+
+- leader may step down cooperatively and release lock
+
+4. On lock loss or unload:
+
+- stop sync tasks for that collection
+- stop heartbeat
+- continue follower read path
+
+5.
Followers watch heartbeat timeout: + +- on timeout, attempt lock acquisition and leadership takeover + +`term` monotonicity requirement: + +- `term` must survive reload/restart and never decrement for a collection. +- leaders read+increment `leader_term` inside a SQLite transaction before emitting heartbeat. + +#### Node + +- storage: local sqlite via `better-sqlite3` +- coordinator: `SingleProcessCoordinator` by default +- common use: tests, server-side execution, tooling + +#### React Native / Expo + +- storage: `op-sqlite` wrappers for RN and Expo +- coordinator: typically `SingleProcessCoordinator` (single process), can be overridden if host adds cross-process sync +- packaging: one shared mobile package with RN/Expo-specific entrypoints only where needed + +#### Electron + +- storage: sqlite in main process +- coordinator: `SingleProcessCoordinator` in main process +- renderer interaction: via IPC bridge only +- packaging: separate electron package that wraps node adapter semantics with IPC transport + +#### Cloudflare Durable Objects (In-Process) + +Cloudflare Durable Objects run as single-threaded stateful actors with attached SQLite-backed storage. 
For a DO instance: + +- no browser-style leader election is needed +- the DO instance is authoritative writer for its storage +- `loadSubset` and mutation persistence execute directly in-object +- optional upstream sync can still be layered if needed, but sync-absent local persistence is a natural default +- this is an in-runtime execution model (DB + persistence in the same DO process), not a remote persistence adapter pattern + +Example shape inside a DO: + +```ts +export class AppDurableObject extends DurableObject { + private tasks = createCollection( + persistedCollectionOptions({ + id: 'tasks', + getKey: (row) => row.id, + persistence: { + adapter: durableObjectSQLiteAdapter(this.ctx.storage.sql), + // coordinator omitted -> SingleProcessCoordinator + }, + }), + ) +} +``` + +### 4) Collection-Scoped Coordinator + +Coordinator responsibilities per collection: + +- election: one leader per `collectionId` +- ordered broadcast of committed tx (`term`, `seq`) +- RPC: + - `ensureRemoteSubset(collectionId, options)` when `sync` is present + - `ensurePersistedIndex(collectionId, signature, spec)` + - `applyLocalMutations(collectionId, mutations)` when `sync` is absent and follower is not leader + +Tabs do not proxy reads through leaders; each tab reads SQLite directly. 
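The ordered `(term, seq)` broadcast can be sketched as a pure follower-side decision function. This is a sketch under assumptions (the function and type names are illustrative; in particular, treating `seq === 1` as the start of a new term is an assumption about how leaders number their first commit):

```ts
// Follower-side classification of an incoming tx:committed message,
// following the "ignore old terms / ignore duplicate seq / catch up on
// seq gap" rules described earlier.
type StreamPosition = { term: number; seq: number }

type FollowerAction = 'ignore' | 'apply' | 'catch-up'

function classifyIncomingTx(
  last: StreamPosition,
  incoming: StreamPosition,
): FollowerAction {
  if (incoming.term < last.term) return 'ignore' // old term
  if (incoming.term === last.term) {
    if (incoming.seq <= last.seq) return 'ignore' // duplicate or stale seq
    if (incoming.seq === last.seq + 1) return 'apply' // next in order
    return 'catch-up' // seq gap -> rpc:pullSince
  }
  // New term: assumption that a new leader starts at seq 1; anything
  // else means commits were missed across the term change.
  return incoming.seq === 1 ? 'apply' : 'catch-up'
}
```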
+
+Runtime note:
+
+- browser uses `BrowserCollectionCoordinator` (election + BroadcastChannel RPC)
+- DO/node single-process execution uses `SingleProcessCoordinator` (no election, no cross-tab RPC)
+
+Coordinator contract (minimum surface):
+
+```ts
+interface PersistedCollectionCoordinator {
+  getNodeId(): string
+  subscribe(
+    collectionId: string,
+    onMessage: (message: ProtocolEnvelope) => void,
+  ): () => void
+  publish(collectionId: string, message: ProtocolEnvelope): void
+  isLeader(collectionId: string): boolean
+  ensureLeadership(collectionId: string): Promise<boolean>
+  requestEnsureRemoteSubset?(
+    collectionId: string,
+    options: LoadSubsetOptions,
+  ): Promise<void>
+  requestEnsurePersistedIndex(
+    collectionId: string,
+    signature: string,
+    spec: PersistedIndexSpec,
+  ): Promise<void>
+  requestApplyLocalMutations?(
+    collectionId: string,
+    mutations: Array<PersistedMutationEnvelope>,
+  ): Promise<ApplyLocalMutationsResponse>
+  pullSince?(
+    collectionId: string,
+    fromRowVersion: number,
+  ): Promise<PullSinceResponse>
+}
+```
+
+Coordinator validation rule:
+
+- wrapper validates required coordinator methods at initialization based on runtime mode.
+- browser multi-tab mode requires `requestEnsureRemoteSubset`, `requestApplyLocalMutations`, and `pullSince`.
+- single-process coordinators (node/electron/do and browser single-tab) may omit cross-tab RPC helpers.
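The validation rule can be sketched as a small initialization check. This is a minimal sketch, assuming illustrative names (`validateCoordinator`, the mode strings) that are not fixed by the plan:

```ts
// Validate that a coordinator exposes the optional cross-tab RPC helpers
// when running in browser multi-tab mode; single-process mode may omit them.
type CoordinatorLike = {
  requestEnsureRemoteSubset?: unknown
  requestApplyLocalMutations?: unknown
  pullSince?: unknown
}

function validateCoordinator(
  coordinator: CoordinatorLike,
  mode: 'browser-multi-tab' | 'single-process',
): void {
  if (mode === 'single-process') return // cross-tab RPC helpers not required
  const required = [
    'requestEnsureRemoteSubset',
    'requestApplyLocalMutations',
    'pullSince',
  ] as const
  for (const method of required) {
    if (typeof (coordinator as Record<string, unknown>)[method] !== 'function') {
      throw new Error(
        `Coordinator is missing required multi-tab method: ${method}`,
      )
    }
  }
}
```

Failing fast here surfaces a mis-wired coordinator at collection creation rather than at the first cross-tab RPC.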
+
+### 4.1) Coordinator Protocol (Implementation Draft)
+
+Message envelope:
+
+```ts
+type ProtocolEnvelope<TPayload = unknown> = {
+  v: 1
+  dbName: string
+  collectionId: string
+  senderId: string
+  ts: number
+  payload: TPayload
+}
+```
+
+Message payloads:
+
+```ts
+type LeaderHeartbeat = {
+  type: 'leader:heartbeat'
+  term: number
+  leaderId: string
+  latestSeq: number
+  latestRowVersion: number
+}
+
+type TxCommitted = {
+  type: 'tx:committed'
+  term: number
+  seq: number
+  txId: string
+  latestRowVersion: number
+} & (
+  | {
+      requiresFullReload: true
+    }
+  | {
+      requiresFullReload: false
+      changedKeys: Array<string | number>
+      deletedKeys: Array<string | number>
+    }
+)
+
+type EnsureRemoteSubsetRequest = {
+  type: 'rpc:ensureRemoteSubset:req'
+  rpcId: string
+  options: LoadSubsetOptions
+}
+
+type EnsureRemoteSubsetResponse =
+  | {
+      type: 'rpc:ensureRemoteSubset:res'
+      rpcId: string
+      ok: true
+    }
+  | {
+      type: 'rpc:ensureRemoteSubset:res'
+      rpcId: string
+      ok: false
+      error: string
+    }
+
+type ApplyLocalMutationsRequest = {
+  type: 'rpc:applyLocalMutations:req'
+  rpcId: string
+  envelopeId: string
+  mutations: Array<PersistedMutationEnvelope>
+}
+
+type ApplyLocalMutationsResponse =
+  | {
+      type: 'rpc:applyLocalMutations:res'
+      rpcId: string
+      ok: true
+      term: number
+      seq: number
+      latestRowVersion: number
+      acceptedMutationIds: Array<string>
+    }
+  | {
+      type: 'rpc:applyLocalMutations:res'
+      rpcId: string
+      ok: false
+      code: 'NOT_LEADER' | 'VALIDATION_ERROR' | 'CONFLICT' | 'TIMEOUT'
+      error: string
+    }
+
+type PullSinceRequest = {
+  type: 'rpc:pullSince:req'
+  rpcId: string
+  fromRowVersion: number
+}
+
+type PullSinceResponse =
+  | {
+      type: 'rpc:pullSince:res'
+      rpcId: string
+      ok: true
+      latestTerm: number
+      latestSeq: number
+      latestRowVersion: number
+      requiresFullReload: true
+    }
+  | {
+      type: 'rpc:pullSince:res'
+      rpcId: string
+      ok: true
+      latestTerm: number
+      latestSeq: number
+      latestRowVersion: number
+      requiresFullReload: false
+      changedKeys: Array<string | number>
+      deletedKeys: Array<string | number>
+    }
+  | {
+      type: 'rpc:pullSince:res'
+      rpcId: string
+      ok:
false
+      error: string
+    }
+
+type CollectionReset = {
+  type: 'collection:reset'
+  schemaVersion: number
+  resetEpoch: number
+}
+```
+
+Idempotency rules:
+
+- `tx:committed` idempotency key: `(collectionId, term, seq)`
+- local mutation idempotency key: `envelopeId`
+- mutation acknowledgment/correlation key: `mutationId` (per mutation inside an envelope)
+- RPC response correlation key: `rpcId`
+- `applyLocalMutations` is at-least-once delivery; leader must dedupe by `envelopeId`
+- catch-up cursor key: `latestRowVersion` (monotonic per collection)
+- followers persist `lastSeenRowVersion` from applied `tx:committed` messages and successful `pullSince` responses
+
+Recommended browser defaults:
+
+- heartbeat interval: 2000ms
+- leader timeout: 6000ms
+- RPC timeout: 5000ms
+- local mutation retry backoff: 100ms → 2000ms capped exponential
+- all timing knobs should be configurable per collection (advanced option)
+
+## Key Mechanics
+
+### A) Writer Ownership
+
+- logical single writer per collection/table at a time
+- different tabs can lead different collections simultaneously
+- followers do not write that collection directly in browser mode
+- follower writes are routed to current leader for serialization
+
+SQLite write-lock note:
+
+- SQLite still permits one write transaction at a time per database file.
+- collection leaders therefore coordinate through `tsdb:writer:<dbName>` before write transactions.
+- this keeps per-collection leadership for ownership, while serializing physical DB writes to avoid `SQLITE_BUSY` thrash.
+
+### A.1) Commit + Broadcast Ordering
+
+Leader commit pipeline for a collection change:
+
+1. acquire DB writer lock (`tsdb:writer:<dbName>`)
+2. begin SQLite transaction
+3. increment collection `latest_row_version` and stamp touched rows with that version
+4. apply row and index changes
+5. for deletes, insert/update tombstone records with same `row_version`
+6. insert idempotency marker in `applied_tx(collection_id, term, seq, applied_at)`
+7.
read updated `latest_row_version` for broadcast +8. commit SQLite transaction +9. release DB writer lock +10. broadcast `tx:committed(term, seq, latestRowVersion, ...)` + +Delete tracking note: + +- tombstones are the delete source for `pullSince` key-level catch-up. +- tombstones are stateful per key (latest delete only), not append-only history. + +Recovery rule: + +- if commit succeeds but broadcast is missed, followers detect stale `latestSeq` via heartbeat and call `pullSince`. + +### A.2) Subset Invalidation Contract + +Followers maintain an in-memory registry of active loaded subsets per collection. + +Default: + +- `TARGETED_INVALIDATION_KEY_LIMIT = 128` + +On `tx:committed`: + +1. if `requiresFullReload` is true: + +- mark all active subsets for that collection dirty +- schedule debounced reload from local SQLite + +2. else if `changedKeys`/`deletedKeys` present and combined count <= `TARGETED_INVALIDATION_KEY_LIMIT`: + +- refresh only subsets that may contain those keys + +3. else: + +- mark all active subsets for that collection dirty +- schedule debounced reload from local SQLite + +This removes ambiguity around follower refresh behavior while keeping correctness first. + +### B) `loadSubset` Flow by Inferred Behavior + +#### When `sync` Is Present + +1. query local SQLite immediately +2. apply local rows +3. request leader `ensureRemoteSubset(...)` (online path) +4. leader syncs/writes/broadcasts commits +5. tabs refresh from SQLite on broadcast + +#### When `sync` Is Absent + +1. query local SQLite immediately +2. apply local rows +3. no remote ensure call +4. tab refresh remains local/broadcast-driven only + +### C) Hydrate Barrier (Both Modes) + +Problem: updates can arrive during local hydrate. 
+
+Wrapper state per collection:
+
+- `isHydrating: boolean`
+- `queuedTx: PersistedTx[]`
+- `applyMutex` serializing write/apply
+
+Scope:
+
+- hydrate barrier is collection-scoped (not per-subset) because transactions can affect any active subset in that collection.
+
+Algorithm:
+
+1. `loadSubset` sets `isHydrating = true`
+2. query cached rows from SQLite
+3. apply local rows via `write({ type: 'update', ... })`
+4. set `isHydrating = false`
+5. flush queued tx in order
+
+### D) Duplicate-Key Safety (Sync-Present Path)
+
+To avoid `DuplicateKeySyncError` when cache overlaps remote snapshot:
+
+- local hydrate uses `update` only (never `insert`)
+- remote `insert` payloads are normalized to `update` before DB `write`
+
+### E) Sync-Absent Mutation Persistence
+
+When `sync` is absent, mutation changes persist automatically, aligned with `localStorageCollectionOptions` behavior.
+
+`PersistedMutationEnvelope` shape:
+
+```ts
+type PersistedMutationEnvelope =
+  | {
+      mutationId: string
+      type: 'insert'
+      key: string | number
+      value: Record<string, unknown>
+    }
+  | {
+      mutationId: string
+      type: 'update'
+      key: string | number
+      value: Record<string, unknown>
+    }
+  | {
+      mutationId: string
+      type: 'delete'
+      key: string | number
+      value: Record<string, unknown>
+    }
+```
+
+- wrap `onInsert`, `onUpdate`, `onDelete` to persist SQLite changes automatically
+- confirm optimistic operations through sync-confirm path after persistence
+- for manual transactions, expose and use `utils.acceptMutations(transaction)`
+- in browser multi-tab, non-leader tabs send local mutations to leader via `applyLocalMutations`
+- leader must reply with `applyLocalMutations:res` so follower can confirm or rollback optimistic entries
+
+### F) Offline/Online Behavior
+
+- when `sync` is present:
+  - offline `loadSubset` resolves locally
+  - queued `ensureRemoteSubset` replays when online
+- when `sync` is absent:
+  - unaffected by network state
+
+### G) Seq Gap Recovery
+
+On missing `(term, seq)`:
+
+1.
use follower-tracked `lastSeenRowVersion` (from last applied commit or pull response) and request `pullSince(lastSeenRowVersion)` from current leader
+2. if pull succeeds and `requiresFullReload` is true, mark collection subsets dirty
+3. if pull succeeds with `changedKeys`/`deletedKeys`, run targeted subset invalidation
+4. reload affected subsets from local SQLite (or all active subsets when required)
+5. if pull fails, mark view stale and truncate/reload affected in-memory view
+6. re-request loaded subsets
+7. re-run remote ensure only when `sync` is present
+
+`pullSince` implementation rule:
+
+- `changedKeys` are derived from `c_<hash>` rows where `row_version > fromRowVersion`
+- `deletedKeys` are derived from tombstones `t_<hash>` where `row_version > fromRowVersion`
+- this computes a delta to latest state (not a full linear event history)
+- if either result set exceeds invalidation limits, set `requiresFullReload: true`
+
+## SQLite Storage + Index Plan
+
+### Schema (Per Collection)
+
+Single table per collection:
+
+- `key` stored as canonical encoded text key (`s:<string>` or `n:<number>`) to preserve `1` vs `'1'` distinction
+- `key` TEXT PRIMARY KEY
+- `value` JSON string in `TEXT`
+- `row_version` INTEGER NOT NULL (monotonic change version stamped by leader; per-transaction watermark shared by all rows touched in one committed tx)
+- tombstone table per collection tracks latest delete state per key with row versions (`t_<hash>`)
+- tombstone `deleted_at` stores deletion timestamp for diagnostics/observability; catch-up logic uses `row_version`
+
+Key encoding helpers (required):
+
+```ts
+function encodeStorageKey(key: string | number): string {
+  if (typeof key === 'number') {
+    if (!Number.isFinite(key)) {
+      throw new Error('Invalid numeric key: key must be finite')
+    }
+    if (Object.is(key, -0)) {
+      return 'n:-0'
+    }
+    return `n:${key}`
+  }
+  return `s:${key}`
+}
+
+function decodeStorageKey(encoded: string): string | number {
+  if (encoded === 'n:-0') {
+    return -0
+  }
+  return
encoded.startsWith('n:') ? Number(encoded.slice(2)) : encoded.slice(2)
+}
+```
+
+Metadata tables:
+
+- `persisted_index_registry(collection_id, signature, sql, state, last_built_at, last_used_at)`
+- `applied_tx(collection_id, term, seq, applied_at)`
+- `collection_version(collection_id, latest_row_version)` for catch-up cursor
+- `leader_term(collection_id, term, leader_id, updated_at)` for durable term monotonicity
+- `schema_version(collection_id, version)` for clear-on-version-change behavior
+- `collection_reset_epoch(collection_id, epoch)` for coordinated clear/reload signaling
+- `collection_registry(collection_id, table_name)` for safe identifier mapping
+
+Identifier safety requirement:
+
+- never interpolate raw `collectionId` into SQL identifiers
+- map `collectionId` to safe physical table names using hashed names (for example `c_<hash>`)
+- store mapping in `collection_registry`
+
+Reference DDL:
+
+```sql
+CREATE TABLE IF NOT EXISTS c_<hash> (
+  key TEXT PRIMARY KEY NOT NULL,
+  value TEXT NOT NULL,
+  row_version INTEGER NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS t_<hash> (
+  key TEXT PRIMARY KEY NOT NULL,
+  row_version INTEGER NOT NULL,
+  deleted_at INTEGER NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS collection_registry (
+  collection_id TEXT PRIMARY KEY NOT NULL,
+  table_name TEXT UNIQUE NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS persisted_index_registry (
+  collection_id TEXT NOT NULL,
+  signature TEXT NOT NULL,
+  sql TEXT NOT NULL,
+  state TEXT NOT NULL,
+  last_built_at INTEGER,
+  last_used_at INTEGER,
+  PRIMARY KEY (collection_id, signature)
+);
+
+CREATE TABLE IF NOT EXISTS applied_tx (
+  collection_id TEXT NOT NULL,
+  term INTEGER NOT NULL,
+  seq INTEGER NOT NULL,
+  applied_at INTEGER NOT NULL,
+  PRIMARY KEY (collection_id, term, seq)
+);
+
+CREATE TABLE IF NOT EXISTS collection_version (
+  collection_id TEXT PRIMARY KEY NOT NULL,
+  latest_row_version INTEGER NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS leader_term (
+  collection_id TEXT PRIMARY KEY NOT NULL,
+
term INTEGER NOT NULL,
+  leader_id TEXT NOT NULL,
+  updated_at INTEGER NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS schema_version (
+  collection_id TEXT PRIMARY KEY NOT NULL,
+  version INTEGER NOT NULL
+);
+
+CREATE TABLE IF NOT EXISTS collection_reset_epoch (
+  collection_id TEXT PRIMARY KEY NOT NULL,
+  epoch INTEGER NOT NULL
+);
+```
+
+### Persisted Index Signatures
+
+Main-thread `indexId` is not stable across tabs. Use stable signature:
+
+- `signature = hash(stableStringify({ expression, compareOptions, direction, nulls, stringSort, locale, ... }))`
+
+### Expression Indexes
+
+Indexes are created on demand from mirrored index specs, for example:
+
+- `CREATE INDEX IF NOT EXISTS idx_<hash>_<sig> ON c_<hash>(json_extract(value,'$.path'))`
+- compound indexes use multiple expressions
+- date/datetime predicates can use expression indexes over canonical extracted values (for example `datetime(json_extract(value,'$.dueAt'))`)
+
+`ensureIndex(...)` compiles index IR/spec to canonical SQL expression text for reliable planner usage.
+
+Reference query templates:
+
+```sql
+-- Increment and read collection row version (inside txn)
+INSERT INTO collection_version(collection_id, latest_row_version)
+VALUES (?, 1)
+ON CONFLICT(collection_id) DO UPDATE SET latest_row_version = latest_row_version + 1;
+
+SELECT latest_row_version FROM collection_version WHERE collection_id = ?;
+
+-- Upsert row
+INSERT INTO c_<hash>(key, value, row_version)
+VALUES (?, ?, ?)
+ON CONFLICT(key) DO UPDATE SET
+  value = excluded.value,
+  row_version = excluded.row_version;
+
+-- Clear tombstone on re-insert/update
+DELETE FROM t_<hash> WHERE key = ?;
+
+-- Delete row
+DELETE FROM c_<hash> WHERE key = ?;
+
+-- Upsert tombstone for delete tracking
+INSERT INTO t_<hash>(key, row_version, deleted_at)
+VALUES (?, ?, ?)
+ON CONFLICT(key) DO UPDATE SET
+  row_version = excluded.row_version,
+  deleted_at = excluded.deleted_at;
+
+-- Mark tx applied
+INSERT OR IGNORE INTO applied_tx(collection_id, term, seq, applied_at)
+VALUES (?, ?, ?, ?);
+```
+
+### Metadata Retention / Cleanup
+
+To prevent unbounded metadata growth:
+
+- `applied_tx`: keep sliding window per collection by seq/time.
+- tombstones (`t_<hash>`) are per-key latest-delete state and are not version-pruned.
+- tombstones are removed when the key is re-inserted/updated (same transaction as row upsert).
+
+Defaults:
+
+- `APPLIED_TX_SEQ_RETENTION = 10000`
+- `APPLIED_TX_MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000`
+
+### Partial Updates and Index Maintenance
+
+Updates may be partial (`rowUpdateMode: 'partial'` default).
+
+Adapters must:
+
+- read current row
+- merge partial update before persist
+- compute index old/new values from pre-merge and post-merge rows
+
+If `rowUpdateMode: 'full'` is configured, adapters can skip read/merge and write replacement rows.
+
+### Schema Version Policy (No Migrations)
+
+This plan does not implement structural schema migrations.
+
+Collection options include `persistence.schemaVersion: number`.
+
+Behavior on version mismatch:
+
+- sync-present path:
+  - default action: coordinated clear persisted state for that collection (rows + indexes + metadata), then rehydrate from remote sync
+- sync-absent path:
+  - default action: throw `PersistenceSchemaVersionMismatchError`
+  - optional opt-in: allow clear and restart with empty local state
+
+Coordinated clear sequence (sync-present path):
+
+1. acquire `tsdb:writer:<dbName>`
+
+- note: this serializes writes across the DB file, so schema reset briefly blocks writes for all collections
+
+2. begin SQLite transaction
+3. clear collection rows/tombstones/index metadata
+4. reset collection cursor in `collection_version` (delete row or set `latest_row_version = 0`)
+5. update `schema_version`
+6. increment `reset_epoch`
+7. commit transaction
+8.
broadcast `collection:reset(schemaVersion, resetEpoch)` + +Follower behavior on `collection:reset`: + +- reset tracked `lastSeenRowVersion` for that collection to `0` +- drop in-memory rows for that collection +- clear active subset cache +- re-request loaded subsets + +Guidance for sync-absent collections: + +- prefer additive/backward-compatible schema changes with value-level fallbacks +- because values are JSON payloads, additive evolution is expected to be the common safe path + +### `loadSubset` Query Planning + +v1 pushdown support: + +- `eq`, `in`, `gt/gte/lt/lte`, `like` +- logical composition with both `AND` and `OR` (push down when each branch is pushdown-safe; otherwise fallback) +- `IN` is required in v1 because query-engine incremental join loading depends on it + - handle empty, single, and large `IN` lists correctly + - chunk very large lists to respect SQLite parameter limits when needed +- date/datetime comparisons on JSON fields serialized as canonical ISO-8601 UTC strings + - planner may use canonical string comparison where valid + - planner may compile to SQLite date functions (`datetime`, `strftime`) when normalization is required +- index-aligned `orderBy` + +Unsupported predicate fragments load a superset; query engine filters remainder. 
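The `IN`-list handling above (empty, single, and large lists) can be sketched as a chunking helper. This is a sketch under assumptions: the helper names are illustrative, and the 999-parameter default reflects SQLite's historical `SQLITE_MAX_VARIABLE_NUMBER` lower bound (newer builds allow far more):

```ts
// Split an IN-list into parameter-safe chunks; the caller runs one query
// per chunk and unions the results.
function chunkInList<TValue>(
  values: ReadonlyArray<TValue>,
  maxParams = 999,
): Array<Array<TValue>> {
  if (values.length === 0) return [] // empty IN matches nothing; skip querying
  const chunks: Array<Array<TValue>> = []
  for (let i = 0; i < values.length; i += maxParams) {
    chunks.push(values.slice(i, i + maxParams))
  }
  return chunks
}

// Render the placeholder list for one chunk, e.g. 3 -> "(?, ?, ?)"
function inClausePlaceholders(count: number): string {
  return `(${new Array(count).fill('?').join(', ')})`
}
```

Because chunked `IN` queries are unioned by key, splitting preserves result correctness as long as each chunk reuses the same non-`IN` predicate fragments.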
+
+## Adapter Interfaces
+
+`PersistedTx` (used by `applyCommittedTx`) shape:
+
+```ts
+type PersistedTx<T extends object, TKey extends string | number> = {
+  txId: string
+  term: number
+  seq: number
+  rowVersion: number
+  mutations: Array<
+    | { type: 'insert'; key: TKey; value: T }
+    | { type: 'update'; key: TKey; value: T }
+    | { type: 'delete'; key: TKey; value: T }
+  >
+}
+```
+
+### Persistence Adapter
+
+```ts
+export interface PersistenceAdapter<
+  T extends object,
+  TKey extends string | number,
+> {
+  // Read path (all tabs / all runtimes)
+  loadSubset(
+    collectionId: string,
+    options: LoadSubsetOptions,
+    ctx?: { requiredIndexSignatures?: string[] },
+  ): Promise<Array<{ key: TKey; value: T }>>
+
+  // Write path (leader for this collection, or DO instance)
+  applyCommittedTx(
+    collectionId: string,
+    tx: PersistedTx<T, TKey>,
+  ): Promise<void>
+
+  // Index management
+  ensureIndex(
+    collectionId: string,
+    signature: string,
+    spec: PersistedIndexSpec,
+  ): Promise<void>
+
+  // Optional: some adapters handle index cleanup lazily or via collection reset flows.
+  markIndexRemoved?(collectionId: string, signature: string): Promise<void>
+}
+```
+
+`PersistedIndexSpec` must be serializable and derived from index lifecycle events.
+
+### SQLite Driver Interface
+
+```ts
+export interface SQLiteDriver {
+  exec(sql: string): Promise<void>
+  query(sql: string, params?: readonly unknown[]): Promise<Array<Record<string, unknown>>>
+  run(sql: string, params?: readonly unknown[]): Promise<void>
+  transaction<TResult>(fn: () => Promise<TResult>): Promise<TResult>
+}
+```
+
+Driver adaptation note:
+
+- sync drivers (for example `better-sqlite3`) are adapted via thin `Promise.resolve(...)` wrappers.
+- this keeps one core async adapter path across runtimes; sync overhead is accepted for API consistency in v1.
+
+## Package Plan
+
+1. `@tanstack/db-sqlite-persisted-collection-core`
+2. `@tanstack/db-browser-wa-sqlite-persisted-collection`
+3. `@tanstack/db-node-sqlite-persisted-collection`
+4. `@tanstack/db-react-native-sqlite-persisted-collection` (RN + Expo)
+5. `@tanstack/db-electron-sqlite-persisted-collection`
+6.
`@tanstack/db-cloudflare-do-sqlite-persisted-collection` + +SQLite core package contents (combined): + +- `persistedCollectionOptions(...)` with inferred behavior based on presence of `sync` +- stable signature/hash utilities +- coordinator protocol types +- sync-absent mutation persistence helpers (`acceptMutations` flow) +- shared `SQLiteCoreAdapter(driver)` +- SQL expression compiler for index/query pushdown +- index registry management +- in-memory adapter + in-memory coordinator for unit tests + +Future packaging note: + +- if a non-SQLite backend is introduced later, split backend-agnostic surface out of this package at that time. + +Cloudflare DO package contents: + +- adapter binding to DO SQLite-backed storage APIs (for code executing inside DO) +- DO-friendly wrapper that defaults to `SingleProcessCoordinator` and omits browser election paths +- optional helper for mapping `collectionId` to table naming and schema-version handling + +Electron package contents: + +- thin wrapper over node sqlite package semantics +- IPC transport between renderer calls and main-process persistence execution +- does not duplicate node adapter/core logic; reuses node package implementation behind the IPC boundary + +## Implementation Phases + +### Phase 0: API + Runtime Feasibility + +1. Finalize `persistedCollectionOptions` inference API (`sync` present vs absent). +2. Confirm Cloudflare DO adapter surface and runtime constraints. +3. Finalize coordinator protocol (`rpc`, `tx`, `leader`, `(term, seq)`), with browser multi-tab parts phase-gated. +4. Finalize key encoding and identifier hashing rules. +5. Finalize package boundaries around SQLite-only core. +6. Define staged rollout gates (single-process first, browser multi-tab last). + +Deliverable: finalized API, package plan, capability matrix, and protocol spec. + +### Phase 1: Add Index Lifecycle Events to `@tanstack/db` + +1. Extend collection events with: + +- `index:added` +- `index:removed` + +2. 
Update `CollectionIndexesManager` to emit stable index metadata. +3. Add index removal API (`removeIndex(...)`) and emit `index:removed`. + +Deliverable: index lifecycle observable and stable across tabs. + +### Phase 2: Core Persisted Wrapper (Inferred Behavior) + +1. Implement `sync`-present wrapper over `sync.sync(params)`. +2. Implement sync-absent behavior without required wrapped sync. +3. Add hydrate barrier + queued tx behavior. +4. Normalize remote inserts to updates (when `sync` is present). +5. Implement automatic mutation persistence wrappers (when `sync` is absent). +6. Add `utils.acceptMutations(transaction)` support for manual transactions. +7. Wire coordinator RPC (`ensureRemoteSubset`, `ensurePersistedIndex`, `applyLocalMutations`). +8. Implement seq-gap recovery path. +9. Implement inference edge-case validation (`InvalidSyncConfigError`). + +Deliverable: core wrapper passes in-memory tests for both inferred paths. + +### Phase 3: SQLite Core Adapter + +1. Implement `applyCommittedTx`, `ensureIndex`, `loadSubset` SQL pushdown (`eq`, `in`, range, `like`, `AND`, `OR`, date/datetime predicates). +2. Implement partial update merge semantics. +3. Implement `leader_term`, `schema_version`, and identifier registry tables. +4. Implement schema-version mismatch behavior (clear vs error by path). +5. Implement applied_tx pruning jobs. +6. Add adapter contract tests in node sqlite runtime. + +Deliverable: SQLite adapter contract passing in node. + +### Phase 4: Node + Electron + +1. Implement node wrapper over `better-sqlite3`. +2. Implement electron main-process ownership + renderer IPC over `better-sqlite3`. +3. Run shared contract/integration suites. + +Deliverable: node/electron parity with core semantics. + +### Phase 5: React Native + Expo + +1. Implement shared mobile package over `op-sqlite`. +2. Provide RN/Expo-specific entrypoints only where host bootstrapping differs. +3. Validate mobile lifecycle and transaction semantics on both RN and Expo. 
+ +Deliverable: unified RN/Expo mobile package passes contract tests. + +### Phase 6: Cloudflare Durable Objects + +1. Implement DO SQLite adapter package. +2. Provide helper for per-object schema initialization and schema-version checks. +3. Support both inferred wrapper paths inside DO runtime (`sync` present or absent), with in-process execution only. +4. Add integration tests using Workers/DO test harness. + +Deliverable: DB and persistence running in-process in Durable Objects with SQLite-backed storage. + +### Phase 7: Browser Single-Tab (`wa-sqlite`, No Election) + +1. Implement OPFS driver (`OPFSCoopSyncVFS`). +2. Implement browser adapter path with `SingleProcessCoordinator` semantics for single-tab usage. +3. Validate offline-first read/write path without BroadcastChannel/Web Locks dependencies. +4. Add browser single-tab integration tests. + +Deliverable: stable browser persistence for single-tab sessions. + +### Phase 8: Browser Multi-Tab Coordinator (Final Phase) + +1. Implement Web Locks + Visibility + BroadcastChannel coordinator. +2. Implement per-collection leader/follower behavior for both inferred paths. +3. Implement follower local mutation RPC to leader with ack/rollback semantics. +4. Implement DB write serialization lock (`tsdb:writer:`) and busy retry policy. +5. Add Playwright multi-tab tests. + +Deliverable: stable browser local-first multi-tab behavior when `sync` is present or absent. + +## Testing Strategy + +### Unit Tests (Core Wrapper) + +1. Index lifecycle: + +- `createIndex` emits `index:added` with stable signature +- `removeIndex` emits `index:removed` + +2. Local hydrate safety: + +- hydrate uses `update` only +- remote inserts normalized to update + +3. Hydrate barrier: + +- tx during hydrate is queued then flushed in order + +4. Sync-present offline/online queue: + +- offline local resolve +- queued remote ensures replay online + +5. 
Sync-absent mutation persistence: + +- insert/update/delete auto-persist +- manual transaction `acceptMutations` persists and confirms + +6. Seq-gap recovery: + +- missing seq triggers `pullSince`; fallback to stale/reload/re-ensure + +7. Inference validation: + +- invalid `sync` shape throws `InvalidSyncConfigError` + +8. Key encoding: + +- `1` and `'1'` persist distinctly and round-trip correctly + +9. Local mutation acking: + +- `applyLocalMutations:res.acceptedMutationIds` maps to submitted `mutationId`s + +### Adapter Contract Tests + +Run same suite against: + +- in-memory adapter +- browser `wa-sqlite` adapter +- node sqlite adapter +- electron wrapper (`better-sqlite3`) and unified mobile wrapper (`op-sqlite`) where harness supports +- cloudflare durable object sqlite adapter + +Covers: + +- `ensureIndex` + `loadSubset` index-path usage +- pushdown parity for `AND`/`OR`, `IN` (including empty/single/large lists), `LIKE`, and date/datetime comparisons +- `applyCommittedTx` row/index correctness +- idempotency and replay handling on `(term, seq)` +- monotonic `row_version` behavior and `pullSince` cursor correctness +- `pullSince` discriminated response shape correctness (`requiresFullReload=true` returns no key lists) +- tombstone-based delete catch-up correctness +- per-key tombstone state semantics (latest delete only) correctness +- applied_tx pruning does not break row-version catch-up correctness +- schema reset clears `collection_version` cursor and follower resets tracked `lastSeenRowVersion` to `0` +- sync-absent auto-persist semantics +- schema-version mismatch behavior (clear vs error by path) +- identifier safety mapping (unsafe collectionId still produces safe physical table names) + +### Browser Single-Tab Integration Tests (Phase 7) + +1. OPFS-backed init and reopen behavior. +2. Local-first `loadSubset` and mutation persistence correctness. +3. Sync-present offline local path and reconnect replay without leader election. +4. 
No dependency on BroadcastChannel/Web Locks for correctness in single-tab mode. + +### Browser Multi-Tab Integration Tests (Playwright, Phase 8) + +1. Two tabs with different collection leaders: + +- tab A leads collection X +- tab B leads collection Y + +2. Local reads do not round-trip through leader. +3. Sync-absent follower mutation is serialized via leader and persisted. +4. Auto-index creates persisted index and speeds repeated lookups. +5. Leader handoff on visibility change / tab close. +6. Sync-present offline local-first and reconnect catch-up. +7. Cross-collection leaders contend for DB writes without correctness loss (`tsdb:writer` lock test). +8. Commit-broadcast gap recovers via heartbeat `latestSeq` + `pullSince`. + +### Cloudflare Durable Objects Integration Tests + +1. Schema init + schema-version mismatch behavior per DO instance. +2. `loadSubset` + index pushdown correctness. +3. Sync-absent mutation persistence correctness in DO runtime. +4. Restart/rehydration behavior with persisted SQLite state. +5. No browser coordinator path in DO (`SingleProcessCoordinator` only). + +### Corruption Recovery Tests + +1. Corrupted sqlite open path triggers integrity failure handling. +2. Sync-present path clears persistence and rehydrates from remote. +3. Sync-absent path raises `PersistenceCorruptionError` unless explicit reset is requested. + +## Agent Guard Rails (Implementation + Testing) + +These are mandatory rules for agents implementing this plan. + +1. No implementation step is complete without tests in the same change set. + +- bug fixes must include a regression test +- new behavior must include positive and negative-path coverage + +2. Do not progress to the next phase until the current phase exit criteria and tests are green. + +- phase completion requires local pass and CI pass for the phase test scope + +3. 
Operator support must be proven on both paths: + +- pushdown path (SQL execution) +- fallback path (superset load + in-memory filtering) +- applies to `IN`, `AND`, `OR`, `LIKE`, and date/datetime predicates + +4. `IN` is a v1 hard requirement because incremental join loading depends on it. + +- test `IN` with empty lists, single value lists, and large lists +- test parameter chunking behavior for large lists against SQLite parameter limits + +5. Date/datetime support requires canonical serialization and deterministic tests. + +- JSON date values must use canonical ISO-8601 UTC strings +- include timezone/offset boundary tests +- test both lexical comparison mode and SQLite date-function mode when normalization is required + +6. Any change to ordering, leadership, mutation routing, or replay must include failure-path tests. + +- dropped broadcast handling +- heartbeat timeout and takeover +- leader stepdown/lock loss +- retry/idempotency behavior for mutation RPC + +7. Cross-runtime parity is required for shared behavior. + +- if behavior is intended to be shared, contract tests must pass across supported adapters +- runtime-specific deviations must be documented and explicitly tested + +8. Schema safety and recovery semantics are non-optional. + +- sync-present mismatch path must prove clear + rehydrate behavior +- sync-absent mismatch path must prove explicit error behavior (unless opt-in reset path is enabled) + +9. Never loosen correctness for optimization without equivalence coverage. + +- any pushdown/performance optimization must include query-equivalence tests against fallback behavior + +## Failure Modes and Handling + +1. OPFS unavailable in browser: + +- when `sync` is absent: throw `PersistenceUnavailableError` at initialization +- when `sync` is present: default to disabling persistence for session and run remote sync path only +- expose capability/error to application so users can decide whether to hard-fail UI + +2. 
Invalid inferred sync config: + +- if `sync` key exists but is not a valid `SyncConfig`, throw `InvalidSyncConfigError` + +3. No current leader for a collection in browser: + +- local `loadSubset` still reads SQLite +- queue/timeout remote ensure or local-mutation RPC until election completes + +4. Leader crash or tab close: + +- Web Lock releases +- follower acquires leadership and resumes responsibilities + +5. Broadcast gap: + +- follower triggers collection recovery +- attempt `pullSince` catch-up first +- fallback to reload local subset and re-ensure when `sync` is present + +6. Durable Object instance restart: + +- in-memory state is rebuilt from persistent SQLite storage +- schema-version checks and clear/error policy run on initialization path + +7. Coordinated schema reset while tabs are active: + +- leader broadcasts `collection:reset` +- followers drop in-memory cache for that collection and reload subsets + +8. SQLite corruption / integrity failure: + +- detect on open/init (initial query failure or optional integrity check path) +- sync-present: clear persisted state and rehydrate from remote +- sync-absent: throw `PersistenceCorruptionError` and require explicit user reset +- expose `resetPersistence({ collectionId })` utility for app-level recovery + +## Risks and Mitigations + +1. Risk: browser differences in OPFS/Web Locks/visibility behavior. + Mitigation: capability matrix + conservative fallback behavior. + +2. Risk: cross-collection write contention causes `SQLITE_BUSY`. + Mitigation: serialize physical writes via `tsdb:writer:` + bounded retry/backoff. + +3. Risk: WASM startup overhead. + Mitigation: lazy init + connection reuse per tab. + +4. Risk: SQL pushdown mismatch vs query-engine semantics. + Mitigation: equivalence tests + fallback filtering for unsupported fragments. + +5. Risk: driver divergence across runtimes. + Mitigation: strict adapter contract suite and minimal driver interface. + +6. 
Risk: sync-absent follower mutation queuing during leader churn. + Mitigation: durable RPC retry/backoff and idempotent mutation envelopes. + +## Implementation Readiness Checklist + +1. API: + +- overload signatures compile and infer correctly for `sync` present/absent +- runtime branch matches compile-time discrimination (`options.sync != null`) + +2. Core semantics: + +- hydrate barrier + queued tx ordering implemented +- insert-to-update normalization implemented for sync-present path +- sync-absent auto-persist wrappers implemented + +3. Coordinator: + +- lock acquisition, heartbeat, timeout, and stepdown logic implemented +- protocol envelope and RPC correlation/idempotency implemented +- heartbeat carries `latestSeq` and followers perform `pullSince` catch-up + +4. SQLite adapter: + +- DDL initialization and schema-version checks implemented +- key encoding/decoding preserves string vs number identity +- identifier hashing/mapping prevents unsafe SQL identifiers +- pushdown planner + fallback filtering implemented +- applied tx idempotency table enforced +- tombstone per-key delete-state tracking implemented +- durable `leader_term` monotonicity and schema-version policy implemented +- corruption detection and reset utility implemented + +5. Runtime adapters: + +- browser OPFS adapter passes single-tab integration tests (Phase 7) +- browser multi-tab coordinator/election tests pass (Phase 8) +- node/electron/mobile (rn+expo) adapters passing contract suite +- cloudflare DO adapter passing integration suite + +6. Test coverage: + +- unit + contract + browser integration + DO integration green in CI + +## Open Decisions + +1. Electron renderer read policy: direct read vs strict main-process proxy. +2. Whether `ensureRemoteSubset` is always background or optionally awaited. 
+ +Blocking-before-implementation: + +- none (runtime driver choices and package shape are fixed in this plan: Node/Electron `better-sqlite3`, `@tanstack/db-react-native-sqlite-persisted-collection` for RN/Expo via `op-sqlite`) + +Blocking-before-browser phases: + +- Phase 7: verify OPFS + `FileSystemSyncAccessHandle` in target evergreen browsers. +- Phase 8: verify Web Locks in the same target browsers. + +Non-blocking (can be phased after initial implementation): + +- electron renderer read policy refinements +- awaited vs background `ensureRemoteSubset` behavior toggle + +## Notes and Implications + +1. First-time index build has unavoidable cost; subsequent indexed reads are fast. +2. Local performance depends on index coverage; use `autoIndex` or explicit `createIndex(...)` on hot paths. +3. Reads never round-trip through leader; leader handles write serialization and sync responsibilities. +4. Sync-absent usage provides a persistence-first option without requiring remote sync wiring. +5. `loadSubset` currently returns materialized arrays; cursor/streaming read API can be explored after v1. diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/README.md b/packages/db-cloudflare-do-sqlite-persisted-collection/README.md new file mode 100644 index 000000000..2393faea0 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/README.md @@ -0,0 +1,48 @@ +# @tanstack/db-cloudflare-do-sqlite-persisted-collection + +Thin SQLite persistence for Cloudflare Durable Objects. 
+
+## Public API
+
+- `createCloudflareDOSQLitePersistence(...)`
+- `persistedCollectionOptions(...)` (re-exported from core)
+
+## Quick start
+
+```ts
+import { createCollection } from '@tanstack/db'
+import { DurableObject } from 'cloudflare:workers'
+import {
+  createCloudflareDOSQLitePersistence,
+  persistedCollectionOptions,
+} from '@tanstack/db-cloudflare-do-sqlite-persisted-collection'
+
+type Todo = {
+  id: string
+  title: string
+  completed: boolean
+}
+
+export class TodosObject extends DurableObject {
+  persistence = createCloudflareDOSQLitePersistence({
+    // Pass full storage to use native DO transaction support.
+    storage: this.ctx.storage,
+  })
+
+  todos = createCollection(
+    persistedCollectionOptions({
+      id: `todos`,
+      getKey: (todo: Todo) => todo.id,
+      persistence: this.persistence,
+      schemaVersion: 1, // Per-collection schema version
+    }),
+  )
+}
+```
+
+## Notes
+
+- One shared persistence instance can serve multiple collections.
+- Mode defaults are inferred from collection usage:
+  - sync config present => `sync-present-reset`
+  - no sync config => `sync-absent-error`
+- You can still override with `schemaMismatchPolicy` if needed.
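+
+As a non-executable configuration sketch of that override (assuming the policy accepts the mode names listed above; treat the exact accepted values as illustrative):
+
+```ts
+// Hypothetical: force reset-on-mismatch for a collection without a sync
+// config, where the inferred default would be `sync-absent-error`.
+persistedCollectionOptions({
+  id: `todos`,
+  getKey: (todo: Todo) => todo.id,
+  persistence: this.persistence,
+  schemaVersion: 2,
+  schemaMismatchPolicy: `sync-present-reset`,
+})
+```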
diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/cloudflare-do-runtime-bridge.e2e.test.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/cloudflare-do-runtime-bridge.e2e.test.ts
new file mode 100644
index 000000000..0cb1c1435
--- /dev/null
+++ b/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/cloudflare-do-runtime-bridge.e2e.test.ts
@@ -0,0 +1,422 @@
+import { mkdtempSync, rmSync } from 'node:fs'
+import { tmpdir } from 'node:os'
+import { dirname, join } from 'node:path'
+import { setTimeout as delay } from 'node:timers/promises'
+import { fileURLToPath } from 'node:url'
+import { spawn } from 'node:child_process'
+import { describe, expect, it } from 'vitest'
+import { runRuntimeBridgeE2EContractSuite } from '../../db-sqlite-persisted-collection-core/tests/contracts/runtime-bridge-e2e-contract'
+import type {
+  RuntimeBridgeE2EContractError,
+  RuntimeBridgeE2EContractHarness,
+  RuntimeBridgeE2EContractHarnessFactory,
+  RuntimeBridgeE2EContractTodo,
+} from '../../db-sqlite-persisted-collection-core/tests/contracts/runtime-bridge-e2e-contract'
+
+type RuntimeProcessHarness = {
+  baseUrl: string
+  restart: () => Promise<void>
+  stop: () => Promise<void>
+}
+
+type WranglerRuntimeResponse<TPayload = unknown> =
+  | {
+      ok: true
+      rows?: TPayload
+    }
+  | {
+      ok: false
+      error: RuntimeBridgeE2EContractError
+    }
+
+const packageDirectory = dirname(fileURLToPath(import.meta.url))
+const wranglerConfigPath = join(packageDirectory, `fixtures`, `wrangler.toml`)
+
+async function getAvailablePort(): Promise<number> {
+  const netModule = await import('node:net')
+  return new Promise<number>((resolve, reject) => {
+    const server = netModule.createServer()
+    server.listen(0, `127.0.0.1`, () => {
+      const address = server.address()
+      if (!address || typeof address === `string`) {
+        server.close()
+        reject(new Error(`Unable to allocate an available local port`))
+        return
+      }
+      const selectedPort = address.port
+      server.close((error) => {
+        if (error) {
+          reject(error)
+          return
+        }
+        
resolve(selectedPort)
+      })
+    })
+    server.on(`error`, reject)
+  })
+}
+
+function createRuntimeError(
+  message: string,
+  stderr: string,
+  stdout: string,
+): Error {
+  return new Error([message, `stderr=${stderr}`, `stdout=${stdout}`].join(`\n`))
+}
+
+async function stopWranglerProcess(
+  child: ReturnType<typeof spawn> | undefined,
+): Promise<void> {
+  if (!child || child.exitCode !== null) {
+    return
+  }
+
+  child.kill(`SIGTERM`)
+  const closed = await Promise.race([
+    new Promise<boolean>((resolve) => {
+      child.once(`close`, () => resolve(true))
+    }),
+    delay(5_000).then(() => false),
+  ])
+
+  if (closed) {
+    return
+  }
+
+  child.kill(`SIGKILL`)
+  await new Promise<void>((resolve) => {
+    child.once(`close`, () => resolve())
+  })
+}
+
+async function startWranglerRuntime(options: {
+  persistPath: string
+  syncEnabled?: boolean
+  schemaVersion?: number
+  collectionId?: string
+}): Promise<RuntimeProcessHarness> {
+  let child: ReturnType<typeof spawn> | undefined
+  let stdoutBuffer = ``
+  let stderrBuffer = ``
+  const port = await getAvailablePort()
+
+  const spawnProcess = async (): Promise<void> => {
+    const runtimeVarEntries = [
+      [
+        `PERSISTENCE_WITH_SYNC`,
+        options.syncEnabled !== undefined
+          ? String(options.syncEnabled)
+          : undefined,
+      ],
+      [
+        `PERSISTENCE_SCHEMA_VERSION`,
+        options.schemaVersion !== undefined
+          ? 
String(options.schemaVersion) + : undefined, + ], + [`PERSISTENCE_COLLECTION_ID`, options.collectionId], + ].filter((entry): entry is [string, string] => entry[1] !== undefined) + + const wranglerArgs = [ + `exec`, + `wrangler`, + `dev`, + `--local`, + `--ip`, + `127.0.0.1`, + `--port`, + String(port), + `--persist-to`, + options.persistPath, + `--config`, + wranglerConfigPath, + ...runtimeVarEntries.flatMap(([key, value]) => [ + `--var`, + `${key}:${value}`, + ]), + ] + + child = spawn(`pnpm`, wranglerArgs, { + cwd: packageDirectory, + env: { + ...process.env, + CI: `1`, + WRANGLER_SEND_METRICS: `false`, + }, + stdio: [`ignore`, `pipe`, `pipe`], + }) + + if (!child.stdout || !child.stderr) { + throw new Error(`Unable to capture wrangler dev process output streams`) + } + + child.stdout.on(`data`, (chunk: Buffer) => { + stdoutBuffer += chunk.toString() + }) + child.stderr.on(`data`, (chunk: Buffer) => { + stderrBuffer += chunk.toString() + }) + + const baseUrl = `http://127.0.0.1:${String(port)}` + const startAt = Date.now() + while (Date.now() - startAt < 45_000) { + if (child.exitCode !== null) { + throw createRuntimeError( + `Wrangler dev exited before becoming healthy`, + stderrBuffer, + stdoutBuffer, + ) + } + + try { + const healthResponse = await fetch(`${baseUrl}/health`) + if (healthResponse.ok) { + return + } + } catch { + // Runtime may still be starting. 
+      }
+
+      await delay(250)
+    }
+
+    throw createRuntimeError(
+      `Timed out waiting for wrangler dev runtime`,
+      stderrBuffer,
+      stdoutBuffer,
+    )
+  }
+
+  await spawnProcess()
+
+  return {
+    baseUrl: `http://127.0.0.1:${String(port)}`,
+    restart: async () => {
+      await stopWranglerProcess(child)
+      stdoutBuffer = ``
+      stderrBuffer = ``
+      await spawnProcess()
+    },
+    stop: async () => {
+      await stopWranglerProcess(child)
+    },
+  }
+}
+
+async function postJson<TPayload>(
+  baseUrl: string,
+  path: string,
+  body: unknown,
+): Promise<WranglerRuntimeResponse<TPayload>> {
+  const response = await fetch(`${baseUrl}${path}`, {
+    method: `POST`,
+    headers: {
+      'content-type': `application/json`,
+    },
+    body: JSON.stringify(body),
+  })
+
+  const parsed = (await response.json()) as WranglerRuntimeResponse<TPayload>
+  return parsed
+}
+
+function assertRuntimeError(
+  response: WranglerRuntimeResponse,
+): RuntimeBridgeE2EContractError {
+  if (!response.ok) {
+    return response.error
+  }
+
+  throw new Error(`Expected runtime call to fail, but it succeeded`)
+}
+
+function assertRuntimeSuccess<TPayload>(
+  response: WranglerRuntimeResponse<TPayload>,
+): TPayload | undefined {
+  if (response.ok) {
+    return response.rows
+  }
+
+  throw new Error(`${response.error.name}: ${response.error.message}`)
+}
+
+const createHarness: RuntimeBridgeE2EContractHarnessFactory = () => {
+  const tempDirectory = mkdtempSync(join(tmpdir(), `db-cloudflare-do-e2e-`))
+  const persistPath = join(tempDirectory, `wrangler-state`)
+  const collectionId = `todos`
+  let nextSequence = 1
+  const runtimePromise = startWranglerRuntime({
+    persistPath,
+  })
+
+  const harness: RuntimeBridgeE2EContractHarness = {
+    writeTodoFromClient: async (todo: RuntimeBridgeE2EContractTodo) => {
+      const runtime = await runtimePromise
+      const result = await postJson(runtime.baseUrl, `/write-todo`, {
+        collectionId,
+        todo,
+        txId: `tx-${nextSequence}`,
+        seq: nextSequence,
+        rowVersion: nextSequence,
+      })
+      nextSequence++
+
+      if (!result.ok) {
+        throw new Error(`${result.error.name}: 
${result.error.message}`) + } + }, + loadTodosFromClient: async (targetCollectionId?: string) => { + const runtime = await runtimePromise + const result = await postJson< + Array<{ key: string; value: RuntimeBridgeE2EContractTodo }> + >(runtime.baseUrl, `/load-todos`, { + collectionId: targetCollectionId ?? collectionId, + }) + if (!result.ok) { + throw new Error(`${result.error.name}: ${result.error.message}`) + } + return result.rows ?? [] + }, + loadUnknownCollectionErrorFromClient: + async (): Promise => { + const runtime = await runtimePromise + const result = await postJson( + runtime.baseUrl, + `/load-unknown-collection-error`, + { + collectionId: `missing`, + }, + ) + if (result.ok) { + throw new Error( + `Expected unknown collection request to fail, but it succeeded`, + ) + } + return result.error + }, + restartHost: async () => { + const runtime = await runtimePromise + await runtime.restart() + }, + cleanup: async () => { + try { + const runtime = await runtimePromise + await runtime.stop() + } finally { + rmSync(tempDirectory, { recursive: true, force: true }) + } + }, + } + + return harness +} + +runRuntimeBridgeE2EContractSuite( + `cloudflare durable object runtime bridge e2e (wrangler local)`, + createHarness, + { + testTimeoutMs: 90_000, + }, +) + +describe(`cloudflare durable object schema mismatch behavior (wrangler local)`, () => { + it(`throws on schema mismatch in sync-absent mode`, async () => { + const tempDirectory = mkdtempSync( + join(tmpdir(), `db-cloudflare-do-local-mismatch-e2e-`), + ) + const persistPath = join(tempDirectory, `wrangler-state`) + const collectionId = `todos` + let runtime = await startWranglerRuntime({ + persistPath, + syncEnabled: false, + schemaVersion: 1, + collectionId, + }) + + try { + const writeResult = await postJson(runtime.baseUrl, `/write-todo`, { + collectionId, + txId: `tx-1`, + seq: 1, + rowVersion: 1, + todo: { + id: `local-1`, + title: `Local mode row`, + score: 10, + }, + }) + 
assertRuntimeSuccess(writeResult) + } finally { + await runtime.stop() + } + + runtime = await startWranglerRuntime({ + persistPath, + syncEnabled: false, + schemaVersion: 2, + collectionId, + }) + try { + const loadResult = await postJson< + Array<{ key: string; value: RuntimeBridgeE2EContractTodo }> + >(runtime.baseUrl, `/load-todos`, { + collectionId, + }) + const runtimeError = assertRuntimeError(loadResult) + expect(runtimeError.message).toContain(`Schema version mismatch`) + } finally { + await runtime.stop() + rmSync(tempDirectory, { recursive: true, force: true }) + } + }, 90_000) + + it(`resets collection on schema mismatch in sync-present mode`, async () => { + const tempDirectory = mkdtempSync( + join(tmpdir(), `db-cloudflare-do-sync-mismatch-e2e-`), + ) + const persistPath = join(tempDirectory, `wrangler-state`) + const collectionId = `todos` + let runtime = await startWranglerRuntime({ + persistPath, + syncEnabled: true, + schemaVersion: 1, + collectionId, + }) + + try { + const writeResult = await postJson(runtime.baseUrl, `/write-todo`, { + collectionId, + txId: `tx-1`, + seq: 1, + rowVersion: 1, + todo: { + id: `sync-1`, + title: `Sync mode row`, + score: 20, + }, + }) + assertRuntimeSuccess(writeResult) + } finally { + await runtime.stop() + } + + runtime = await startWranglerRuntime({ + persistPath, + syncEnabled: true, + schemaVersion: 2, + collectionId, + }) + try { + const loadResult = await postJson< + Array<{ key: string; value: RuntimeBridgeE2EContractTodo }> + >(runtime.baseUrl, `/load-todos`, { + collectionId, + }) + const rows = assertRuntimeSuccess(loadResult) ?? 
[] + expect(rows).toEqual([]) + } finally { + await runtime.stop() + rmSync(tempDirectory, { recursive: true, force: true }) + } + }, 90_000) +}) diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/fixtures/worker.mjs b/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/fixtures/worker.mjs new file mode 100644 index 000000000..60b863e40 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/fixtures/worker.mjs @@ -0,0 +1,286 @@ +// @ts-nocheck +import { DurableObject } from 'cloudflare:workers' +import { createCollection } from '../../../db/dist/esm/index.js' +import { + createCloudflareDOSQLitePersistence, + persistedCollectionOptions, +} from '../../dist/esm/index.js' + +const DEFAULT_COLLECTION_ID = `todos` +const DEFAULT_SCHEMA_VERSION = 1 + +function resolveCollectionPersistence({ + persistence, + collectionId, + syncEnabled, + schemaVersion, +}) { + const mode = syncEnabled ? `sync-present` : `sync-absent` + return ( + persistence.resolvePersistenceForCollection?.({ + collectionId, + mode, + schemaVersion, + }) ?? + persistence.resolvePersistenceForMode?.(mode) ?? 
+ persistence + ) +} + +function parseSyncEnabled(rawValue) { + if (rawValue == null) { + return false + } + + const normalized = String(rawValue).toLowerCase() + if (normalized === `1` || normalized === `true`) { + return true + } + if (normalized === `0` || normalized === `false`) { + return false + } + + throw new Error(`Invalid PERSISTENCE_WITH_SYNC "${String(rawValue)}"`) +} + +function parseSchemaVersion(rawSchemaVersion) { + if (rawSchemaVersion == null) { + return DEFAULT_SCHEMA_VERSION + } + const parsed = Number(rawSchemaVersion) + if (Number.isInteger(parsed) && parsed >= 0) { + return parsed + } + throw new Error( + `Invalid PERSISTENCE_SCHEMA_VERSION "${String(rawSchemaVersion)}"`, + ) +} + +function jsonResponse(status, body) { + return new Response(JSON.stringify(body), { + status, + headers: { + 'content-type': 'application/json', + }, + }) +} + +function serializeError(error) { + if (error && typeof error === `object`) { + const maybeCode = error.code + return { + name: typeof error.name === `string` ? error.name : `Error`, + message: + typeof error.message === `string` + ? error.message + : `Unknown Cloudflare DO runtime error`, + code: typeof maybeCode === `string` ? maybeCode : undefined, + } + } + + return { + name: `Error`, + message: `Unknown Cloudflare DO runtime error`, + code: undefined, + } +} + +function createUnknownCollectionError(collectionId) { + const error = new Error( + `Unknown cloudflare durable object persistence collection "${collectionId}"`, + ) + error.name = `UnknownCloudflareDOPersistenceCollectionError` + error.code = `UNKNOWN_COLLECTION` + return error +} + +export class PersistenceObject extends DurableObject { + constructor(ctx, env) { + super(ctx, env) + this.collectionId = env.PERSISTENCE_COLLECTION_ID ?? 
DEFAULT_COLLECTION_ID + this.syncEnabled = parseSyncEnabled(env.PERSISTENCE_WITH_SYNC) + this.schemaVersion = parseSchemaVersion(env.PERSISTENCE_SCHEMA_VERSION) + this.persistence = createCloudflareDOSQLitePersistence({ + storage: this.ctx.storage, + }) + this.collectionPersistence = resolveCollectionPersistence({ + persistence: this.persistence, + collectionId: this.collectionId, + syncEnabled: this.syncEnabled, + schemaVersion: this.schemaVersion, + }) + this.ready = this.collectionPersistence.adapter.loadSubset( + this.collectionId, + { + limit: 0, + }, + ) + + const baseCollectionOptions = { + id: this.collectionId, + schemaVersion: this.schemaVersion, + getKey: (todo) => todo.id, + persistence: this.persistence, + } + this.collection = createCollection( + this.syncEnabled + ? persistedCollectionOptions({ + ...baseCollectionOptions, + sync: { + sync: ({ markReady }) => { + markReady() + }, + }, + }) + : persistedCollectionOptions(baseCollectionOptions), + ) + this.collectionReady = this.collection.stateWhenReady() + } + + async fetch(request) { + const url = new URL(request.url) + + try { + if (request.method === `GET` && url.pathname === `/health`) { + return jsonResponse(200, { + ok: true, + }) + } + + await this.ready + + if (request.method === `GET` && url.pathname === `/runtime-config`) { + return jsonResponse(200, { + ok: true, + collectionId: this.collectionId, + mode: this.syncEnabled ? `sync` : `local`, + syncEnabled: this.syncEnabled, + schemaVersion: this.schemaVersion, + }) + } + + const requestBody = await request.json() + const collectionId = requestBody.collectionId ?? this.collectionId + + if (request.method === `POST` && url.pathname === `/write-todo`) { + if (collectionId !== this.collectionId) { + throw createUnknownCollectionError(collectionId) + } + if (this.syncEnabled) { + const txId = + typeof requestBody.txId === `string` + ? requestBody.txId + : crypto.randomUUID() + const seq = + typeof requestBody.seq === `number` ? 
requestBody.seq : Date.now() + const rowVersion = + typeof requestBody.rowVersion === `number` + ? requestBody.rowVersion + : seq + await this.collectionPersistence.adapter.applyCommittedTx( + collectionId, + { + txId, + term: 1, + seq, + rowVersion, + mutations: [ + { + type: `insert`, + key: requestBody.todo.id, + value: requestBody.todo, + }, + ], + }, + ) + + return jsonResponse(200, { + ok: true, + }) + } + await this.collectionReady + const tx = this.collection.insert(requestBody.todo) + await tx.isPersisted.promise + + return jsonResponse(200, { + ok: true, + }) + } + + if (request.method === `POST` && url.pathname === `/load-todos`) { + if (collectionId !== this.collectionId) { + throw createUnknownCollectionError(collectionId) + } + if (this.syncEnabled) { + const rows = await this.collectionPersistence.adapter.loadSubset( + collectionId, + {}, + ) + return jsonResponse(200, { + ok: true, + rows: rows.map((row) => ({ + key: row.key, + value: row.value, + })), + }) + } + await this.collectionReady + const rows = this.collection.toArray.map((todo) => ({ + key: todo.id, + value: todo, + })) + return jsonResponse(200, { + ok: true, + rows, + }) + } + + if ( + request.method === `POST` && + url.pathname === `/load-unknown-collection-error` + ) { + const unknownCollectionId = requestBody.collectionId ?? 
`missing` + if (unknownCollectionId !== this.collectionId) { + throw createUnknownCollectionError(unknownCollectionId) + } + const rows = await this.persistence.adapter.loadSubset( + unknownCollectionId, + {}, + ) + return jsonResponse(200, { + ok: true, + rows, + }) + } + + return jsonResponse(404, { + ok: false, + error: { + name: `NotFound`, + message: `Unknown durable object endpoint "${url.pathname}"`, + code: `NOT_FOUND`, + }, + }) + } catch (error) { + return jsonResponse(500, { + ok: false, + error: serializeError(error), + }) + } + } +} + +export default { + async fetch(request, env) { + const url = new URL(request.url) + if (url.pathname === `/health`) { + return jsonResponse(200, { + ok: true, + }) + } + + const id = env.PERSISTENCE.idFromName(`default`) + const stub = env.PERSISTENCE.get(id) + return stub.fetch(request) + }, +} diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/fixtures/wrangler.toml b/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/fixtures/wrangler.toml new file mode 100644 index 000000000..ad9fabad1 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/e2e/fixtures/wrangler.toml @@ -0,0 +1,11 @@ +name = "tanstack-db-cloudflare-do-e2e" +main = "./worker.mjs" +compatibility_date = "2026-02-11" + +[[durable_objects.bindings]] +name = "PERSISTENCE" +class_name = "PersistenceObject" + +[[migrations]] +tag = "v1" +new_sqlite_classes = ["PersistenceObject"] diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/package.json b/packages/db-cloudflare-do-sqlite-persisted-collection/package.json new file mode 100644 index 000000000..44ed62eda --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/package.json @@ -0,0 +1,61 @@ +{ + "name": "@tanstack/db-cloudflare-do-sqlite-persisted-collection", + "version": "0.1.0", + "description": "Cloudflare Durable Object SQLite persisted collection adapter for TanStack DB", + "author": "TanStack Team", + "license": "MIT", + 
"repository": { + "type": "git", + "url": "git+https://github.com/TanStack/db.git", + "directory": "packages/db-cloudflare-do-sqlite-persisted-collection" + }, + "homepage": "https://tanstack.com/db", + "keywords": [ + "sqlite", + "cloudflare", + "durable-objects", + "persistence", + "typescript" + ], + "scripts": { + "build": "vite build", + "dev": "vite build --watch", + "lint": "eslint . --fix", + "test": "vitest --run", + "test:e2e": "pnpm --filter @tanstack/db-ivm build && pnpm --filter @tanstack/db build && pnpm --filter @tanstack/db-sqlite-persisted-collection-core build && pnpm --filter @tanstack/db-cloudflare-do-sqlite-persisted-collection build && vitest --config vitest.e2e.config.ts --run" + }, + "type": "module", + "main": "dist/cjs/index.cjs", + "module": "dist/esm/index.js", + "types": "dist/esm/index.d.ts", + "exports": { + ".": { + "import": { + "types": "./dist/esm/index.d.ts", + "default": "./dist/esm/index.js" + }, + "require": { + "types": "./dist/cjs/index.d.cts", + "default": "./dist/cjs/index.cjs" + } + }, + "./package.json": "./package.json" + }, + "sideEffects": false, + "files": [ + "dist", + "src" + ], + "dependencies": { + "@tanstack/db-sqlite-persisted-collection-core": "workspace:*" + }, + "peerDependencies": { + "typescript": ">=4.7" + }, + "devDependencies": { + "@types/better-sqlite3": "^7.6.13", + "@vitest/coverage-istanbul": "^3.2.4", + "better-sqlite3": "^12.6.2", + "wrangler": "^4.64.0" + } +} diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/src/do-driver.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/src/do-driver.ts new file mode 100644 index 000000000..4cd98bdf9 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/src/do-driver.ts @@ -0,0 +1,263 @@ +import { InvalidPersistedCollectionConfigError } from '@tanstack/db-sqlite-persisted-collection-core' +import type { SQLiteDriver } from '@tanstack/db-sqlite-persisted-collection-core' + +type DurableObjectSqlRow = Record + +type 
DurableObjectSqlCursorLike = Iterable<DurableObjectSqlRow> & {
+  toArray?: () => Array<DurableObjectSqlRow>
+}
+
+export type DurableObjectSqlStorageLike = {
+  exec: (
+    sql: string,
+    ...params: ReadonlyArray<unknown>
+  ) => DurableObjectSqlCursorLike | ReadonlyArray<DurableObjectSqlRow> | null
+}
+
+export type DurableObjectTransactionExecutor = <T>(
+  fn: () => Promise<T>,
+) => Promise<T>
+
+export type DurableObjectStorageLike = {
+  sql: DurableObjectSqlStorageLike
+  transaction?: DurableObjectTransactionExecutor
+}
+
+type CloudflareDOProvidedSqlOptions = {
+  sql: DurableObjectSqlStorageLike
+  transaction?: DurableObjectTransactionExecutor
+}
+
+type CloudflareDOProvidedStorageOptions = {
+  storage: DurableObjectStorageLike
+}
+
+export type CloudflareDOSQLiteDriverOptions =
+  | CloudflareDOProvidedSqlOptions
+  | CloudflareDOProvidedStorageOptions
+
+function assertTransactionCallbackHasDriverArg(
+  fn: (transactionDriver: SQLiteDriver) => Promise<unknown>,
+): void {
+  if (fn.length > 0) {
+    return
+  }
+
+  throw new InvalidPersistedCollectionConfigError(
+    `SQLiteDriver.transaction callback must accept the transaction driver argument`,
+  )
+}
+
+function isIterableRecord(
+  value: unknown,
+): value is Iterable<DurableObjectSqlRow> {
+  if (!value || typeof value !== `object`) {
+    return false
+  }
+
+  const iterator = (value as { [Symbol.iterator]?: unknown })[Symbol.iterator]
+  return typeof iterator === `function`
+}
+
+function toRowArray<TRow extends DurableObjectSqlRow>(
+  result: ReturnType<DurableObjectSqlStorageLike[`exec`]>,
+  sql: string,
+): ReadonlyArray<TRow> {
+  if (result == null) {
+    return []
+  }
+
+  if (Array.isArray(result)) {
+    return result as ReadonlyArray<TRow>
+  }
+
+  const cursorResult = result as DurableObjectSqlCursorLike
+  if (typeof cursorResult.toArray === `function`) {
+    return cursorResult.toArray() as ReadonlyArray<TRow>
+  }
+
+  if (isIterableRecord(cursorResult)) {
+    return Array.from(cursorResult as Iterable<TRow>)
+  }
+
+  throw new InvalidPersistedCollectionConfigError(
+    `Unsupported Durable Object SQL result shape for query "${sql}"`,
+  )
+}
+
+export class CloudflareDOSQLiteDriver implements SQLiteDriver {
+  private 
readonly sqlStorage: DurableObjectSqlStorageLike
+  private readonly storage: DurableObjectStorageLike
+  private readonly transactionExecutor: DurableObjectTransactionExecutor | null
+  private queue: Promise<unknown> = Promise.resolve()
+  private nextSavepointId = 1
+
+  constructor(options: CloudflareDOSQLiteDriverOptions) {
+    const resolvedStorage: DurableObjectStorageLike =
+      `storage` in options
+        ? options.storage
+        : {
+            sql: options.sql,
+            ...(typeof options.transaction === `function`
+              ? { transaction: options.transaction }
+              : {}),
+          }
+    const resolvedSqlStorage = resolvedStorage.sql
+    if (typeof resolvedSqlStorage.exec !== `function`) {
+      throw new InvalidPersistedCollectionConfigError(
+        `Cloudflare DO SQL driver requires a sql.exec function`,
+      )
+    }
+    this.storage = resolvedStorage
+    this.sqlStorage = resolvedSqlStorage
+    if (typeof resolvedStorage.transaction === `function`) {
+      const transactionMethod = resolvedStorage.transaction
+      this.transactionExecutor = <T>(fn: () => Promise<T>) =>
+        Promise.resolve(
+          transactionMethod.call(resolvedStorage, fn) as Promise<T> | T,
+        )
+    } else {
+      this.transactionExecutor = null
+    }
+  }
+
+  async exec(sql: string): Promise<void> {
+    await this.enqueue(() => {
+      this.execute(sql)
+    })
+  }
+
+  async query<TRow extends DurableObjectSqlRow = DurableObjectSqlRow>(
+    sql: string,
+    params: ReadonlyArray<unknown> = [],
+  ): Promise<ReadonlyArray<TRow>> {
+    return this.enqueue(() => this.executeQuery<TRow>(sql, params))
+  }
+
+  async run(sql: string, params: ReadonlyArray<unknown> = []): Promise<void> {
+    await this.enqueue(() => {
+      this.execute(sql, params)
+    })
+  }
+
+  async transaction<T>(
+    fn: (transactionDriver: SQLiteDriver) => Promise<T>,
+  ): Promise<T> {
+    assertTransactionCallbackHasDriverArg(fn)
+
+    return this.enqueue(async () => {
+      const transactionDriver = this.createTransactionDriver()
+      if (this.transactionExecutor) {
+        return this.transactionExecutor(() => fn(transactionDriver))
+      }
+
+      this.execute(`BEGIN IMMEDIATE`)
+      try {
+        const result = await fn(transactionDriver)
+        this.execute(`COMMIT`)
+        return result
+      } catch (error) {
+        try { 
+          this.execute(`ROLLBACK`)
+        } catch {
+          // Keep the original transaction error as the primary failure.
+        }
+        throw error
+      }
+    })
+  }
+
+  async transactionWithDriver<T>(
+    fn: (transactionDriver: SQLiteDriver) => Promise<T>,
+  ): Promise<T> {
+    return this.transaction(fn)
+  }
+
+  getStorage(): DurableObjectStorageLike {
+    return this.storage
+  }
+
+  private execute(sql: string, params: ReadonlyArray<unknown> = []): unknown {
+    return this.sqlStorage.exec(sql, ...params)
+  }
+
+  private executeQuery<TRow extends DurableObjectSqlRow = DurableObjectSqlRow>(
+    sql: string,
+    params: ReadonlyArray<unknown>,
+  ): ReadonlyArray<TRow> {
+    const result = this.execute(sql, params)
+    return toRowArray<TRow>(
+      result as ReturnType<DurableObjectSqlStorageLike[`exec`]>,
+      sql,
+    )
+  }
+
+  private enqueue<T>(operation: () => Promise<T> | T): Promise<T> {
+    const queuedOperation = this.queue.then(operation, operation)
+    this.queue = queuedOperation.then(
+      () => undefined,
+      () => undefined,
+    )
+    return queuedOperation
+  }
+
+  private createTransactionDriver(): SQLiteDriver {
+    const transactionDriver: SQLiteDriver = {
+      exec: (sql) => {
+        this.execute(sql)
+        return Promise.resolve()
+      },
+      query: <TRow extends DurableObjectSqlRow = DurableObjectSqlRow>(
+        sql: string,
+        params: ReadonlyArray<unknown> = [],
+      ): Promise<ReadonlyArray<TRow>> =>
+        Promise.resolve(this.executeQuery<TRow>(sql, params)),
+      run: (sql, params = []) => {
+        this.execute(sql, params)
+        return Promise.resolve()
+      },
+      transaction: <T>(
+        fn: (nestedDriver: SQLiteDriver) => Promise<T>,
+      ): Promise<T> => {
+        assertTransactionCallbackHasDriverArg(fn)
+        return this.runNestedTransaction(transactionDriver, fn)
+      },
+      transactionWithDriver: <T>(
+        fn: (nestedDriver: SQLiteDriver) => Promise<T>,
+      ): Promise<T> => this.runNestedTransaction(transactionDriver, fn),
+    }
+
+    return transactionDriver
+  }
+
+  private async runNestedTransaction<T>(
+    transactionDriver: SQLiteDriver,
+    fn: (nestedDriver: SQLiteDriver) => Promise<T>,
+  ): Promise<T> {
+    if (this.transactionExecutor) {
+      throw new InvalidPersistedCollectionConfigError(
+        `Nested SQL savepoints are not supported when using Durable Object transaction API`,
+      )
+    }
+
+    const savepointName = 
`tsdb_sp_${this.nextSavepointId}` + this.nextSavepointId++ + this.execute(`SAVEPOINT ${savepointName}`) + + try { + const result = await fn(transactionDriver) + this.execute(`RELEASE SAVEPOINT ${savepointName}`) + return result + } catch (error) { + this.execute(`ROLLBACK TO SAVEPOINT ${savepointName}`) + this.execute(`RELEASE SAVEPOINT ${savepointName}`) + throw error + } + } +} + +export function createCloudflareDOSQLiteDriver( + options: CloudflareDOSQLiteDriverOptions, +): CloudflareDOSQLiteDriver { + return new CloudflareDOSQLiteDriver(options) +} diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/src/do-persistence.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/src/do-persistence.ts new file mode 100644 index 000000000..d33302847 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/src/do-persistence.ts @@ -0,0 +1,166 @@ +import { + SingleProcessCoordinator, + createSQLiteCorePersistenceAdapter, +} from '@tanstack/db-sqlite-persisted-collection-core' +import { CloudflareDOSQLiteDriver } from './do-driver' +import type { + PersistedCollectionCoordinator, + PersistedCollectionMode, + PersistedCollectionPersistence, + SQLiteCoreAdapterOptions, + SQLiteDriver, +} from '@tanstack/db-sqlite-persisted-collection-core' +import type { DurableObjectStorageLike } from './do-driver' + +export type { DurableObjectStorageLike } from './do-driver' + +type CloudflareDOCoreSchemaMismatchPolicy = + | `sync-present-reset` + | `sync-absent-error` + | `reset` + +export type CloudflareDOSchemaMismatchPolicy = + | CloudflareDOCoreSchemaMismatchPolicy + | `throw` + +type CloudflareDOSQLitePersistenceBaseOptions = Omit< + SQLiteCoreAdapterOptions, + `driver` | `schemaVersion` | `schemaMismatchPolicy` +> & { + storage: DurableObjectStorageLike + coordinator?: PersistedCollectionCoordinator + schemaMismatchPolicy?: CloudflareDOSchemaMismatchPolicy +} + +export type CloudflareDOSQLitePersistenceOptions = + 
CloudflareDOSQLitePersistenceBaseOptions + +function normalizeSchemaMismatchPolicy( + policy: CloudflareDOSchemaMismatchPolicy, +): CloudflareDOCoreSchemaMismatchPolicy { + if (policy === `throw`) { + return `sync-absent-error` + } + return policy +} + +function resolveSchemaMismatchPolicy( + explicitPolicy: CloudflareDOSchemaMismatchPolicy | undefined, + mode: PersistedCollectionMode, +): CloudflareDOCoreSchemaMismatchPolicy { + if (explicitPolicy) { + return normalizeSchemaMismatchPolicy(explicitPolicy) + } + + return mode === `sync-present` ? `sync-present-reset` : `sync-absent-error` +} + +function createAdapterCacheKey( + schemaMismatchPolicy: CloudflareDOCoreSchemaMismatchPolicy, + schemaVersion: number | undefined, +): string { + const schemaVersionKey = + schemaVersion === undefined ? `schema:default` : `schema:${schemaVersion}` + return `${schemaMismatchPolicy}|${schemaVersionKey}` +} + +function resolveSQLiteDriver( + options: CloudflareDOSQLitePersistenceOptions, +): SQLiteDriver { + return new CloudflareDOSQLiteDriver({ + storage: options.storage, + }) +} + +function resolveAdapterBaseOptions( + options: CloudflareDOSQLitePersistenceOptions, +): Omit< + SQLiteCoreAdapterOptions, + `driver` | `schemaVersion` | `schemaMismatchPolicy` +> { + return { + appliedTxPruneMaxRows: options.appliedTxPruneMaxRows, + appliedTxPruneMaxAgeSeconds: options.appliedTxPruneMaxAgeSeconds, + pullSinceReloadThreshold: options.pullSinceReloadThreshold, + } +} + +/** + * Creates a shared Durable Object SQLite persistence instance that can be reused + * by many collections in a single Durable Object storage. 
+ */
+export function createCloudflareDOSQLitePersistence<
+  T extends object,
+  TKey extends string | number = string | number,
+>(
+  options: CloudflareDOSQLitePersistenceOptions,
+): PersistedCollectionPersistence<T, TKey> {
+  const { coordinator, schemaMismatchPolicy } = options
+  const driver = resolveSQLiteDriver(options)
+  const adapterBaseOptions = resolveAdapterBaseOptions(options)
+  const resolvedCoordinator = coordinator ?? new SingleProcessCoordinator()
+  const adapterCache = new Map<
+    string,
+    ReturnType<
+      typeof createSQLiteCorePersistenceAdapter<
+        Record<string, unknown>,
+        string | number
+      >
+    >
+  >()
+
+  const getAdapterForCollection = (
+    mode: PersistedCollectionMode,
+    schemaVersion: number | undefined,
+  ) => {
+    const resolvedSchemaMismatchPolicy = resolveSchemaMismatchPolicy(
+      schemaMismatchPolicy,
+      mode,
+    )
+    const cacheKey = createAdapterCacheKey(
+      resolvedSchemaMismatchPolicy,
+      schemaVersion,
+    )
+    const cachedAdapter = adapterCache.get(cacheKey)
+    if (cachedAdapter) {
+      return cachedAdapter
+    }
+
+    const adapter = createSQLiteCorePersistenceAdapter<
+      Record<string, unknown>,
+      string | number
+    >({
+      ...adapterBaseOptions,
+      driver,
+      schemaMismatchPolicy: resolvedSchemaMismatchPolicy,
+      ...(schemaVersion === undefined ? {} : { schemaVersion }),
+    })
+    adapterCache.set(cacheKey, adapter)
+    return adapter
+  }
+
+  const createCollectionPersistence = (
+    mode: PersistedCollectionMode,
+    schemaVersion: number | undefined,
+  ): PersistedCollectionPersistence<T, TKey> => ({
+    adapter: getAdapterForCollection(
+      mode,
+      schemaVersion,
+    ) as unknown as PersistedCollectionPersistence<T, TKey>[`adapter`],
+    coordinator: resolvedCoordinator,
+  })
+
+  const defaultPersistence = createCollectionPersistence(
+    `sync-absent`,
+    undefined,
+  )
+
+  return {
+    ...defaultPersistence,
+    resolvePersistenceForCollection: ({ mode, schemaVersion }) =>
+      createCollectionPersistence(mode, schemaVersion),
+    // Backward compatible fallback for older callers. 
+ resolvePersistenceForMode: (mode) => + createCollectionPersistence(mode, undefined), + } +} diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/src/index.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/src/index.ts new file mode 100644 index 000000000..b7c212b0c --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/src/index.ts @@ -0,0 +1,11 @@ +export { createCloudflareDOSQLitePersistence } from './do-persistence' +export type { + CloudflareDOSchemaMismatchPolicy, + CloudflareDOSQLitePersistenceOptions, + DurableObjectStorageLike, +} from './do-persistence' +export { persistedCollectionOptions } from '@tanstack/db-sqlite-persisted-collection-core' +export type { + PersistedCollectionCoordinator, + PersistedCollectionPersistence, +} from '@tanstack/db-sqlite-persisted-collection-core' diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-driver.test.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-driver.test.ts new file mode 100644 index 000000000..e5c47f4f0 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-driver.test.ts @@ -0,0 +1,100 @@ +import { mkdtempSync, rmSync } from 'node:fs' +import { tmpdir } from 'node:os' +import { join } from 'node:path' +import { describe, expect, it } from 'vitest' +import { runSQLiteDriverContractSuite } from '../../db-sqlite-persisted-collection-core/tests/contracts/sqlite-driver-contract' +import { CloudflareDOSQLiteDriver } from '../src/do-driver' +import { InvalidPersistedCollectionConfigError } from '../../db-sqlite-persisted-collection-core/src' +import { createBetterSqliteDoStorageHarness } from './helpers/better-sqlite-do-storage' +import type { SQLiteDriverContractHarness } from '../../db-sqlite-persisted-collection-core/tests/contracts/sqlite-driver-contract' + +function createDriverHarness(): SQLiteDriverContractHarness { + const tempDirectory = mkdtempSync(join(tmpdir(), `db-cf-do-driver-`)) + 
const dbPath = join(tempDirectory, `state.sqlite`)
+  const storageHarness = createBetterSqliteDoStorageHarness({
+    filename: dbPath,
+  })
+  const driver = new CloudflareDOSQLiteDriver({
+    storage: storageHarness.storage,
+  })
+
+  return {
+    driver,
+    cleanup: () => {
+      try {
+        storageHarness.close()
+      } finally {
+        rmSync(tempDirectory, { recursive: true, force: true })
+      }
+    },
+  }
+}
+
+runSQLiteDriverContractSuite(
+  `cloudflare durable object sqlite driver`,
+  createDriverHarness,
+)
+
+describe(`cloudflare durable object sqlite driver (native transaction mode)`, () => {
+  it(`uses storage.transaction when available`, async () => {
+    const executedSql = new Array<string>()
+    let transactionCalls = 0
+    const driver = new CloudflareDOSQLiteDriver({
+      storage: {
+        sql: {
+          exec: (sql) => {
+            executedSql.push(sql)
+            if (sql.startsWith(`SELECT`)) {
+              return [{ value: 1 }]
+            }
+            return []
+          },
+        },
+        transaction: async (fn) => {
+          transactionCalls++
+          return fn()
+        },
+      },
+    })
+
+    await driver.transaction(async (transactionDriver) => {
+      await transactionDriver.run(`INSERT INTO todos (id) VALUES (?)`, [`1`])
+      const rows = await transactionDriver.query<{ value: number }>(
+        `SELECT 1 AS value`,
+      )
+      expect(rows).toEqual([{ value: 1 }])
+    })
+
+    expect(transactionCalls).toBe(1)
+    expect(executedSql).toContain(`INSERT INTO todos (id) VALUES (?)`)
+    expect(executedSql).not.toContain(`BEGIN IMMEDIATE`)
+    expect(executedSql).not.toContain(`COMMIT`)
+  })
+
+  it(`throws a clear error for nested transactions in native transaction mode`, async () => {
+    const driver = new CloudflareDOSQLiteDriver({
+      storage: {
+        sql: {
+          exec: () => [],
+        },
+        transaction: async (fn) => fn(),
+      },
+    })
+
+    await expect(
+      driver.transaction(async (transactionDriver) =>
+        transactionDriver.transaction((_nestedDriver) =>
+          Promise.resolve(undefined),
+        ),
+      ),
+    ).rejects.toBeInstanceOf(InvalidPersistedCollectionConfigError)
+
+    await expect(
+      driver.transaction(async (transactionDriver) => 
transactionDriver.transaction((_nestedDriver) =>
+          Promise.resolve(undefined),
+        ),
+      ),
+    ).rejects.toThrow(`Nested SQL savepoints are not supported`)
+  })
+})
diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-persistence.test.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-persistence.test.ts
new file mode 100644
index 000000000..2f2f58712
--- /dev/null
+++ b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-persistence.test.ts
@@ -0,0 +1,182 @@
+import { mkdtempSync, rmSync } from 'node:fs'
+import { tmpdir } from 'node:os'
+import { join } from 'node:path'
+import { describe, expect, it } from 'vitest'
+import {
+  createCloudflareDOSQLitePersistence,
+  persistedCollectionOptions,
+} from '../src'
+import { CloudflareDOSQLiteDriver } from '../src/do-driver'
+import { SingleProcessCoordinator } from '../../db-sqlite-persisted-collection-core/src'
+import { runRuntimePersistenceContractSuite } from '../../db-sqlite-persisted-collection-core/tests/contracts/runtime-persistence-contract'
+import { createBetterSqliteDoStorageHarness } from './helpers/better-sqlite-do-storage'
+import type {
+  RuntimePersistenceContractTodo,
+  RuntimePersistenceDatabaseHarness,
+} from '../../db-sqlite-persisted-collection-core/tests/contracts/runtime-persistence-contract'
+
+function createRuntimeDatabaseHarness(): RuntimePersistenceDatabaseHarness {
+  const tempDirectory = mkdtempSync(join(tmpdir(), `db-cf-do-persistence-`))
+  const dbPath = join(tempDirectory, `state.sqlite`)
+  const activeStorageHarnesses = new Set<
+    ReturnType<typeof createBetterSqliteDoStorageHarness>
+  >()
+
+  return {
+    createDriver: () => {
+      const storageHarness = createBetterSqliteDoStorageHarness({
+        filename: dbPath,
+      })
+      activeStorageHarnesses.add(storageHarness)
+      return new CloudflareDOSQLiteDriver({
+        storage: storageHarness.storage,
+      })
+    },
+    cleanup: () => {
+      for (const storageHarness of activeStorageHarnesses) {
+        try {
+          storageHarness.close()
+        } catch {
+          // ignore cleanup 
errors from already-closed handles + } + } + activeStorageHarnesses.clear() + rmSync(tempDirectory, { recursive: true, force: true }) + }, + } +} + +runRuntimePersistenceContractSuite( + `cloudflare durable object runtime helpers`, + { + createDatabaseHarness: createRuntimeDatabaseHarness, + createAdapter: (driver) => + createCloudflareDOSQLitePersistence< + RuntimePersistenceContractTodo, + string + >({ + storage: (driver as CloudflareDOSQLiteDriver).getStorage(), + }).adapter, + createPersistence: (driver, coordinator) => + createCloudflareDOSQLitePersistence< + RuntimePersistenceContractTodo, + string + >({ + storage: (driver as CloudflareDOSQLiteDriver).getStorage(), + coordinator, + }), + createCoordinator: () => new SingleProcessCoordinator(), + }, +) + +describe(`cloudflare durable object persistence helpers`, () => { + it(`defaults coordinator to SingleProcessCoordinator`, () => { + const runtimeHarness = createRuntimeDatabaseHarness() + const driver = runtimeHarness.createDriver() + + try { + const persistence = createCloudflareDOSQLitePersistence({ + storage: (driver as CloudflareDOSQLiteDriver).getStorage(), + }) + expect(persistence.coordinator).toBeInstanceOf(SingleProcessCoordinator) + } finally { + runtimeHarness.cleanup() + } + }) + + it(`infers mode from sync presence and keeps schema per collection`, async () => { + const tempDirectory = mkdtempSync(join(tmpdir(), `db-cf-do-schema-infer-`)) + const dbPath = join(tempDirectory, `state.sqlite`) + const collectionId = `todos` + const firstStorageHarness = createBetterSqliteDoStorageHarness({ + filename: dbPath, + }) + const firstPersistence = createCloudflareDOSQLitePersistence< + RuntimePersistenceContractTodo, + string + >({ + storage: firstStorageHarness.storage, + }) + + try { + const firstCollectionOptions = persistedCollectionOptions< + RuntimePersistenceContractTodo, + string + >({ + id: collectionId, + schemaVersion: 1, + getKey: (todo) => todo.id, + persistence: firstPersistence, + }) + 
await firstCollectionOptions.persistence.adapter.applyCommittedTx( + collectionId, + { + txId: `tx-1`, + term: 1, + seq: 1, + rowVersion: 1, + mutations: [ + { + type: `insert`, + key: `1`, + value: { + id: `1`, + title: `before mismatch`, + score: 1, + }, + }, + ], + }, + ) + } finally { + firstStorageHarness.close() + } + + const secondStorageHarness = createBetterSqliteDoStorageHarness({ + filename: dbPath, + }) + const secondPersistence = createCloudflareDOSQLitePersistence< + RuntimePersistenceContractTodo, + string + >({ + storage: secondStorageHarness.storage, + }) + try { + const syncAbsentOptions = persistedCollectionOptions< + RuntimePersistenceContractTodo, + string + >({ + id: collectionId, + schemaVersion: 2, + getKey: (todo) => todo.id, + persistence: secondPersistence, + }) + await expect( + syncAbsentOptions.persistence.adapter.loadSubset(collectionId, {}), + ).rejects.toThrow(`Schema version mismatch`) + + const syncPresentOptions = persistedCollectionOptions< + RuntimePersistenceContractTodo, + string + >({ + id: collectionId, + schemaVersion: 2, + getKey: (todo) => todo.id, + sync: { + sync: ({ markReady }) => { + markReady() + }, + }, + persistence: secondPersistence, + }) + const rows = await syncPresentOptions.persistence.adapter.loadSubset( + collectionId, + {}, + ) + expect(rows).toEqual([]) + } finally { + secondStorageHarness.close() + rmSync(tempDirectory, { recursive: true, force: true }) + } + }) +}) diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-sqlite-core-adapter-contract.test.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-sqlite-core-adapter-contract.test.ts new file mode 100644 index 000000000..2e0c866e8 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/do-sqlite-core-adapter-contract.test.ts @@ -0,0 +1,46 @@ +import { mkdtempSync, rmSync } from 'node:fs' +import { tmpdir } from 'node:os' +import { join } from 'node:path' +import { 
runSQLiteCoreAdapterContractSuite } from '../../db-sqlite-persisted-collection-core/tests/contracts/sqlite-core-adapter-contract' +import { CloudflareDOSQLiteDriver } from '../src/do-driver' +import { SQLiteCorePersistenceAdapter } from '../../db-sqlite-persisted-collection-core/src' +import { createBetterSqliteDoStorageHarness } from './helpers/better-sqlite-do-storage' +import type { + SQLiteCoreAdapterContractTodo, + SQLiteCoreAdapterHarnessFactory, +} from '../../db-sqlite-persisted-collection-core/tests/contracts/sqlite-core-adapter-contract' + +const createHarness: SQLiteCoreAdapterHarnessFactory = (options) => { + const tempDirectory = mkdtempSync(join(tmpdir(), `db-cf-do-sql-core-`)) + const dbPath = join(tempDirectory, `state.sqlite`) + const storageHarness = createBetterSqliteDoStorageHarness({ + filename: dbPath, + }) + const driver = new CloudflareDOSQLiteDriver({ + storage: storageHarness.storage, + }) + const adapter = new SQLiteCorePersistenceAdapter< + SQLiteCoreAdapterContractTodo, + string + >({ + driver, + ...options, + }) + + return { + adapter, + driver, + cleanup: () => { + try { + storageHarness.close() + } finally { + rmSync(tempDirectory, { recursive: true, force: true }) + } + }, + } +} + +runSQLiteCoreAdapterContractSuite( + `SQLiteCorePersistenceAdapter (cloudflare do sqlite driver harness)`, + createHarness, +) diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/tests/helpers/better-sqlite-do-storage.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/helpers/better-sqlite-do-storage.ts new file mode 100644 index 000000000..c0d5e354c --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/tests/helpers/better-sqlite-do-storage.ts @@ -0,0 +1,63 @@ +import BetterSqlite3 from 'better-sqlite3' +import type { + DurableObjectSqlStorageLike, + DurableObjectStorageLike, +} from '../../src/do-driver' + +type BetterSqliteDoStorageHarness = { + sql: DurableObjectSqlStorageLike + storage: 
DurableObjectStorageLike
+  close: () => void
+}
+
+type BetterSqliteStatement = ReturnType<BetterSqlite3.Database[`prepare`]>
+
+function readRows(
+  statement: BetterSqliteStatement,
+  params: ReadonlyArray<unknown>,
+) {
+  const statementWithVariadicIterate = statement as BetterSqliteStatement & {
+    iterate: (...params: ReadonlyArray<unknown>) => Iterable<unknown>
+  }
+  return statementWithVariadicIterate.iterate(...params) as Iterable<
+    Record<string, unknown>
+  >
+}
+
+function runStatement(
+  statement: BetterSqliteStatement,
+  params: ReadonlyArray<unknown>,
+): void {
+  const statementWithVariadicRun = statement as BetterSqliteStatement & {
+    run: (...params: ReadonlyArray<unknown>) => unknown
+  }
+  statementWithVariadicRun.run(...params)
+}
+
+export function createBetterSqliteDoStorageHarness(options: {
+  filename: string
+}): BetterSqliteDoStorageHarness {
+  const database = new BetterSqlite3(options.filename)
+
+  const sql: DurableObjectSqlStorageLike = {
+    exec: (sqlText, ...params) => {
+      const statement = database.prepare(sqlText)
+      if (statement.reader) {
+        return readRows(statement, params)
+      }
+      runStatement(statement, params)
+      return []
+    },
+  }
+  const storage: DurableObjectStorageLike = {
+    sql,
+  }
+
+  return {
+    sql,
+    storage,
+    close: () => {
+      database.close()
+    },
+  }
+}
diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/tsconfig.docs.json b/packages/db-cloudflare-do-sqlite-persisted-collection/tsconfig.docs.json
new file mode 100644
index 000000000..5fddb4598
--- /dev/null
+++ b/packages/db-cloudflare-do-sqlite-persisted-collection/tsconfig.docs.json
@@ -0,0 +1,12 @@
+{
+  "extends": "./tsconfig.json",
+  "compilerOptions": {
+    "paths": {
+      "@tanstack/db": ["../db/src"],
+      "@tanstack/db-sqlite-persisted-collection-core": [
+        "../db-sqlite-persisted-collection-core/src"
+      ]
+    }
+  },
+  "include": ["src"]
+}
diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/tsconfig.json b/packages/db-cloudflare-do-sqlite-persisted-collection/tsconfig.json
new file mode 100644
index 000000000..97ec70305
--- /dev/null
+++ 
b/packages/db-cloudflare-do-sqlite-persisted-collection/tsconfig.json @@ -0,0 +1,30 @@ +{ + "extends": "../../tsconfig.json", + "compilerOptions": { + "target": "ES2020", + "module": "ESNext", + "moduleResolution": "Bundler", + "declaration": true, + "outDir": "dist", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true, + "jsx": "react", + "paths": { + "@tanstack/db": ["../db/src"], + "@tanstack/db-ivm": ["../db-ivm/src"], + "@tanstack/db-sqlite-persisted-collection-core": [ + "../db-sqlite-persisted-collection-core/src" + ] + } + }, + "include": [ + "src", + "tests", + "e2e/**/*.e2e.test.ts", + "vite.config.ts", + "vitest.e2e.config.ts" + ], + "exclude": ["node_modules", "dist"] +} diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/vite.config.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/vite.config.ts new file mode 100644 index 000000000..ea27c667a --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/vite.config.ts @@ -0,0 +1,24 @@ +import { defineConfig, mergeConfig } from 'vitest/config' +import { tanstackViteConfig } from '@tanstack/vite-config' +import packageJson from './package.json' + +const config = defineConfig({ + test: { + name: packageJson.name, + include: [`tests/**/*.test.ts`], + environment: `node`, + coverage: { enabled: true, provider: `istanbul`, include: [`src/**/*`] }, + typecheck: { + enabled: true, + include: [`tests/**/*.test.ts`, `tests/**/*.test-d.ts`], + }, + }, +}) + +export default mergeConfig( + config, + tanstackViteConfig({ + entry: `./src/index.ts`, + srcDir: `./src`, + }), +) diff --git a/packages/db-cloudflare-do-sqlite-persisted-collection/vitest.e2e.config.ts b/packages/db-cloudflare-do-sqlite-persisted-collection/vitest.e2e.config.ts new file mode 100644 index 000000000..b17779a39 --- /dev/null +++ b/packages/db-cloudflare-do-sqlite-persisted-collection/vitest.e2e.config.ts @@ -0,0 +1,31 @@ +import { dirname, resolve 
} from 'node:path' +import { fileURLToPath } from 'node:url' +import { defineConfig } from 'vitest/config' + +const packageDirectory = dirname(fileURLToPath(import.meta.url)) + +export default defineConfig({ + resolve: { + alias: { + '@tanstack/db': resolve(packageDirectory, `../db/src`), + '@tanstack/db-ivm': resolve(packageDirectory, `../db-ivm/src`), + '@tanstack/db-sqlite-persisted-collection-core': resolve( + packageDirectory, + `../db-sqlite-persisted-collection-core/src`, + ), + }, + }, + test: { + include: [`e2e/**/*.e2e.test.ts`], + fileParallelism: false, + testTimeout: 90_000, + hookTimeout: 120_000, + environment: `node`, + typecheck: { + enabled: false, + }, + coverage: { + enabled: false, + }, + }, +}) diff --git a/packages/db-sqlite-persisted-collection-core/tests/contracts/runtime-bridge-e2e-contract.ts b/packages/db-sqlite-persisted-collection-core/tests/contracts/runtime-bridge-e2e-contract.ts index 38f9668fc..b5007a5e9 100644 --- a/packages/db-sqlite-persisted-collection-core/tests/contracts/runtime-bridge-e2e-contract.ts +++ b/packages/db-sqlite-persisted-collection-core/tests/contracts/runtime-bridge-e2e-contract.ts @@ -65,11 +65,11 @@ export function runRuntimeBridgeE2EContractSuite( expect(rows).toEqual([ { key: `1`, - value: { + value: expect.objectContaining({ id: `1`, title: `From bridge client`, score: 10, - }, + }), }, ]) }) diff --git a/persistance-plan/README.md b/persistance-plan/README.md new file mode 100644 index 000000000..142792285 --- /dev/null +++ b/persistance-plan/README.md @@ -0,0 +1,49 @@ +# Persistence Plan - Phase Breakdown + +This folder contains the detailed execution plan for the SQLite-only persisted collection architecture. + +## Phase Files + +1. [Phase 0 - API + Runtime Feasibility](./phase-0-api-runtime-feasibility.md) +2. [Phase 1 - Index Lifecycle Events in `@tanstack/db`](./phase-1-index-lifecycle-events.md) +3. [Phase 2 - Core Persisted Wrapper](./phase-2-core-persisted-wrapper.md) +4. 
[Phase 3 - SQLite Core Adapter](./phase-3-sqlite-core-adapter.md) +5. [Phase 4 - Node + Electron](./phase-4-node-electron.md) +6. [Phase 5 - React Native + Expo](./phase-5-react-native-expo.md) +7. [Phase 6 - Cloudflare Durable Objects](./phase-6-cloudflare-durable-objects.md) +8. [Phase 7 - Browser Single-Tab (OPFS)](./phase-7-browser-single-tab.md) +9. [Phase 8 - Browser Multi-Tab Coordinator](./phase-8-browser-multi-tab.md) + +## Delivery Principles + +- SQLite-only persistence architecture across all runtimes. +- Collection-scoped leadership with DB-level write serialization. +- Local-first `loadSubset` behavior in both sync-present and sync-absent modes. +- One shared contract test suite across adapters. +- Browser multi-tab is intentionally the final rollout gate. + +## Suggested Milestone Gates + +- **Gate A (Core Semantics):** Phases 0-2 complete. +- **Gate B (Storage Correctness):** Phase 3 complete with contract tests green. +- **Gate C (Runtime Parity):** Phases 4-6 complete. +- **Gate D (Browser Readiness):** Phases 7-8 complete with integration tests. + +## Agent Guard Rails + +Use these rules when implementing any phase: + +1. No work is complete without tests in the same change. +2. Do not advance phases unless current-phase exit criteria and CI are green. +3. For query operators (`IN`, `AND`, `OR`, `LIKE`, date/datetime), always test: + - SQL pushdown path + - fallback filtering path +4. `IN` is mandatory for v1 incremental join loading: + - cover empty/single/large lists and SQLite parameter chunking +5. Date/datetime predicates require: + - canonical ISO-8601 UTC serialization + - timezone/offset boundary tests + - coverage for both lexical compare and SQLite date-function normalization paths +6. Any leadership/replay/mutation routing change must include failure-path tests. +7. Shared semantics must pass cross-runtime contract tests. +8. Schema mismatch and corruption behavior must be explicitly tested by mode. 
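
Guard rail 4's chunking requirement can be sketched as follows. This is an illustrative helper, not the planned implementation: the function names and the 999 default are assumptions (999 matches `SQLITE_MAX_VARIABLE_NUMBER` in older SQLite builds; newer builds allow 32766, so the limit should be probed or configured per runtime).

```typescript
// Hypothetical helper for guard rail 4: splitting a large `IN` value list
// into batches that respect SQLite's host-parameter limit.
const DEFAULT_PARAM_LIMIT = 999

export function chunkInParams<T>(
  values: ReadonlyArray<T>,
  limit: number = DEFAULT_PARAM_LIMIT,
): Array<Array<T>> {
  // An empty IN list matches nothing; callers should emit no SQL for it.
  if (values.length === 0) return []
  const chunks: Array<Array<T>> = []
  for (let i = 0; i < values.length; i += limit) {
    chunks.push(values.slice(i, i + limit))
  }
  return chunks
}

// Compile one chunk to a parameterized fragment such as `key IN (?, ?, ?)`.
// `column` must come from the safe identifier mapping (guard rails forbid
// interpolating raw user values into identifiers).
export function inFragment(
  column: string,
  chunk: ReadonlyArray<unknown>,
): string {
  const placeholders = chunk.map(() => `?`).join(`, `)
  return `${column} IN (${placeholders})`
}
```

Tests for this path should cover empty, single-item, and over-limit lists, per guard rail 4.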
diff --git a/persistance-plan/phase-0-api-runtime-feasibility.md b/persistance-plan/phase-0-api-runtime-feasibility.md new file mode 100644 index 000000000..6e5ed96a7 --- /dev/null +++ b/persistance-plan/phase-0-api-runtime-feasibility.md @@ -0,0 +1,124 @@ +# Phase 0 - API + Runtime Feasibility + +## Objective + +Lock down the API surface, protocol shape, packaging boundaries, and runtime capability assumptions before implementation begins. + +## Why This Phase Exists + +The later phases depend on a stable contract for: + +- `persistedCollectionOptions(...)` mode inference +- coordinator protocol and required/optional methods +- runtime feature gates (browser, node, mobile, electron, DO) +- SQL key/identifier safety rules + +A weak Phase 0 creates churn in every downstream package. + +## Inputs + +- Root design doc (`/PERSISTNCE-PLAN-SQLITE-ONLY.md`) +- Current `@tanstack/db` collection API and sync API +- Existing runtime adapter patterns in monorepo + +## Scope + +1. Finalize TypeScript overloads for sync-present vs sync-absent mode. +2. Finalize runtime validation rules for invalid `sync` shape. +3. Finalize coordinator protocol envelope, payload types, and idempotency keys. +4. Finalize key encoding (`s:` / `n:` with `-0` handling) and safe identifier mapping strategy. +5. Freeze package boundaries and ownership. +6. Define staged rollout gates and kill-switch/fallback strategy. +7. Freeze the v1 pushdown operator matrix for `loadSubset` (`IN`, `AND`, `OR`, `LIKE`, and date/datetime predicates). + +## Out of Scope + +- Implementing storage adapter logic +- Implementing browser election +- Implementing runtime-specific packages + +## Detailed Workstreams + +### Workstream A - API and Type Inference + +- [ ] Draft final overload signatures for `persistedCollectionOptions`. +- [ ] Define `PersistedCollectionUtils` and where it appears in inferred return type. +- [ ] Document compile-time and runtime discrimination rules. 
+- [ ] Specify all runtime validation errors: + - `InvalidSyncConfigError` + - `PersistenceUnavailableError` + - `PersistenceSchemaVersionMismatchError` + +**Acceptance criteria** + +- Two minimal compile tests prove inference for both modes. +- Invalid `sync` shapes are unambiguous and deterministic. + +### Workstream B - Coordinator Contract and Protocol + +- [ ] Freeze required coordinator methods shared by all runtimes. +- [ ] Identify browser-only optional methods (`pullSince`, mutation RPC helpers). +- [ ] Finalize message envelope versioning (`v: 1`) and forward-compat guidance. +- [ ] Define timeout/retry semantics and defaults. +- [ ] Define idempotency correlation keys and persistence requirements. + +**Acceptance criteria** + +- Protocol type definitions reviewed and approved. +- Browser and single-process coordinators can both satisfy the interface. + +### Workstream C - Storage Safety Rules + +- [ ] Finalize canonical key encoding and decode edge cases. +- [ ] Finalize collectionId -> hashed table name mapping contract. +- [ ] Confirm no SQL identifier interpolation with raw user values. +- [ ] Finalize canonical JSON date/datetime serialization contract (ISO-8601 UTC string format). + +**Acceptance criteria** + +- Safety invariants are codified in testable helper contracts. + +### Workstream D - Packaging and Rollout + +- [ ] Confirm package list and scope ownership. +- [ ] Decide what lives in sqlite core vs runtime wrappers. +- [ ] Define phase gates and success metrics. +- [ ] Define fallback behavior by runtime when persistence capability is missing. +- [ ] Freeze pushdown behavior for v1 operators, including `IN` as mandatory for incremental join loading. + +**Acceptance criteria** + +- Package ownership is explicit (no overlap ambiguity). +- Rollout order is accepted by maintainers. +- v1 query-planning operator commitments are explicit and testable. + +## Deliverables + +1. Finalized API signature document (types + runtime rules). +2. 
Coordinator protocol spec (envelope, payloads, retries, idempotency). +3. Capability matrix by runtime. +4. Package boundary matrix (core vs wrappers). +5. Query-planning operator matrix and date serialization contract. +6. Phase gate checklist used by later phases. + +## Testing Plan + +- Type-level tests for overload inference. +- Runtime validation unit tests for invalid sync config. +- Protocol shape tests (serialization and discriminated unions). + +## Risks and Mitigations + +- **Risk:** ambiguous mode detection with optional `sync`. + - **Mitigation:** strict runtime guard: `sync` key present but invalid throws. +- **Risk:** coordinator contract too browser-specific. + - **Mitigation:** optionalize browser RPC methods and validate per runtime. +- **Risk:** package boundary drift. + - **Mitigation:** explicit ownership matrix checked in design review. + +## Exit Criteria + +- API and protocol types are frozen for Phases 1-3. +- Runtime capability assumptions are documented and approved. +- Package boundaries accepted by maintainers. +- No blocking unresolved decisions remain for implementation start. diff --git a/persistance-plan/phase-1-index-lifecycle-events.md b/persistance-plan/phase-1-index-lifecycle-events.md new file mode 100644 index 000000000..45552d6b0 --- /dev/null +++ b/persistance-plan/phase-1-index-lifecycle-events.md @@ -0,0 +1,88 @@ +# Phase 1 - Add Index Lifecycle Events to `@tanstack/db` + +## Objective + +Expose index lifecycle events in `@tanstack/db` so persistence can mirror index create/remove behavior consistently across tabs and runtimes. + +## Dependencies + +- Phase 0 protocol and signature finalization complete. +- Agreement on stable index signature strategy. + +## Scope + +1. Emit `index:added` and `index:removed` events. +2. Add index removal API (`removeIndex(...)`) to collection/index manager. +3. Ensure emitted payloads contain stable, serializable metadata. 
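
To make the "stable, serializable metadata" requirement concrete, a payload shape and deterministic signature might look like the sketch below. Field names and the FNV-1a hash are assumptions for illustration only; the final `@tanstack/db` event API is decided in this phase, and a real implementation would likely use a cryptographic hash.

```typescript
// Hypothetical payload shape for `index:added` / `index:removed` events.
export interface IndexLifecyclePayload {
  v: 1 // payload schema version for forward compatibility
  indexId: string
  collectionId: string
  expression: string // canonical serialized index expression
  options: Record<string, unknown>
}

// Canonical JSON with sorted keys, so every tab/process derives the same
// bytes (and therefore the same signature) for the same logical index.
function canonicalJson(value: unknown): string {
  if (value === null || typeof value !== `object`) return JSON.stringify(value)
  if (Array.isArray(value)) return `[${value.map(canonicalJson).join(`,`)}]`
  const entries = Object.entries(value as Record<string, unknown>)
    .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
    .map(([k, v]) => `${JSON.stringify(k)}:${canonicalJson(v)}`)
  return `{${entries.join(`,`)}}`
}

export function indexSignature(payload: IndexLifecyclePayload): string {
  // FNV-1a keeps the sketch dependency-free.
  let hash = 0x811c9dc5
  const input = canonicalJson(payload)
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) >>> 0
  }
  return hash.toString(16)
}
```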
+ +## Non-Goals + +- Building persisted SQLite indexes (Phase 3+) +- Browser tab synchronization behavior + +## Detailed Workstreams + +### Workstream A - Event Surface Design + +- [ ] Define event payload types for `index:added` and `index:removed`. +- [ ] Ensure payload includes fields needed to generate stable signature. +- [ ] Add versioning guidance if payload schema evolves. + +**Acceptance criteria** + +- Event payloads can be serialized and replayed. +- Payload includes enough data to build deterministic signature hash. + +### Workstream B - Index Manager Integration + +- [ ] Update `CollectionIndexesManager` to emit `index:added` after successful registration. +- [ ] Implement `removeIndex(...)` and emit `index:removed` on successful removal. +- [ ] Ensure idempotent behavior for duplicate remove calls. + +**Acceptance criteria** + +- Add/remove events fire exactly once per state transition. +- Removing unknown index is deterministic (documented behavior). + +### Workstream C - Backward Compatibility + +- [ ] Verify existing index consumers are not broken by new API. +- [ ] Add compatibility notes in changelog/docs. +- [ ] Confirm no behavior changes to query semantics. + +**Acceptance criteria** + +- Existing tests pass without relying on new events. +- New APIs are additive and non-breaking. + +## Deliverables + +1. Event types and public API changes in `@tanstack/db`. +2. `removeIndex(...)` implementation with tests. +3. Updated docs/examples for index lifecycle events. + +## Test Plan + +### Unit Tests + +- `createIndex` emits `index:added` with stable metadata. +- `removeIndex` emits `index:removed`. +- Duplicate remove handling is deterministic. + +### Integration Tests + +- Event ordering under rapid create/remove sequences. +- Auto-index interaction with lifecycle events. + +## Risks and Mitigations + +- **Risk:** unstable index metadata across tabs/processes. + - **Mitigation:** enforce canonical serialization before emitting. 
+- **Risk:** event emission before internal state update. + - **Mitigation:** emit only after successful state transition. + +## Exit Criteria + +- Lifecycle events are available and documented. +- `removeIndex(...)` is production-ready. +- Test coverage confirms stable metadata and event ordering. diff --git a/persistance-plan/phase-2-core-persisted-wrapper.md b/persistance-plan/phase-2-core-persisted-wrapper.md new file mode 100644 index 000000000..9808f54ae --- /dev/null +++ b/persistance-plan/phase-2-core-persisted-wrapper.md @@ -0,0 +1,130 @@ +# Phase 2 - Core Persisted Wrapper (Inferred Behavior) + +## Objective + +Implement `persistedCollectionOptions(...)` behavior for both runtime-inferred modes: + +- sync-present: persistence augments remote sync flow +- sync-absent: persistence is local source of truth with automatic mutation persistence + +## Dependencies + +- Phase 0 API/protocol finalized +- Phase 1 index lifecycle events available + +## Scope + +1. Implement inferred mode branching with runtime validation. +2. Implement hydrate barrier and ordered tx queueing. +3. Implement sync-present remote insert normalization (`insert` -> `update`). +4. Implement sync-absent mutation persistence wrappers. +5. Implement `utils.acceptMutations(transaction)` path. +6. Wire coordinator RPC stubs and fallbacks. +7. Implement seq-gap detection and recovery orchestration. + +## Non-Goals + +- SQLite SQL pushdown implementation (Phase 3) +- Browser leader election internals (Phase 8) + +## Detailed Workstreams + +### Workstream A - Wrapper Initialization and Validation + +- [ ] Implement mode selection based on presence of `sync` key. +- [ ] Throw `InvalidSyncConfigError` for invalid `sync` shapes. +- [ ] Default coordinator to `SingleProcessCoordinator` when omitted. +- [ ] Validate coordinator capabilities based on runtime mode. +- [ ] Bootstrap persisted index mirror from `collection.getIndexMetadata()` before listening to lifecycle events. 
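
The inference rule in the first two checklist items can be sketched as below. The error class name mirrors the plan; the validation details (checking for an object with a `sync()` function) are an assumption about the final shape, not the confirmed contract.

```typescript
// Illustrative sketch of Workstream A's mode inference.
export class InvalidSyncConfigError extends Error {
  constructor(detail: string) {
    super(`Invalid \`sync\` config: ${detail}`)
    this.name = `InvalidSyncConfigError`
  }
}

export type PersistedMode = `sync-present` | `sync-absent`

export function inferPersistedMode(
  options: Record<string, unknown>,
): PersistedMode {
  // Absent `sync` key: persistence is the local source of truth.
  if (!(`sync` in options)) return `sync-absent`
  const sync = options.sync
  // Strict guard from Phase 0: a present-but-invalid `sync` key throws
  // rather than silently degrading to sync-absent behavior.
  if (
    typeof sync !== `object` ||
    sync === null ||
    typeof (sync as { sync?: unknown }).sync !== `function`
  ) {
    throw new InvalidSyncConfigError(`expected an object with a sync() function`)
  }
  return `sync-present`
}
```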
+ +**Acceptance criteria** + +- Runtime behavior matches compile-time discrimination. +- Validation errors are deterministic and tested. + +### Workstream B - Hydrate Barrier + Apply Queue + +- [ ] Add collection-scoped hydrate state (`isHydrating`, queued tx list). +- [ ] Ensure tx events received during hydrate are queued. +- [ ] Flush queued tx in strict order after hydrate completion. +- [ ] Ensure apply mutex serializes write/apply paths. +- [ ] Start index lifecycle listeners only after bootstrap snapshot is applied to avoid missing pre-sync indexes. + +**Acceptance criteria** + +- No lost updates during hydrate. +- Ordered replay across queued tx. + +### Workstream C - Sync-Present Semantics + +- [ ] Wrap `sync.sync(params)` and preserve existing semantics. +- [ ] Normalize remote insert payloads to update before write. +- [ ] Trigger leader remote ensure flow through coordinator request path. +- [ ] Maintain offline-first local load behavior. + +**Acceptance criteria** + +- Duplicate-key conflicts do not occur on overlapping cache/snapshot data. +- Offline `loadSubset` resolves from local persistence. + +### Workstream D - Sync-Absent Semantics + +- [ ] Wrap `onInsert/onUpdate/onDelete` to persist first, then confirm optimistic state. +- [ ] Implement mutation envelope construction with stable `mutationId`. +- [ ] Implement follower->leader mutation RPC path (coordinator capability gated). +- [ ] Implement `acceptMutations(transaction)` utility for manual transaction support. + +**Acceptance criteria** + +- All mutation entry points persist consistently. +- Mutation acknowledgments map to submitted ids. + +### Workstream E - Recovery and Invalidation + +- [ ] Detect seq gaps from `(term, seq)` stream. +- [ ] Trigger `pullSince(lastSeenRowVersion)` when possible. +- [ ] Support fallback stale-mark + subset reload when pull fails. +- [ ] Implement targeted invalidation threshold behavior. 
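
A minimal sketch of the seq-gap check above, under two assumptions that are not confirmed API: `seq` increments by exactly 1 within a term, and a new term restarts at `seq` 1.

```typescript
// Hypothetical gap detector over the committed-tx (term, seq) stream.
export interface TxStamp {
  term: number
  seq: number
}

export type GapCheck =
  | { kind: `ok` }
  | { kind: `gap`; expectedSeq: number; gotSeq: number }

export function checkForGap(last: TxStamp | null, next: TxStamp): GapCheck {
  if (last === null) return { kind: `ok` } // first observed tx
  if (next.term > last.term) {
    // Leadership changed; a new term restarts the sequence.
    return next.seq === 1
      ? { kind: `ok` }
      : { kind: `gap`, expectedSeq: 1, gotSeq: next.seq }
  }
  const expected = last.seq + 1
  return next.seq === expected
    ? { kind: `ok` }
    : { kind: `gap`, expectedSeq: expected, gotSeq: next.seq }
}
```

On a `gap` result the wrapper would attempt `pullSince(lastSeenRowVersion)` and fall back to stale-mark plus subset reload, as described above.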
+ +**Acceptance criteria** + +- Gap recovery path is deterministic and tested. +- Full-reload fallback keeps state correct. + +## Deliverables + +1. Core persisted wrapper implementation. +2. Mode-specific mutation behavior and utilities. +3. Hydrate barrier and queueing logic. +4. Recovery orchestration implementation. + +## Test Plan + +### Core Unit Tests + +- Inference validation and mode branching. +- Hydrate barrier queue and flush ordering. +- Sync-present insert-to-update normalization. +- Sync-absent auto-persist for insert/update/delete. +- Manual transaction persistence via `acceptMutations`. +- Seq-gap detection and pull fallback behavior. + +### In-Memory Integration Tests + +- Multi-node coordinator simulation for tx ordering. +- Mutation ack and rollback behavior under retries. + +## Risks and Mitigations + +- **Risk:** hidden race between hydrate and incoming tx. + - **Mitigation:** collection-scoped mutex and explicit queue flushing. +- **Risk:** divergent behavior between wrapped hooks and manual transactions. + - **Mitigation:** shared mutation envelope pipeline used by both paths. +- **Risk:** coordinator optional methods missing at runtime. + - **Mitigation:** upfront capability validation with clear errors. + +## Exit Criteria + +- Both inferred modes pass in-memory suites. +- Recovery paths are validated for success and failure branches. +- Public utilities and error semantics documented. diff --git a/persistance-plan/phase-3-sqlite-core-adapter.md b/persistance-plan/phase-3-sqlite-core-adapter.md new file mode 100644 index 000000000..bd8a51f12 --- /dev/null +++ b/persistance-plan/phase-3-sqlite-core-adapter.md @@ -0,0 +1,145 @@ +# Phase 3 - SQLite Core Adapter + +## Objective + +Deliver the runtime-agnostic SQLite adapter core that powers persisted collection reads/writes, index management, row-version catch-up, and schema policy handling. 
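
Phase 0 fixed the `s:` / `n:` key encoding; a sketch of the helpers this phase implements is below. The `-0`-to-`0` normalization shown is one possible convention, not a confirmed design choice.

```typescript
// Sketch of the canonical key-encoding helpers from Workstream B.
export function encodeStorageKey(key: string | number): string {
  if (typeof key === `string`) return `s:${key}`
  if (!Number.isFinite(key)) {
    throw new TypeError(`Non-finite numeric keys are not persistable: ${key}`)
  }
  // Normalize -0 to 0 so encode/decode round-trips to a single identity.
  const normalized = Object.is(key, -0) ? 0 : key
  return `n:${normalized}`
}

export function decodeStorageKey(encoded: string): string | number {
  if (encoded.startsWith(`s:`)) return encoded.slice(2)
  if (encoded.startsWith(`n:`)) return Number(encoded.slice(2))
  throw new TypeError(`Unrecognized storage key encoding: ${encoded}`)
}
```

The prefixes guarantee that the string key `"1"` and the numeric key `1` never collide in storage.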
+ +## Dependencies + +- Phase 2 wrapper behavior complete +- Stable index lifecycle metadata from Phase 1 + +## Scope + +1. Implement adapter operations: `loadSubset`, `applyCommittedTx`, `ensureIndex`. +2. Implement metadata schema initialization and evolution checks. +3. Implement partial update merge semantics. +4. Implement idempotency via `applied_tx`. +5. Implement row-version catch-up inputs and tombstone behavior. +6. Implement schema mismatch policies per mode. +7. Implement metadata pruning policies. + +## Non-Goals + +- Runtime-specific driver bindings beyond SQLiteDriver interface +- Browser/Web Locks behavior + +## Detailed Workstreams + +### Workstream A - DDL and Initialization + +- [ ] Create collection table and tombstone table mapping. +- [ ] Create metadata tables: + - `collection_registry` + - `persisted_index_registry` + - `applied_tx` + - `collection_version` + - `leader_term` + - `schema_version` + - `collection_reset_epoch` +- [ ] Add deterministic bootstrap order and migrationless checks. + +**Acceptance criteria** + +- Adapter can initialize clean DB from empty state. +- Re-initialization is idempotent. + +### Workstream B - Key and Identifier Safety + +- [ ] Implement `encodeStorageKey` / `decodeStorageKey` helpers. +- [ ] Handle `-0`, finite number checks, and string/number identity. +- [ ] Implement safe `collectionId` -> physical table name registry mapping. + +**Acceptance criteria** + +- No collisions between numeric and string keys. +- No unsafe identifier interpolation paths remain. + +### Workstream C - Transaction Apply Pipeline + +- [ ] Implement DB writer transaction logic for committed tx apply. +- [ ] Increment/read `collection_version.latest_row_version` per tx. +- [ ] Upsert rows and clear tombstones on upsert. +- [ ] Upsert tombstones on delete. +- [ ] Insert idempotency marker in `applied_tx`. + +**Acceptance criteria** + +- Replaying `(term, seq)` does not duplicate mutations. 
+- Row version is monotonic and shared across tx mutations. + +### Workstream D - Query Planning and Pushdown + +- [ ] Implement supported predicate pushdown (`eq`, `in`, `gt/gte/lt/lte`, `like`, `AND`, `OR`). +- [ ] Treat `IN` as required v1 functionality for incremental join loading paths. +- [ ] Handle `IN` edge cases (`[]`, single item, large lists with parameter batching). +- [ ] Implement date/datetime predicate compilation for JSON string fields. + - prefer canonical ISO-8601 UTC string comparisons when possible + - compile to `datetime(...)` / `strftime(...)` when normalization is required +- [ ] Implement `orderBy` alignment with index expressions. +- [ ] Implement fallback to superset + in-memory filter for unsupported fragments. + +**Acceptance criteria** + +- Query results match query-engine semantics. +- Incremental join loading paths using `IN` are fully pushdown-capable in v1. +- Unsupported expressions still return correct result after filtering. + +### Workstream E - Index Management + +- [ ] Compile persisted index spec to canonical SQL expression text. +- [ ] Implement `ensureIndex` with stable signature tracking. +- [ ] Track index state and usage timestamps in registry. +- [ ] Implement optional removal/mark-removed behavior. + +**Acceptance criteria** + +- Same logical index spec yields same signature and SQL. +- Repeated ensure calls are idempotent. + +### Workstream F - Schema Policy and Cleanup + +- [ ] Implement schema version checks per collection. +- [ ] Sync-present mismatch path: coordinated clear + reset epoch. +- [ ] Sync-absent mismatch path: throw (unless opt-in reset). +- [ ] Implement `applied_tx` pruning by seq/time policy. + +**Acceptance criteria** + +- Schema mismatch behavior follows design contract by mode. +- Pruning does not break pull/catch-up correctness. + +## Deliverables + +1. Shared SQLite core adapter implementation. +2. DDL bootstrap and metadata policy implementation. +3. Query pushdown + fallback logic. +4. 
Index registry and signature management. + +## Test Plan + +### Contract Test Matrix (Node runtime first) + +- `applyCommittedTx` correctness and idempotency. +- `loadSubset` correctness with/without index pushdown. +- Pushdown parity tests for `AND`/`OR`, `IN` (empty/single/large), `LIKE`, and date/datetime filters. +- Tombstone catch-up and key-level delta behavior. +- Schema version mismatch mode behavior. +- Key encoding round-trips and collision safety. +- Identifier safety for hostile collection ids. +- Pruning behavior and recovery correctness. + +## Risks and Mitigations + +- **Risk:** pushdown mismatch with query engine semantics. + - **Mitigation:** equivalence tests with randomized predicates. +- **Risk:** SQL busy/contention in concurrent runtimes. + - **Mitigation:** writer lock integration in upper coordinator layers plus retries. +- **Risk:** schema clear races with active reads. + - **Mitigation:** reset epoch and explicit collection reset handling. + +## Exit Criteria + +- Node-based adapter contract suite is green. +- Metadata/state invariants are validated under replay and recovery. +- Adapter is ready for runtime wrapper integration (Phases 4-8). diff --git a/persistance-plan/phase-4-node-electron.md b/persistance-plan/phase-4-node-electron.md new file mode 100644 index 000000000..deaabfda0 --- /dev/null +++ b/persistance-plan/phase-4-node-electron.md @@ -0,0 +1,87 @@ +# Phase 4 - Node + Electron + +## Objective + +Ship production-ready Node and Electron adapters on top of the shared SQLite core, ensuring behavioral parity and clear process boundaries. + +## Dependencies + +- Phase 3 adapter contract green in Node harness. +- Phase 2 wrapper semantics stable. + +## Scope + +1. Node package over `better-sqlite3` using shared `SQLiteDriver` adapter. +2. Electron package with main-process ownership and renderer IPC bridge. +3. Parity validation between Node and Electron behavior. 
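
The Node driver shape can be sketched as below. `prepare().all()` / `prepare().run()` are real better-sqlite3 methods, but `SyncSqlDriver` and the async `SQLiteDriver` target are stand-ins for the shared core contract, which this plan defines elsewhere.

```typescript
// Hypothetical synchronous surface, e.g. backed by better-sqlite3 as
// { all: (sql, p) => db.prepare(sql).all(...p), run: (sql, p) => db.prepare(sql).run(...p) }.
export interface SyncSqlDriver {
  all: (sql: string, params: ReadonlyArray<unknown>) => Array<Record<string, unknown>>
  run: (sql: string, params: ReadonlyArray<unknown>) => void
}

// Assumed async driver contract shared across runtimes.
export interface SQLiteDriver {
  query: (sql: string, params?: ReadonlyArray<unknown>) => Promise<Array<Record<string, unknown>>>
  execute: (sql: string, params?: ReadonlyArray<unknown>) => Promise<void>
}

// better-sqlite3 is synchronous, so the wrapper resolves immediately; the
// Promise interface exists only so the core adapter can treat all runtimes
// (Node, browser WASM, Durable Objects) uniformly.
export function promisifySyncDriver(sync: SyncSqlDriver): SQLiteDriver {
  return {
    query: async (sql, params = []) => sync.all(sql, params),
    execute: async (sql, params = []) => {
      sync.run(sql, params)
    },
  }
}
```

The Electron package would reuse the same wrapper in the main process, with the renderer reaching it only through the IPC bridge.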
+ +## Non-Goals + +- Browser coordination or OPFS concerns +- Mobile runtime adaptation + +## Detailed Workstreams + +### Workstream A - Node Package + +- [ ] Implement `better-sqlite3` driver adapter with Promise-based interface. +- [ ] Expose `persistedCollectionOptions` wiring for node usage. +- [ ] Validate transaction and error semantics in sync + async wrappers. + +**Acceptance criteria** + +- Node package passes all shared adapter contract tests. +- API ergonomics match core expectations. + +### Workstream B - Electron Architecture + +- [ ] Define IPC API surface (renderer requests -> main execution). +- [ ] Keep SQLite and persistence execution in main process only. +- [ ] Implement request/response timeout and structured error transport. +- [ ] Ensure renderer cannot bypass main-process ownership. + +**Acceptance criteria** + +- Renderer operations function through IPC with no direct DB access. +- Error and timeout behavior are deterministic. + +### Workstream C - Parity and Reliability + +- [ ] Reuse Node adapter logic in Electron main process. +- [ ] Run shared contract suite against electron harness where supported. +- [ ] Add smoke tests for app lifecycle (start/restart/close). + +**Acceptance criteria** + +- Node and Electron behavior are equivalent for core flows. +- No Electron-specific correctness regressions. + +## Deliverables + +1. `@tanstack/db-node-sqlite-persisted-collection` +2. `@tanstack/db-electron-sqlite-persisted-collection` +3. Electron IPC bridge docs and example integration + +## Test Plan + +- Full adapter contract suite on Node. +- Electron integration tests: + - read/write round-trip through IPC + - process restart and persistence durability + - error propagation and timeout handling +- Regression tests for schema mismatch and reset flows. + +## Risks and Mitigations + +- **Risk:** IPC latency impacts hot-path operations. + - **Mitigation:** batch operations where possible and keep payloads compact. 
+- **Risk:** Electron renderer attempts direct file/db access. + - **Mitigation:** hard architecture rule: DB in main process only. +- **Risk:** subtle sync-vs-async wrapper mismatch. + - **Mitigation:** strict parity tests and adapter abstraction boundaries. + +## Exit Criteria + +- Node and Electron packages published with parity tests green. +- IPC boundary validated for correctness and reliability. +- Documentation includes integration guidance for app teams. diff --git a/persistance-plan/phase-5-react-native-expo.md b/persistance-plan/phase-5-react-native-expo.md new file mode 100644 index 000000000..b145b1813 --- /dev/null +++ b/persistance-plan/phase-5-react-native-expo.md @@ -0,0 +1,85 @@ +# Phase 5 - React Native + Expo + +## Objective + +Provide a unified mobile SQLite persistence package for both React Native and Expo using `op-sqlite`, with minimal platform divergence. + +## Dependencies + +- Phase 3 core adapter stable. +- Phase 2 wrapper semantics stable. + +## Scope + +1. Build shared mobile adapter package over `op-sqlite`. +2. Add RN/Expo-specific entrypoints only where host initialization differs. +3. Validate lifecycle, transaction, and persistence semantics on both hosts. + +## Non-Goals + +- Cross-process mobile coordination +- Browser multi-tab semantics + +## Detailed Workstreams + +### Workstream A - Shared Mobile Driver Layer + +- [ ] Implement `SQLiteDriver` wrapper around `op-sqlite`. +- [ ] Ensure consistent transaction boundaries and error mapping. +- [ ] Validate serialization/parsing paths for JSON payloads. + +**Acceptance criteria** + +- Same core adapter code runs unchanged on RN and Expo. +- Driver behavior matches node contract expectations. + +### Workstream B - Runtime Entrypoints + +- [ ] Provide RN entrypoint for bare/native setup. +- [ ] Provide Expo entrypoint for managed workflow setup. +- [ ] Keep API parity with node/browser wrappers where possible. 
+ +**Acceptance criteria** + +- Consumers can swap runtimes with minimal app-level code change. + +### Workstream C - Mobile Lifecycle Hardening + +- [ ] Validate foreground/background transitions. +- [ ] Validate reopen behavior after app process restart. +- [ ] Confirm no data loss under rapid mutation bursts. + +**Acceptance criteria** + +- Persistence survives app restarts. +- Transaction semantics hold under lifecycle transitions. + +## Deliverables + +1. `@tanstack/db-react-native-sqlite-persisted-collection` +2. RN and Expo entrypoint docs/examples +3. Mobile-focused integration tests + +## Test Plan + +- Shared adapter contract suite where harness supports mobile runtime. +- RN integration tests: + - loadSubset startup path + - mutation persistence + - restart durability +- Expo integration tests with equivalent scenarios. + +## Risks and Mitigations + +- **Risk:** runtime differences between RN and Expo initialization. + - **Mitigation:** isolate host bootstrapping in thin entrypoint layer. +- **Risk:** mobile backgrounding interrupts in-flight writes. + - **Mitigation:** short transactions and robust retry/rollback handling. +- **Risk:** driver behavior divergence from node. + - **Mitigation:** enforce shared contract tests against both runtimes. + +## Exit Criteria + +- Unified mobile package works on RN and Expo. +- Contract and lifecycle tests pass in both environments. +- Documentation clearly explains host-specific setup steps. diff --git a/persistance-plan/phase-6-cloudflare-durable-objects.md b/persistance-plan/phase-6-cloudflare-durable-objects.md new file mode 100644 index 000000000..e29d6a903 --- /dev/null +++ b/persistance-plan/phase-6-cloudflare-durable-objects.md @@ -0,0 +1,86 @@ +# Phase 6 - Cloudflare Durable Objects + +## Objective + +Implement Durable Object-native SQLite persistence using in-process execution (no browser election path), while preserving wrapper semantics for both inferred modes. 
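
The driver binding can be sketched as below. `exec(sql, ...bindings)` and cursor `toArray()` mirror the real Durable Objects SqlStorage API (`ctx.storage.sql`); the `SQLiteDriver` target shape is an assumed stand-in for the shared core contract.

```typescript
// Minimal structural types so the sketch is testable outside a Worker.
export interface SqlCursorLike {
  toArray: () => Array<Record<string, unknown>>
}

export interface SqlStorageLike {
  exec: (sql: string, ...bindings: Array<unknown>) => SqlCursorLike
}

export interface SQLiteDriver {
  query: (sql: string, params?: ReadonlyArray<unknown>) => Promise<Array<Record<string, unknown>>>
  execute: (sql: string, params?: ReadonlyArray<unknown>) => Promise<void>
}

// Inside a Durable Object this would be constructed as
// `createDoDriver(this.ctx.storage.sql)`. Execution is in-process, which is
// why no election or cross-tab RPC path exists in this runtime.
export function createDoDriver(sql: SqlStorageLike): SQLiteDriver {
  return {
    query: async (text, params = []) => sql.exec(text, ...params).toArray(),
    execute: async (text, params = []) => {
      sql.exec(text, ...params)
    },
  }
}
```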
+ +## Dependencies + +- Phase 2 wrapper behavior complete +- Phase 3 core adapter complete + +## Scope + +1. Build DO SQLite adapter package for code executing inside the DO instance. +2. Provide schema initialization and version check helper utilities. +3. Support sync-present and sync-absent wrapper modes in DO runtime. +4. Validate behavior with Workers/DO integration harness. + +## Non-Goals + +- Browser lock/election protocols +- Remote DB proxy adapter pattern + +## Detailed Workstreams + +### Workstream A - DO Adapter Binding + +- [ ] Map DO SQL storage APIs to `SQLiteDriver` contract. +- [ ] Ensure transaction semantics align with core adapter expectations. +- [ ] Provide helper for collection table mapping initialization. + +**Acceptance criteria** + +- Core adapter runs with no DO-specific branching beyond driver wrapper. + +### Workstream B - Runtime Semantics + +- [ ] Default coordinator to `SingleProcessCoordinator`. +- [ ] Confirm no browser RPC/election method requirements. +- [ ] Ensure sync-absent mode behaves as first-class local persistence path. + +**Acceptance criteria** + +- DO runtime operates correctly without multi-tab coordination logic. + +### Workstream C - Schema and Recovery + +- [ ] Implement startup schema version checks per object instance. +- [ ] Support clear-on-mismatch for sync-present mode. +- [ ] Support throw-on-mismatch default for sync-absent mode. +- [ ] Validate restart and rehydrate paths. + +**Acceptance criteria** + +- Schema policy matches global design contract. +- Object restarts recover state cleanly. + +## Deliverables + +1. `@tanstack/db-cloudflare-do-sqlite-persisted-collection` +2. DO initialization helpers and usage docs +3. 
DO integration test suite + +## Test Plan + +- Workers/DO integration tests for: + - schema init and mismatch behavior + - local-first `loadSubset` + - sync-absent mutation persistence + - restart durability + - no-election path correctness + +## Risks and Mitigations + +- **Risk:** subtle API mismatch in DO SQL wrapper. + - **Mitigation:** adapter conformance tests at driver boundary. +- **Risk:** incorrect assumptions about single-threaded execution. + - **Mitigation:** explicit `SingleProcessCoordinator` semantics and tests. +- **Risk:** schema resets during active request bursts. + - **Mitigation:** transactional reset flow and deterministic error handling. + +## Exit Criteria + +- DO package passes integration suite. +- Both inferred modes work in DO runtime. +- Runtime docs clarify in-process model and limitations. diff --git a/persistance-plan/phase-7-browser-single-tab.md b/persistance-plan/phase-7-browser-single-tab.md new file mode 100644 index 000000000..213ce7b21 --- /dev/null +++ b/persistance-plan/phase-7-browser-single-tab.md @@ -0,0 +1,85 @@ +# Phase 7 - Browser Single-Tab (`wa-sqlite`, No Election) + +## Objective + +Deliver stable browser persistence for single-tab usage using `wa-sqlite` + `OPFSCoopSyncVFS`, without requiring BroadcastChannel or Web Locks. + +## Dependencies + +- Phase 2 wrapper behavior complete +- Phase 3 core adapter complete + +## Scope + +1. Implement OPFS-backed browser SQLite driver. +2. Run wrapper in single-process coordination mode. +3. Validate local-first behavior with offline/online transitions. +4. Ensure system is correct without multi-tab infrastructure. + +## Non-Goals + +- Web Locks leadership election +- Cross-tab mutation RPC + +## Detailed Workstreams + +### Workstream A - Browser Driver Implementation + +- [x] Integrate `wa-sqlite` with `OPFSCoopSyncVFS`. +- [x] Build browser `SQLiteDriver` wrapper. +- [x] Handle startup/open/reopen lifecycle and capability checks. 
+- [x] Run OPFS sync-handle access inside a dedicated Web Worker. + +**Acceptance criteria** + +- Browser driver initializes and reopens persisted DB correctly. +- Capability errors are surfaced as `PersistenceUnavailableError` where required. + +### Workstream B - Single-Tab Runtime Wiring + +- [x] Use `SingleProcessCoordinator` semantics in browser single-tab mode. +- [x] Ensure no dependencies on BroadcastChannel/Web Locks. +- [x] Validate sync-present and sync-absent wrapper modes. + +**Acceptance criteria** + +- Single-tab mode functions fully offline-first with local writes and reads. + +### Workstream C - Offline/Online Behavior + +- [x] Validate offline `loadSubset` local path for sync-present mode. +- [x] Validate remote ensure replay on reconnect. +- [x] Validate sync-absent behavior unaffected by network transitions. + +**Acceptance criteria** + +- Correct data convergence after reconnect. + +## Deliverables + +1. Browser single-tab adapter/runtime package updates. +2. Capability detection and error handling behavior. +3. Browser integration tests for single-tab mode. + +## Test Plan + +- Browser integration suite: + - OPFS init and reopen + - mutation persistence correctness + - sync-present offline + reconnect replay + - no Web Locks/BroadcastChannel dependency + +## Risks and Mitigations + +- **Risk:** OPFS support differences across browsers. + - **Mitigation:** capability matrix and clear fallback policy. +- **Risk:** WASM startup latency. + - **Mitigation:** lazy init and connection reuse. +- **Risk:** accidental dependency on multi-tab APIs. + - **Mitigation:** explicit tests with those APIs unavailable. + +## Exit Criteria + +- Browser single-tab integration tests are green. +- Offline-first behavior proven for both inferred modes. +- No election/multi-tab runtime requirements remain in this phase. 
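Both the DO runtime and Phase 7's single-tab browser mode run the wrapper under `SingleProcessCoordinator` semantics: exactly one process touches the database, so leadership is unconditional and no election, heartbeat, or channel transport is needed. A minimal sketch of those semantics follows; the interface and method shapes here are illustrative assumptions, not the package's actual API.

```typescript
// Illustrative sketch only: the real `SingleProcessCoordinator` lives in the
// persistence core packages; this contract is an assumption for clarity.
type ApplyResult = { acceptedIds: Array<string>; seq: number }

interface CollectionCoordinator {
  isLeader(): boolean
  applyLocalMutations(ids: Array<string>, apply: () => void): Promise<ApplyResult>
  dispose(): void
}

class SingleProcessCoordinator implements CollectionCoordinator {
  private seq = 0

  // Exactly one process owns the database, so leadership is unconditional.
  isLeader(): boolean {
    return true
  }

  // Mutations apply inline: no BroadcastChannel RPC, no envelope dedupe,
  // no election or heartbeat machinery.
  async applyLocalMutations(
    ids: Array<string>,
    apply: () => void,
  ): Promise<ApplyResult> {
    apply()
    this.seq += 1
    return { acceptedIds: ids, seq: this.seq }
  }

  dispose(): void {
    // Nothing to release: no locks, channels, or timers were acquired.
  }
}
```

The multi-tab coordinator in Phase 8 would fill in an interface like this with the opposite behavior: leadership comes from a per-collection Web Lock, and followers route their mutations to the current leader over BroadcastChannel instead of applying them inline.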
diff --git a/persistance-plan/phase-8-browser-multi-tab.md b/persistance-plan/phase-8-browser-multi-tab.md new file mode 100644 index 000000000..7ee8ee497 --- /dev/null +++ b/persistance-plan/phase-8-browser-multi-tab.md @@ -0,0 +1,157 @@ +# Phase 8 - Browser Multi-Tab Coordinator (Final Phase) + +## Objective + +Implement robust multi-tab coordination using Web Locks, Visibility API, and BroadcastChannel with collection-scoped leadership and DB-wide write serialization. + +## Dependencies + +- Phase 7 browser single-tab stable +- Phase 2/3 recovery and row-version logic available + +## Scope + +1. Implement `BrowserCollectionCoordinator` with election and heartbeat. +2. Implement collection-scoped leader/follower behavior for both inferred modes. +3. Implement mutation RPC and follower acknowledgment/rollback handling. +4. Implement seq-gap recovery (`pullSince`) and stale fallback. +5. Implement DB writer lock (`tsdb:writer:`) and contention policy. +6. Validate multi-tab behavior via Playwright. + +## Non-Goals + +- SharedWorker architecture +- Global single-writer ownership across all collections + +## Implementation Status + +> **Overall: IMPLEMENTED** — `BrowserCollectionCoordinator` class implemented in +> `packages/db-browser-wa-sqlite-persisted-collection/src/browser-coordinator.ts`. +> Exported from package index. Unit tests with Web Locks and BroadcastChannel +> mocks pass (15 tests). Remaining: hidden-tab stepdown, heartbeat timeout +> detection, and Playwright multi-tab integration tests. + +## Detailed Workstreams + +### Workstream A - Leadership and Heartbeats + +- [x] Acquire per-collection Web Lock (`tsdb:leader::`). _(implemented in `browser-coordinator.ts` via `navigator.locks.request` with abort signal)_ +- [x] Increment durable `leader_term` transactionally on leadership gain. 
_(storage-level `leader_term` table in `sqlite-core-adapter.ts`; coordinator increments in-memory term on lock acquisition after restoring from `getStreamPosition`)_ +- [x] Emit leader heartbeat with latest seq/rowVersion. _(implemented in `browser-coordinator.ts` via `emitHeartbeat` on interval `HEARTBEAT_INTERVAL_MS=3000`)_ +- [ ] Detect heartbeat timeout and trigger takeover attempts. _(not needed for Web Locks approach — lock release is automatic on tab close/crash; deferred to future iteration if needed)_ +- [ ] Implement hidden-tab cooperative stepdown and cooldown. _(deferred — Web Locks handle crash/close; Visibility API stepdown is a future optimization)_ + +**Acceptance criteria** + +- Exactly one leader per collection at a time. +- Leadership term never decrements across reload/restart. + +### Workstream B - Protocol Transport and RPC + +- [x] Implement BroadcastChannel envelope transport per collection. _(single `BroadcastChannel` per coordinator instance `tsdb:coord:`, messages routed by `collectionId` field)_ +- [x] Implement request/response correlation via `rpcId`. _(implemented in `sendRPCOnce` with `pendingRPCs` map and timeout)_ +- [x] Implement RPC handlers: + - `ensureRemoteSubset` _(leader handler returns ok — leader's own sync handles the subset)_ + - `ensurePersistedIndex` _(leader handler calls `adapter.ensureIndex` under writer lock)_ + - `applyLocalMutations` _(leader handler applies tx, broadcasts `tx:committed`, returns accepted ids)_ + - `pullSince` _(leader handler delegates to `adapter.pullSince` and returns result)_ +- [x] Implement retry/backoff and timeout behavior. _(RPC_TIMEOUT_MS=10000, RPC_RETRY_ATTEMPTS=2, RPC_RETRY_DELAY_MS=200 with linear backoff)_ + +**Acceptance criteria** + +- RPCs are correlated, timed out, retried, and idempotent where required. + +### Workstream C - Mutation Routing and Acknowledgment + +- [x] Route follower sync-absent mutations to current leader. 
_(follower calls `requestApplyLocalMutations` which sends RPC to leader via BroadcastChannel)_ +- [x] Dedupe mutation envelopes by `envelopeId` at leader. _(`appliedEnvelopeIds` map with 60s TTL pruning)_ +- [x] Return accepted mutation ids and resulting `(term, seq, rowVersion)`. _(leader handler returns full `ApplyLocalMutationsResponse`)_ +- [x] Confirm/rollback optimistic local entries in follower based on response. _(caller side in `persisted.ts:1340-1368` handles ok/error responses and validates accepted mutation ids)_ + +**Acceptance criteria** + +- At-least-once mutation delivery yields exactly-once logical apply. + +### Workstream D - Commit Ordering and Recovery + +- [x] Broadcast `tx:committed` after DB commit only. _(implemented in `persisted.ts:1201-1215` and `persisted.ts:1376-1389`; leader handler in coordinator broadcasts after `applyCommittedTx`)_ +- [x] Track follower last seen `(term, seq)` and rowVersion. _(implemented in `persisted.ts:1449-1474` via `observeStreamPosition`; restored from DB on startup via `getStreamPosition`)_ +- [x] On seq gap, invoke `pullSince(lastSeenRowVersion)`. _(implemented in `persisted.ts:1642-1651` gap detection and `persisted.ts:1662-1684` recovery)_ +- [x] Apply targeted invalidation when key count is within limit. _(implemented in `persisted.ts:1705-1738` with `TARGETED_INVALIDATION_KEY_LIMIT` and inline row data in `changedRows`)_ +- [x] Trigger full reload when required or when pull fails. _(implemented in `persisted.ts:1708-1711` for `requiresFullReload`, `persisted.ts:1715-1718` for over-limit, and `persisted.ts:1684` as fallback)_ + +**Acceptance criteria** + +- Followers converge after dropped broadcasts. +- Recovery works without full page reload. + +### Workstream E - DB Write Serialization + +- [x] Implement DB writer lock (`tsdb:writer:`). 
_(implemented in `browser-coordinator.ts` via `withWriterLock` using `navigator.locks.request`)_ +- [x] Serialize physical SQLite write transactions across collection leaders. _(all leader-side adapter writes go through `withWriterLock`)_ +- [x] Apply bounded busy retries and backoff policy. _(WRITER_LOCK_MAX_RETRIES=20, WRITER_LOCK_BUSY_RETRY_MS=50 with capped linear backoff)_ + +**Acceptance criteria** + +- No correctness loss under cross-collection write contention. + +## Deliverables + +1. Browser multi-tab coordinator implementation. +2. Protocol transport and RPC machinery. +3. Recovery and invalidation orchestration in browser runtime. +4. Playwright multi-tab test suite. + +## Test Plan + +### Unit Tests (Completed) + +Tests in `tests/browser-coordinator.test.ts` using Web Locks and BroadcastChannel mocks: + +1. Leadership acquisition and release. +2. Leadership takeover on dispose. +3. Independent leadership per collection. +4. Message transport between coordinators. +5. Self-message filtering. +6. Leader applies mutations directly. +7. Follower routes mutations to leader via RPC. +8. Envelope ID deduplication. +9. Leader handles pullSince directly. +10. Follower routes pullSince to leader via RPC. +11. Leader ensures persisted index locally. +12. Follower routes ensurePersistedIndex to leader. +13. Cleanup on dispose. + +### Playwright Multi-Tab Scenarios (Not Yet Implemented) + +1. Two tabs leading different collections simultaneously. +2. Reads served locally without leader-proxy round trips. +3. Follower mutation routing and ack/rollback flow. +4. Visibility-driven leader handoff behavior. +5. Tab close/crash leadership takeover. +6. Commit-broadcast gap recovery via heartbeat + pullSince. +7. Cross-collection write contention correctness under writer lock. +8. Sync-present offline-first and reconnect convergence. + +### Fault Injection Tests (Not Yet Implemented) + +- Drop selected BroadcastChannel messages. +- Delay/reorder RPC responses. 
+- Force leader stepdown mid-mutation. + +## Risks and Mitigations + +- **Risk:** browser API inconsistencies (Web Locks/visibility). + - **Mitigation:** strict capability checks and conservative fallbacks. +- **Risk:** lock thrash during visibility transitions. + - **Mitigation:** stepdown delay + reacquire cooldown. +- **Risk:** high contention causes latency spikes. + - **Mitigation:** DB writer lock + bounded retry with telemetry. +- **Risk:** mutation duplicates under retries. + - **Mitigation:** `envelopeId` dedupe and idempotent leader apply. + +## Exit Criteria + +- Playwright multi-tab suite is green and stable. +- Leadership, ordering, mutation routing, and recovery invariants hold under fault tests. +- Browser multi-tab marked GA-ready for both inferred modes. diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index ba6b76d92..c7ae01fb6 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -923,6 +923,28 @@ importers: specifier: ^12.6.2 version: 12.8.0 + packages/db-cloudflare-do-sqlite-persisted-collection: + dependencies: + '@tanstack/db-sqlite-persisted-collection-core': + specifier: workspace:* + version: link:../db-sqlite-persisted-collection-core + typescript: + specifier: '>=4.7' + version: 5.9.3 + devDependencies: + '@types/better-sqlite3': + specifier: ^7.6.13 + version: 7.6.13 + '@vitest/coverage-istanbul': + specifier: ^3.2.4 + version: 3.2.4(vitest@3.2.4) + better-sqlite3: + specifier: ^12.6.2 + version: 12.8.0 + wrangler: + specifier: ^4.64.0 + version: 4.75.0 + packages/db-collection-e2e: dependencies: '@tanstack/db': @@ -2153,6 +2175,49 @@ packages: '@changesets/write@0.4.0': resolution: {integrity: sha512-CdTLvIOPiCNuH71pyDu3rA+Q0n65cmAbXnwWH84rKGiFumFzkmHNT8KHTMEchcxN+Kl8I54xGUhJ7l3E7X396Q==} + '@cloudflare/kv-asset-handler@0.4.2': + resolution: {integrity: sha512-SIOD2DxrRRwQ+jgzlXCqoEFiKOFqaPjhnNTGKXSRLvp1HiOvapLaFG2kEr9dYQTYe8rKrd9uvDUzmAITeNyaHQ==} + engines: {node: '>=18.0.0'} + + '@cloudflare/unenv-preset@2.15.0': + resolution: {integrity: 
sha512-EGYmJaGZKWl+X8tXxcnx4v2bOZSjQeNI5dWFeXivgX9+YCT69AkzHHwlNbVpqtEUTbew8eQurpyOpeN8fg00nw==} + peerDependencies: + unenv: 2.0.0-rc.24 + workerd: 1.20260301.1 || ~1.20260302.1 || ~1.20260303.1 || ~1.20260304.1 || >1.20260305.0 <2.0.0-0 + peerDependenciesMeta: + workerd: + optional: true + + '@cloudflare/workerd-darwin-64@1.20260317.1': + resolution: {integrity: sha512-8hjh3sPMwY8M/zedq3/sXoA2Q4BedlGufn3KOOleIG+5a4ReQKLlUah140D7J6zlKmYZAFMJ4tWC7hCuI/s79g==} + engines: {node: '>=16'} + cpu: [x64] + os: [darwin] + + '@cloudflare/workerd-darwin-arm64@1.20260317.1': + resolution: {integrity: sha512-M/MnNyvO5HMgoIdr3QHjdCj2T1ki9gt0vIUnxYxBu9ISXS/jgtMl6chUVPJ7zHYBn9MyYr8ByeN6frjYxj0MGg==} + engines: {node: '>=16'} + cpu: [arm64] + os: [darwin] + + '@cloudflare/workerd-linux-64@1.20260317.1': + resolution: {integrity: sha512-1ltuEjkRcS3fsVF7CxsKlWiRmzq2ZqMfqDN0qUOgbUwkpXsLVJsXmoblaLf5OP00ELlcgF0QsN0p2xPEua4Uug==} + engines: {node: '>=16'} + cpu: [x64] + os: [linux] + + '@cloudflare/workerd-linux-arm64@1.20260317.1': + resolution: {integrity: sha512-3QrNnPF1xlaNwkHpasvRvAMidOvQs2NhXQmALJrEfpIJ/IDL2la8g499yXp3eqhG3hVMCB07XVY149GTs42Xtw==} + engines: {node: '>=16'} + cpu: [arm64] + os: [linux] + + '@cloudflare/workerd-windows-64@1.20260317.1': + resolution: {integrity: sha512-MfZTz+7LfuIpMGTa3RLXHX8Z/pnycZLItn94WRdHr8LPVet+C5/1Nzei399w/jr3+kzT4pDKk26JF/tlI5elpQ==} + engines: {node: '>=16'} + cpu: [x64] + os: [win32] + '@colors/colors@1.5.0': resolution: {integrity: sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ==} engines: {node: '>=0.1.90'} @@ -2165,6 +2230,10 @@ packages: resolution: {integrity: sha512-KTy0OqRDLR5y/zZMnizyx09z/rPlPC/zKhYgH8o/q6PuAjoQAKlRfY4zzv0M64yybQ//6//4H1n14pxaLZfUnA==} engines: {node: '>=v18'} + '@cspotcode/source-map-support@0.8.1': + resolution: {integrity: sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==} + engines: {node: '>=12'} + 
'@csstools/color-helpers@5.1.0': resolution: {integrity: sha512-S11EXWJyy0Mz5SYvRmY8nJYTFFd1LCNV+7cXyAgQtOOuzb4EsgfqDufL+9esx72/eLhsRdGZwaldu/h+E4t4BA==} engines: {node: '>=18'} @@ -3250,6 +3319,143 @@ packages: resolution: {integrity: sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==} engines: {node: '>=18.18'} + '@img/colour@1.1.0': + resolution: {integrity: sha512-Td76q7j57o/tLVdgS746cYARfSyxk8iEfRxewL9h4OMzYhbW4TAcppl0mT4eyqXddh6L/jwoM75mo7ixa/pCeQ==} + engines: {node: '>=18'} + + '@img/sharp-darwin-arm64@0.34.5': + resolution: {integrity: sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [darwin] + + '@img/sharp-darwin-x64@0.34.5': + resolution: {integrity: sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [darwin] + + '@img/sharp-libvips-darwin-arm64@1.2.4': + resolution: {integrity: sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g==} + cpu: [arm64] + os: [darwin] + + '@img/sharp-libvips-darwin-x64@1.2.4': + resolution: {integrity: sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg==} + cpu: [x64] + os: [darwin] + + '@img/sharp-libvips-linux-arm64@1.2.4': + resolution: {integrity: sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw==} + cpu: [arm64] + os: [linux] + + '@img/sharp-libvips-linux-arm@1.2.4': + resolution: {integrity: sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A==} + cpu: [arm] + os: [linux] + + '@img/sharp-libvips-linux-ppc64@1.2.4': + resolution: {integrity: sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA==} + cpu: [ppc64] + os: [linux] + + 
'@img/sharp-libvips-linux-riscv64@1.2.4': + resolution: {integrity: sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA==} + cpu: [riscv64] + os: [linux] + + '@img/sharp-libvips-linux-s390x@1.2.4': + resolution: {integrity: sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ==} + cpu: [s390x] + os: [linux] + + '@img/sharp-libvips-linux-x64@1.2.4': + resolution: {integrity: sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw==} + cpu: [x64] + os: [linux] + + '@img/sharp-libvips-linuxmusl-arm64@1.2.4': + resolution: {integrity: sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw==} + cpu: [arm64] + os: [linux] + + '@img/sharp-libvips-linuxmusl-x64@1.2.4': + resolution: {integrity: sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamqA1MF7Dg==} + cpu: [x64] + os: [linux] + + '@img/sharp-linux-arm64@0.34.5': + resolution: {integrity: sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [linux] + + '@img/sharp-linux-arm@0.34.5': + resolution: {integrity: sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm] + os: [linux] + + '@img/sharp-linux-ppc64@0.34.5': + resolution: {integrity: sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [ppc64] + os: [linux] + + '@img/sharp-linux-riscv64@0.34.5': + resolution: {integrity: sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [riscv64] + os: [linux] + + '@img/sharp-linux-s390x@0.34.5': + resolution: 
{integrity: sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [s390x] + os: [linux] + + '@img/sharp-linux-x64@0.34.5': + resolution: {integrity: sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [linux] + + '@img/sharp-linuxmusl-arm64@0.34.5': + resolution: {integrity: sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [linux] + + '@img/sharp-linuxmusl-x64@0.34.5': + resolution: {integrity: sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [linux] + + '@img/sharp-wasm32@0.34.5': + resolution: {integrity: sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [wasm32] + + '@img/sharp-win32-arm64@0.34.5': + resolution: {integrity: sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [arm64] + os: [win32] + + '@img/sharp-win32-ia32@0.34.5': + resolution: {integrity: sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [ia32] + os: [win32] + + '@img/sharp-win32-x64@0.34.5': + resolution: {integrity: sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + cpu: [x64] + os: [win32] + '@inquirer/checkbox@4.2.2': resolution: {integrity: sha512-E+KExNurKcUJJdxmjglTl141EwxWyAHplvsYJQgSwXf8qiNWkTxTuCCqmhFEmbIXd4zLaGMfQFJ6WrZ7fSeV3g==} 
engines: {node: '>=18'} @@ -3455,6 +3661,9 @@ packages: '@jridgewell/trace-mapping@0.3.31': resolution: {integrity: sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==} + '@jridgewell/trace-mapping@0.3.9': + resolution: {integrity: sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==} + '@kwsites/file-exists@1.1.1': resolution: {integrity: sha512-m9/5YGR18lIwxSFDwfE3oA7bWuq9kdau6ugN4H2rJeyhFQZcG9AgSHkQtSD15a8WvTgfz9aikZMrKPHvbpqFiw==} @@ -3975,6 +4184,15 @@ packages: '@polka/url@1.0.0-next.29': resolution: {integrity: sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==} + '@poppinss/colors@4.1.6': + resolution: {integrity: sha512-H9xkIdFswbS8n1d6vmRd8+c10t2Qe+rZITbbDHHkQixH5+2x1FDGmi/0K+WgWiqQFKPSlIYB7jlH6Kpfn6Fleg==} + + '@poppinss/dumper@0.6.5': + resolution: {integrity: sha512-NBdYIb90J7LfOI32dOewKI1r7wnkiH6m920puQ3qHUeZkxNkQiFnXVWoE6YtFSv6QOiPPf7ys6i+HWWecDz7sw==} + + '@poppinss/exception@1.2.3': + resolution: {integrity: sha512-dCED+QRChTVatE9ibtoaxc+WkdzOSjYTKi/+uacHWIsfodVfpsueo3+DKpgU5Px8qXjgmXkSvhXvSCz3fnP9lw==} + '@powersync/common@1.49.0': resolution: {integrity: sha512-g6uonubvtmtyx8hS/G5trg9LsBvzHY3tAKHiV7SIQV3Xyz9ONM6NNnjDMP2vcLZVmsOSi8x/QJZmy/ig1YtBMg==} @@ -4367,6 +4585,10 @@ packages: resolution: {integrity: sha512-t09vSN3MdfsyCHoFcTRCH/iUtG7OJ0CsjzB8cjAmKc/va/kIgeDI/TxsigdncE/4be734m0cvIYwNaV4i2XqAw==} engines: {node: '>=10'} + '@sindresorhus/is@7.2.0': + resolution: {integrity: sha512-P1Cz1dWaFfR4IR+U13mqqiGsLFf1KbayybWwdd2vfctdV6hDpUkgCY0nKOLLTMSoRd/jJNjtbqzf13K8DCCXQw==} + engines: {node: '>=18'} + '@sinonjs/commons@3.0.1': resolution: {integrity: sha512-K3mCHKQ9sVh8o1C9cxkwxaOmXoAMlDxC1mYyHrjqOWEcBjYr76t96zL2zlj5dUGZ3HSw240X1qgH3Mjf1yJWpQ==} @@ -4471,6 +4693,9 @@ packages: '@solidjs/router': optional: true + '@speed-highlight/core@1.2.15': + resolution: {integrity: 
sha512-BMq1K3DsElxDWawkX6eLg9+CKJrTVGCBAWVuHXVUV2u0s2711qiChLSId6ikYPfxhdYocLNt3wWwSvDiTvFabw==} + '@standard-schema/spec@1.1.0': resolution: {integrity: sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==} @@ -5763,6 +5988,9 @@ packages: bl@4.1.0: resolution: {integrity: sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==} + blake3-wasm@2.1.5: + resolution: {integrity: sha512-F1+K8EbfOZE49dtoPtmxUQrpXaBIl3ICvasLh+nJta0xkz+9kF/7uet9fLnwKqhDrmj6g+6K3Tw9yQPUg2ka5g==} + body-parser@1.20.3: resolution: {integrity: sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==} engines: {node: '>= 0.8', npm: 1.2.8000 || >= 1.4.16} @@ -6141,6 +6369,10 @@ packages: resolution: {integrity: sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==} engines: {node: '>= 0.6'} + cookie@1.1.1: + resolution: {integrity: sha512-ei8Aos7ja0weRpFzJnEA9UHJ/7XQmqglbRwnf2ATjcB9Wq874VKH9kfjjirM6UhU2/E5fFYadylyhFldcqSidQ==} + engines: {node: '>=18'} + copy-anything@4.0.5: resolution: {integrity: sha512-7Vv6asjS4gMOuILabD3l739tsaxFQmC+a7pLZm02zyvs8p977bL3zEgq3yDk5rn9B0PbYgIv++jmHcuUab4RhA==} engines: {node: '>=18'} @@ -6618,6 +6850,9 @@ packages: error-ex@1.3.4: resolution: {integrity: sha512-sqQamAnR14VgCr1A618A3sGrygcpK+HEbenA/HiEAkkUwcZIIB/tgWqHFxWgOyDh4nB4JCRimh79dR5Ywc9MDQ==} + error-stack-parser-es@1.0.5: + resolution: {integrity: sha512-5qucVt2XcuGMcEGgWI7i+yZpmpByQ8J1lHhcL7PwqCwu9FPP3VUXzT4ltHe5i2z9dePwEHcDVOAfSnHsOlCXRA==} + error-stack-parser@2.1.4: resolution: {integrity: sha512-Sk5V6wVazPhq5MhpO+AUxJn5x7XSXGl1R93Vn7i+zS15KDVxQijejNCrz8340/2bgLBjR9GtEG8ZVKONDjcqGQ==} @@ -8587,6 +8822,11 @@ packages: mingo@6.5.6: resolution: {integrity: sha512-XV89xbTakngi/oIEpuq7+FXXYvdA/Ht6aAsNTuIl8zLW1jfv369Va1PPWod1UTa/cqL0pC6LD2P6ggBcSSeH+A==} + miniflare@4.20260317.0: + resolution: {integrity: 
sha512-xuwk5Kjv+shi5iUBAdCrRl9IaWSGnTU8WuTQzsUS2GlSDIMCJuu8DiF/d9ExjMXYiQG5ml+k9SVKnMj8cRkq0w==} + engines: {node: '>=18.0.0'} + hasBin: true + minimatch@10.2.4: resolution: {integrity: sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg==} engines: {node: 18 || 20 || >=22} @@ -9145,6 +9385,9 @@ packages: resolution: {integrity: sha512-oWyT4gICAu+kaA7QWk/jvCHWarMKNs6pXOGWKDTr7cw4IGcUbW+PeTfbaQiLGheFRpjo6O9J0PmyMfQPjH71oA==} engines: {node: 20 || >=22} + path-to-regexp@6.3.0: + resolution: {integrity: sha512-Yhpw4T9C6hPpgPeA28us07OJeqZ5EzQTkbfwuhsUg0c237RomFoETJgmp2sa3F/41gfLE6G5cqcYwznmeEeOlQ==} + path-to-regexp@8.3.0: resolution: {integrity: sha512-7jdwVIRtsP8MYpdXSwOS0YdD0Du+qOoF/AEPIt88PcCFrZCzx41oxku1jD88hZBwbNUIEfpqvuhjFaMAqMTWnA==} @@ -9854,6 +10097,10 @@ packages: shallowequal@1.1.0: resolution: {integrity: sha512-y0m1JoUZSlPAjXVtPPW70aZWfIL/dSP7AFkRnniLCrK/8MDKog3TySTBmckD+RObVxH0v4Tox67+F14PdED2oQ==} + sharp@0.34.5: + resolution: {integrity: sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==} + engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} + shebang-command@1.2.0: resolution: {integrity: sha512-EV3L1+UQWGor21OmnvojK36mhg+TyIKDh3iFBKBohr5xeXIhNBcx8oWdgkTEEQ+BEFFYdLRuqMfd5L84N1V5Vg==} engines: {node: '>=0.10.0'} @@ -10261,6 +10508,10 @@ packages: resolution: {integrity: sha512-H+ue8Zo4vJmV2nRjpx86P35lzwDT3nItnIsocgumgr0hHMQ+ZGq5vrERg9kJBo5AWGmxZDhzDo+WVIJqkB0cGA==} engines: {node: '>=16'} + supports-color@10.2.2: + resolution: {integrity: sha512-SS+jx45GF1QjgEXQx4NJZV9ImqmO2NPz5FNsIHrsDjh2YsHnawpan7SNQ1o8NuhrbHZy9AZhIoCUiCeaW/C80g==} + engines: {node: '>=18'} + supports-color@5.5.0: resolution: {integrity: sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==} engines: {node: '>=4'} @@ -10607,6 +10858,13 @@ packages: resolution: {integrity: 
sha512-Hn2tCQpoDt1wv23a68Ctc8Cr/BHpUSfaPYrkajTXOS9IKpxVRx/X5m1K2YkbK2ipgZgxXSgsUinl3x+2YdSSfg==} engines: {node: '>=20.18.1'} + undici@7.24.4: + resolution: {integrity: sha512-BM/JzwwaRXxrLdElV2Uo6cTLEjhSb3WXboncJamZ15NgUURmvlXvxa6xkwIOILIjPNo9i8ku136ZvWV0Uly8+w==} + engines: {node: '>=20.18.1'} + + unenv@2.0.0-rc.24: + resolution: {integrity: sha512-i7qRCmY42zmCwnYlh9H2SvLEypEFGye5iRmEMKjcGi7zk9UquigRjFtTLz0TYqr0ZGLZhaMHl/foy1bZR+Cwlw==} + unicode-canonical-property-names-ecmascript@2.0.1: resolution: {integrity: sha512-dA8WbNeb2a6oQzAQ55YlT5vQAWGV9WXOsi3SskE3bcCdM0P4SDd+24zS/OCacdRq5BkdsRj9q3Pg6YyQoxIGqg==} engines: {node: '>=4'} @@ -11037,6 +11295,21 @@ packages: resolution: {integrity: sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==} engines: {node: '>=0.10.0'} + workerd@1.20260317.1: + resolution: {integrity: sha512-ZuEq1OdrJBS+NV+L5HMYPCzVn49a2O60slQiiLpG44jqtlOo+S167fWC76kEXteXLLLydeuRrluRel7WdOUa4g==} + engines: {node: '>=16'} + hasBin: true + + wrangler@4.75.0: + resolution: {integrity: sha512-Efk1tcnm4eduBYpH1sSjMYydXMnIFPns/qABI3+fsbDrUk5GksNYX8nYGVP4sFygvGPO7kJc36YJKB5ooA7JAg==} + engines: {node: '>=20.0.0'} + hasBin: true + peerDependencies: + '@cloudflare/workers-types': ^4.20260317.1 + peerDependenciesMeta: + '@cloudflare/workers-types': + optional: true + wrap-ansi@6.2.0: resolution: {integrity: sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==} engines: {node: '>=8'} @@ -11095,6 +11368,18 @@ packages: utf-8-validate: optional: true + ws@8.18.0: + resolution: {integrity: sha512-8VbfWfHLbbwu3+N6OKsOMpBdT4kXPDDB9cJk2bJ6mh9ucxdlnNvH1e+roYkKmN9Nxw2yjz7VzeO9oOz2zJ04Pw==} + engines: {node: '>=10.0.0'} + peerDependencies: + bufferutil: ^4.0.1 + utf-8-validate: '>=5.0.2' + peerDependenciesMeta: + bufferutil: + optional: true + utf-8-validate: + optional: true + ws@8.18.3: resolution: {integrity: 
sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==} engines: {node: '>=10.0.0'} @@ -11204,6 +11489,12 @@ packages: resolution: {integrity: sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw==} engines: {node: '>=18'} + youch-core@0.3.3: + resolution: {integrity: sha512-ho7XuGjLaJ2hWHoK8yFnsUGy2Y5uDpqSTq1FkHLK4/oqKtyUU1AFbOOxY4IpC9f0fTLjwYbslUz0Po5BpD1wrA==} + + youch@4.1.0-beta.10: + resolution: {integrity: sha512-rLfVLB4FgQneDr0dv1oddCVZmKjcJ6yX6mS4pU82Mq/Dt9a3cLZQ62pDBL4AUO+uVrCvtWz3ZFUL2HFAFJ/BXQ==} + z-schema@6.0.2: resolution: {integrity: sha512-9fQb2ZhpMD0ZQXYw0ll5ya6uLQm3Xtt4DXY2RV3QO1QVI4ihSzSWirlgkDsMgGg4qK0EV4tLOJgRSH2bn0cbIw==} engines: {node: '>=16.0.0'} @@ -12337,6 +12628,29 @@ snapshots: human-id: 4.1.1 prettier: 2.8.8 + '@cloudflare/kv-asset-handler@0.4.2': {} + + '@cloudflare/unenv-preset@2.15.0(unenv@2.0.0-rc.24)(workerd@1.20260317.1)': + dependencies: + unenv: 2.0.0-rc.24 + optionalDependencies: + workerd: 1.20260317.1 + + '@cloudflare/workerd-darwin-64@1.20260317.1': + optional: true + + '@cloudflare/workerd-darwin-arm64@1.20260317.1': + optional: true + + '@cloudflare/workerd-linux-64@1.20260317.1': + optional: true + + '@cloudflare/workerd-linux-arm64@1.20260317.1': + optional: true + + '@cloudflare/workerd-windows-64@1.20260317.1': + optional: true + '@colors/colors@1.5.0': {} '@commitlint/parse@20.2.0': @@ -12350,6 +12664,10 @@ snapshots: '@types/conventional-commits-parser': 5.0.2 chalk: 5.6.2 + '@cspotcode/source-map-support@0.8.1': + dependencies: + '@jridgewell/trace-mapping': 0.3.9 + '@csstools/color-helpers@5.1.0': {} '@csstools/css-calc@2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)': @@ -13451,6 +13769,102 @@ snapshots: '@humanwhocodes/retry@0.4.3': {} + '@img/colour@1.1.0': {} + + '@img/sharp-darwin-arm64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-darwin-arm64': 1.2.4 + 
optional: true + + '@img/sharp-darwin-x64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-darwin-x64': 1.2.4 + optional: true + + '@img/sharp-libvips-darwin-arm64@1.2.4': + optional: true + + '@img/sharp-libvips-darwin-x64@1.2.4': + optional: true + + '@img/sharp-libvips-linux-arm64@1.2.4': + optional: true + + '@img/sharp-libvips-linux-arm@1.2.4': + optional: true + + '@img/sharp-libvips-linux-ppc64@1.2.4': + optional: true + + '@img/sharp-libvips-linux-riscv64@1.2.4': + optional: true + + '@img/sharp-libvips-linux-s390x@1.2.4': + optional: true + + '@img/sharp-libvips-linux-x64@1.2.4': + optional: true + + '@img/sharp-libvips-linuxmusl-arm64@1.2.4': + optional: true + + '@img/sharp-libvips-linuxmusl-x64@1.2.4': + optional: true + + '@img/sharp-linux-arm64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-arm64': 1.2.4 + optional: true + + '@img/sharp-linux-arm@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-arm': 1.2.4 + optional: true + + '@img/sharp-linux-ppc64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-ppc64': 1.2.4 + optional: true + + '@img/sharp-linux-riscv64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-riscv64': 1.2.4 + optional: true + + '@img/sharp-linux-s390x@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-s390x': 1.2.4 + optional: true + + '@img/sharp-linux-x64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linux-x64': 1.2.4 + optional: true + + '@img/sharp-linuxmusl-arm64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linuxmusl-arm64': 1.2.4 + optional: true + + '@img/sharp-linuxmusl-x64@0.34.5': + optionalDependencies: + '@img/sharp-libvips-linuxmusl-x64': 1.2.4 + optional: true + + '@img/sharp-wasm32@0.34.5': + dependencies: + '@emnapi/runtime': 1.7.1 + optional: true + + '@img/sharp-win32-arm64@0.34.5': + optional: true + + '@img/sharp-win32-ia32@0.34.5': + optional: true + + '@img/sharp-win32-x64@0.34.5': + optional: true + 
'@inquirer/checkbox@4.2.2(@types/node@25.2.2)': dependencies: '@inquirer/core': 10.2.0(@types/node@25.2.2) @@ -13685,6 +14099,11 @@ snapshots: '@jridgewell/resolve-uri': 3.1.2 '@jridgewell/sourcemap-codec': 1.5.5 + '@jridgewell/trace-mapping@0.3.9': + dependencies: + '@jridgewell/resolve-uri': 3.1.2 + '@jridgewell/sourcemap-codec': 1.5.5 + '@kwsites/file-exists@1.1.1': dependencies: debug: 4.4.3 @@ -14159,6 +14578,18 @@ snapshots: '@polka/url@1.0.0-next.29': {} + '@poppinss/colors@4.1.6': + dependencies: + kleur: 4.1.5 + + '@poppinss/dumper@0.6.5': + dependencies: + '@poppinss/colors': 4.1.6 + '@sindresorhus/is': 7.2.0 + supports-color: 10.2.2 + + '@poppinss/exception@1.2.3': {} + '@powersync/common@1.49.0': dependencies: async-mutex: 0.5.0 @@ -14604,6 +15035,8 @@ snapshots: '@sindresorhus/is@4.6.0': {} + '@sindresorhus/is@7.2.0': {} + '@sinonjs/commons@3.0.1': dependencies: type-detect: 4.0.8 @@ -14732,6 +15165,8 @@ snapshots: '@testing-library/dom': 10.4.1 solid-js: 1.9.11 + '@speed-highlight/core@1.2.15': {} + '@standard-schema/spec@1.1.0': {} '@stylistic/eslint-plugin@5.9.0(eslint@9.39.3(jiti@2.6.1))': @@ -16365,6 +16800,8 @@ snapshots: inherits: 2.0.4 readable-stream: 3.6.2 + blake3-wasm@2.1.5: {} + body-parser@1.20.3: dependencies: bytes: 3.1.2 @@ -16823,6 +17260,8 @@ snapshots: cookie@0.7.2: {} + cookie@1.1.1: {} + copy-anything@4.0.5: dependencies: is-what: 5.5.0 @@ -17192,6 +17631,8 @@ snapshots: dependencies: is-arrayish: 0.2.1 + error-stack-parser-es@1.0.5: {} + error-stack-parser@2.1.4: dependencies: stackframe: 1.3.4 @@ -19635,6 +20076,18 @@ snapshots: mingo@6.5.6: {} + miniflare@4.20260317.0: + dependencies: + '@cspotcode/source-map-support': 0.8.1 + sharp: 0.34.5 + undici: 7.24.4 + workerd: 1.20260317.1 + ws: 8.18.0 + youch: 4.1.0-beta.10 + transitivePeerDependencies: + - bufferutil + - utf-8-validate + minimatch@10.2.4: dependencies: brace-expansion: 5.0.4 @@ -20242,6 +20695,8 @@ snapshots: lru-cache: 11.2.5 minipass: 7.1.2 + path-to-regexp@6.3.0: 
{} + path-to-regexp@8.3.0: {} path-type@4.0.0: {} @@ -21134,6 +21589,37 @@ snapshots: shallowequal@1.1.0: {} + sharp@0.34.5: + dependencies: + '@img/colour': 1.1.0 + detect-libc: 2.1.2 + semver: 7.7.4 + optionalDependencies: + '@img/sharp-darwin-arm64': 0.34.5 + '@img/sharp-darwin-x64': 0.34.5 + '@img/sharp-libvips-darwin-arm64': 1.2.4 + '@img/sharp-libvips-darwin-x64': 1.2.4 + '@img/sharp-libvips-linux-arm': 1.2.4 + '@img/sharp-libvips-linux-arm64': 1.2.4 + '@img/sharp-libvips-linux-ppc64': 1.2.4 + '@img/sharp-libvips-linux-riscv64': 1.2.4 + '@img/sharp-libvips-linux-s390x': 1.2.4 + '@img/sharp-libvips-linux-x64': 1.2.4 + '@img/sharp-libvips-linuxmusl-arm64': 1.2.4 + '@img/sharp-libvips-linuxmusl-x64': 1.2.4 + '@img/sharp-linux-arm': 0.34.5 + '@img/sharp-linux-arm64': 0.34.5 + '@img/sharp-linux-ppc64': 0.34.5 + '@img/sharp-linux-riscv64': 0.34.5 + '@img/sharp-linux-s390x': 0.34.5 + '@img/sharp-linux-x64': 0.34.5 + '@img/sharp-linuxmusl-arm64': 0.34.5 + '@img/sharp-linuxmusl-x64': 0.34.5 + '@img/sharp-wasm32': 0.34.5 + '@img/sharp-win32-arm64': 0.34.5 + '@img/sharp-win32-ia32': 0.34.5 + '@img/sharp-win32-x64': 0.34.5 + shebang-command@1.2.0: dependencies: shebang-regex: 1.0.0 @@ -21597,6 +22083,8 @@ snapshots: dependencies: copy-anything: 4.0.5 + supports-color@10.2.2: {} + supports-color@5.5.0: dependencies: has-flag: 3.0.0 @@ -21957,6 +22445,12 @@ snapshots: undici@7.21.0: {} + undici@7.24.4: {} + + unenv@2.0.0-rc.24: + dependencies: + pathe: 2.0.3 + unicode-canonical-property-names-ecmascript@2.0.1: {} unicode-match-property-ecmascript@2.0.0: @@ -22414,6 +22908,30 @@ snapshots: word-wrap@1.2.5: {} + workerd@1.20260317.1: + optionalDependencies: + '@cloudflare/workerd-darwin-64': 1.20260317.1 + '@cloudflare/workerd-darwin-arm64': 1.20260317.1 + '@cloudflare/workerd-linux-64': 1.20260317.1 + '@cloudflare/workerd-linux-arm64': 1.20260317.1 + '@cloudflare/workerd-windows-64': 1.20260317.1 + + wrangler@4.75.0: + dependencies: + '@cloudflare/kv-asset-handler': 0.4.2 + 
'@cloudflare/unenv-preset': 2.15.0(unenv@2.0.0-rc.24)(workerd@1.20260317.1) + blake3-wasm: 2.1.5 + esbuild: 0.27.3 + miniflare: 4.20260317.0 + path-to-regexp: 6.3.0 + unenv: 2.0.0-rc.24 + workerd: 1.20260317.1 + optionalDependencies: + fsevents: 2.3.3 + transitivePeerDependencies: + - bufferutil + - utf-8-validate + wrap-ansi@6.2.0: dependencies: ansi-styles: 4.3.0 @@ -22453,6 +22971,8 @@ snapshots: ws@8.17.1: {} + ws@8.18.0: {} + ws@8.18.3: {} ws@8.19.0: {} @@ -22538,6 +23058,19 @@ snapshots: yoctocolors-cjs@2.1.3: {} + youch-core@0.3.3: + dependencies: + '@poppinss/exception': 1.2.3 + error-stack-parser-es: 1.0.5 + + youch@4.1.0-beta.10: + dependencies: + '@poppinss/colors': 4.1.6 + '@poppinss/dumper': 0.6.5 + '@speed-highlight/core': 1.2.15 + cookie: 1.1.1 + youch-core: 0.3.3 + z-schema@6.0.2: dependencies: lodash.get: 4.4.2