diff --git a/architecture/architecture-overview.mdx b/architecture/architecture-overview.mdx index a77cc605..5259e1a4 100644 --- a/architecture/architecture-overview.mdx +++ b/architecture/architecture-overview.mdx @@ -4,7 +4,7 @@ description: "The core components of PowerSync are the service and client SDKs" --- - + The [PowerSync Service](/architecture/powersync-service) and client SDK operate in unison to keep client-side SQLite databases in sync with a backend database. Learn about their architecture: diff --git a/architecture/client-architecture.mdx b/architecture/client-architecture.mdx index e5f41368..0900900a 100644 --- a/architecture/client-architecture.mdx +++ b/architecture/client-architecture.mdx @@ -19,13 +19,13 @@ A developer configures [Sync Rules](/usage/sync-rules) for their PowerSync insta The PowerSync Service connects directly to the backend database and uses a change stream to hydrate dynamic data partitions, called [sync buckets](/usage/sync-rules/organize-data-into-buckets). Sync buckets are used to partition data according to the configured Sync Rules. (In most use cases, only a subset of data is required in a client's database and not a copy of the entire backend database.) - + The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the backend database, based on the [Sync Rules](/usage/sync-rules) configured by the developer: - + #### Writing Data @@ -35,7 +35,7 @@ Client-side data modifications, namely updates, deletes and inserts, are persist Each entry in the queue is processed by writing the entry to your existing backend application API, using a function [defined by you](/installation/client-side-setup/integrating-with-your-backend) (the developer). This is to ensure that existing backend business logic is honored when uploading data changes. For more information, see the section on [integrating with your backend](/installation/client-side-setup/integrating-with-your-backend). 
- 
+ 

### Schema

@@ -67,5 +67,5 @@ Most rows will be present in at least two tables — the `ps_data__<table>` table

 The copy in `ps_oplog` may be newer than the one in `ps_data__<table>`. Only when a full checkpoint has been downloaded will the data be copied over to the individual tables. If multiple rows with the same table and id have been synced, only one will be preserved (the one with the highest `op_id`).
 
- If you run into limitations with the above JSON-based SQLite view system, check out [this experimental feature](/usage/use-case-examples/raw-tables) which allows you to define and manage raw SQLite tables to work around some limitations. We are actively seeking feedback about this functionality.
+ If you run into limitations with the above JSON-based SQLite view system, check out [the Raw Tables experimental feature](/usage/use-case-examples/raw-tables) which allows you to define and manage raw SQLite tables to work around some limitations. We are actively seeking feedback about this functionality.
\ No newline at end of file
diff --git a/architecture/powersync-protocol.mdx b/architecture/powersync-protocol.mdx
index 08c4ea81..89b466a9 100644
--- a/architecture/powersync-protocol.mdx
+++ b/architecture/powersync-protocol.mdx
@@ -67,4 +67,4 @@ Write checkpoints are used to ensure clients have synced their own changes back
 
 Creating a write checkpoint is a separate operation, which is performed by the client after all data has been uploaded. It is important that this happens after the data has been written to the backend source database.
 
-The server then keeps track of the current CDC stream position on the database (LSN in Postgres, resume token in MongoDB, or binlog position in MySQL), and notifies the client when the data has been replicated, as part of checkpoint data in the normal data stream.
+The server then keeps track of the current CDC stream position on the database (LSN in Postgres, resume token in MongoDB, or GTID + Binlog Position in MySQL), and notifies the client when the data has been replicated, as part of checkpoint data in the normal data stream.
diff --git a/client-sdk-references/dotnet.mdx b/client-sdk-references/dotnet.mdx index 7dde5e43..6aedca7b 100644 --- a/client-sdk-references/dotnet.mdx +++ b/client-sdk-references/dotnet.mdx @@ -5,6 +5,7 @@ sidebarTitle: Overview --- import DotNetInstallation from '/snippets/dotnet/installation.mdx'; +import DotNetWatch from '/snippets/dotnet/basic-watch-query.mdx'; @@ -276,21 +277,7 @@ Console.WriteLine(await db.Get("SELECT powersync_rs_version();")); Console.WriteLine(await db.GetAll("SELECT * FROM lists;")); // Use db.Watch() to watch queries for changes (await is used to wait for initialization): -await db.Watch("select * from lists", null, new WatchHandler -{ - OnResult = (results) => - { - Console.WriteLine("Results: "); - foreach (var result in results) - { - Console.WriteLine(result.id + ":" + result.name); - } - }, - OnError = (error) => - { - Console.WriteLine("Error: " + error.Message); - } -}); + // And db.Execute for inserts, updates and deletes: await db.Execute( @@ -323,3 +310,7 @@ var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions Logger = logger }); ``` + +## Supported Platforms + +See [Supported Platforms -> .NET SDK](/resources/supported-platforms#net-sdk). diff --git a/client-sdk-references/flutter.mdx b/client-sdk-references/flutter.mdx index c1747745..e42d246d 100644 --- a/client-sdk-references/flutter.mdx +++ b/client-sdk-references/flutter.mdx @@ -6,6 +6,7 @@ sidebarTitle: Overview import SdkFeatures from '/snippets/sdk-features.mdx'; import FlutterInstallation from '/snippets/flutter/installation.mdx'; +import FlutterWatch from '/snippets/flutter/basic-watch-query.mdx'; @@ -327,36 +328,7 @@ Future> getLists() async { The [watch](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/watch.html) method executes a read query whenever a change to a dependent table is made. 
-```dart lib/widgets/todos_widget.dart {13-17} -import 'package:flutter/material.dart'; -import '../main.dart'; -import '../models/todolist.dart'; - -// Example Todos widget -class TodosWidget extends StatelessWidget { - const TodosWidget({super.key}); - - @override - Widget build(BuildContext context) { - return StreamBuilder( - // You can watch any SQL query - stream: db - .watch('SELECT * FROM lists ORDER BY created_at, id') - .map((results) { - return results.map(TodoList.fromRow).toList(growable: false); - }), - builder: (context, snapshot) { - if (snapshot.hasData) { - // TODO: implement your own UI here based on the result set - return ...; - } else { - return const Center(child: CircularProgressIndicator()); - } - }, - ); - } -} -``` + ### Mutations (PowerSync.execute) @@ -401,3 +373,7 @@ See [Flutter ORM Support](/client-sdk-references/flutter/flutter-orm-support) fo ## Troubleshooting See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. + +## Supported Platforms + +See [Supported Platforms -> Dart SDK](/resources/supported-platforms#dart-sdk). \ No newline at end of file diff --git a/client-sdk-references/flutter/usage-examples.mdx b/client-sdk-references/flutter/usage-examples.mdx index 3b60e0bb..33c610e9 100644 --- a/client-sdk-references/flutter/usage-examples.mdx +++ b/client-sdk-references/flutter/usage-examples.mdx @@ -3,6 +3,8 @@ title: "Usage Examples" description: "Code snippets and guidelines for common scenarios" --- +import FlutterWatch from '/snippets/flutter/basic-watch-query.mdx'; + ## Using transactions to group changes Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). 
@@ -26,20 +28,7 @@ Also see [readTransaction(callback)](https://pub.dev/documentation/powersync/lat Use [watch](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/watch.html) to watch for changes to the dependent tables of any SQL query. -```dart -StreamBuilder( - // You can watch any SQL query - stream: db.watch('SELECT * FROM customers order by id asc'), - builder: (context, snapshot) { - if (snapshot.hasData) { - // TODO: implement your own UI here based on the result set - return ...; - } else { - return const Center(child: CircularProgressIndicator()); - } - }, -) -``` + ## Insert, update, and delete data in the local database diff --git a/client-sdk-references/javascript-web.mdx b/client-sdk-references/javascript-web.mdx index e10f9568..8e11a510 100644 --- a/client-sdk-references/javascript-web.mdx +++ b/client-sdk-references/javascript-web.mdx @@ -6,6 +6,8 @@ sidebarTitle: "Overview" import SdkFeatures from '/snippets/sdk-features.mdx'; import JavaScriptWebInstallation from '/snippets/javascript-web/installation.mdx'; +import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; +import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; @@ -219,21 +221,16 @@ export const getLists = async () => { The [watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) method executes a read query whenever a change to a dependent table is made. -```js -// Watch changes to lists -const abortController = new AbortController(); - -export const function watchLists = (onUpdate) => { - for await (const update of PowerSync.watch( - 'SELECT * from lists', - [], - { signal: abortController.signal } - ) - ) { - onUpdate(update); - } -} -``` + + + + + + + + + +For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). 
### Mutations (PowerSync.execute, PowerSync.writeTransaction) @@ -406,4 +403,8 @@ See [JavaScript ORM Support](/client-sdk-references/javascript-web/javascript-or ## Troubleshooting -See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. \ No newline at end of file +See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. + +## Supported Platforms + +See [Supported Platforms -> JS/Web SDK](/resources/supported-platforms#js%2Fweb-sdk). \ No newline at end of file diff --git a/client-sdk-references/javascript-web/javascript-spa-frameworks.mdx b/client-sdk-references/javascript-web/javascript-spa-frameworks.mdx index 23e2e8c1..f16b2e25 100644 --- a/client-sdk-references/javascript-web/javascript-spa-frameworks.mdx +++ b/client-sdk-references/javascript-web/javascript-spa-frameworks.mdx @@ -37,6 +37,10 @@ The main hooks available are: * `useSuspenseQuery`: This hook also allows you to access the results of a watched query, but its loading and fetching states are handled through [Suspense](https://react.dev/reference/react/Suspense). It automatically converts certain loading/fetching states into Suspense signals, triggering Suspense boundaries in parent components. + +For advanced watch query features like incremental updates and differential results for React Hooks, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). + + The full API Reference and example code can be found here: @@ -93,6 +97,10 @@ The main hooks available are: * `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not. + +For advanced watch query features like incremental updates and differential results for Vue Hooks, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). 
+ + The full API Reference and example code can be found here: \ No newline at end of file diff --git a/client-sdk-references/javascript-web/usage-examples.mdx b/client-sdk-references/javascript-web/usage-examples.mdx index d52e802f..8b954406 100644 --- a/client-sdk-references/javascript-web/usage-examples.mdx +++ b/client-sdk-references/javascript-web/usage-examples.mdx @@ -3,6 +3,9 @@ title: "Usage Examples" description: "Code snippets and guidelines for common scenarios" --- +import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; +import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; + ## Multiple Tab Support @@ -108,34 +111,16 @@ Also see [PowerSyncDatabase.readTransaction(callback)](https://powersync-ja.gith Use [PowerSyncDatabase.watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) to watch for changes in source tables. -The `watch` method can be used with a `AsyncIterable` signature as follows: - -```js -async *attachmentIds(): AsyncIterable { - for await (const result of this.powersync.watch( - `SELECT photo_id as id FROM ${TODO_TABLE} WHERE photo_id IS NOT NULL`, - [] - )) { - yield result.rows?._array.map((r) => r.id) ?? []; - } -} -``` - -As of version **1.3.3** of the SDK, the `watch` method can also be used with a callback: + + + + + + + + -```js -attachmentIds(onResult: (ids: string[]) => void): void { - this.powersync.watch( - `SELECT photo_id as id FROM ${TODO_TABLE} WHERE photo_id IS NOT NULL`, - [], - { - onResult: (result) => { - onResult(result.rows?._array.map((r) => r.id) ?? []); - } - } - ); -} -``` +For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). 
## Insert, update, and delete data in the local database diff --git a/client-sdk-references/kotlin-multiplatform.mdx b/client-sdk-references/kotlin-multiplatform.mdx index 33453fe5..1ab216f6 100644 --- a/client-sdk-references/kotlin-multiplatform.mdx +++ b/client-sdk-references/kotlin-multiplatform.mdx @@ -5,6 +5,7 @@ sidebarTitle: Overview import SdkFeatures from '/snippets/sdk-features.mdx'; import KotlinMultiplatformInstallation from '/snippets/kotlin-multiplatform/installation.mdx'; +import KotlinWatch from '/snippets/kotlin-multiplatform/basic-watch-query.mdx'; @@ -242,21 +243,7 @@ suspend fun getLists(): List { The `watch` method executes a read query whenever a change to a dependent table is made. -```kotlin -// You can watch any SQL query -fun watchCustomers(): Flow> { - // TODO: implement your UI based on the result set - return database.watch( - "SELECT * FROM customers" - ) { cursor -> - User( - id = cursor.getString("id"), - name = cursor.getString("name"), - email = cursor.getString("email") - ) - } -} -``` + ### Mutations (PowerSync.execute) @@ -358,33 +345,6 @@ These instructions are decoded by our SDKs, and on Kotlin there are two implemen To enable the Rust client, pass `SyncOptions(newClientImplementation = true)` as a second parameter when [connecting](https://powersync-ja.github.io/powersync-kotlin/core/com.powersync/-power-sync-database/connect.html). -### Connection Methods - -This SDK supports two methods for streaming sync commands: - -1. **WebSocket (currently experimental)** - - The implementation leverages RSocket for handling reactive socket streams. - - Back-pressure is effectively managed through client-controlled command requests. - - Sync commands are transmitted efficiently as BSON (binary) documents. - - Enabling websocket support requires the new client implementation. -2. **HTTP Streaming (default)** - - This is the original implementation method. - - This method sends data as text (JSON) instead of BSON. 
- -By default, the `PowerSyncDatabase.connect()` method uses HTTP streams. -You can optionally specify the `connectionMethod` to override this: - -```Kotlin -// HTTP streaming (default) -powerSync.connect(connector) - -// WebSockets (experimental) -powerSync.connect(connector, SyncOptions( - newClientImplementation = true, - method = ConnectionMethod.WebSocket(), -)) -``` - ## ORM Support ORM support is not yet available, we are still investigating options to integrate the SDK with Room and SQLDelight. @@ -393,3 +353,7 @@ Please [let us know](/resources/contact-us) what your needs around ORMs are. ## Troubleshooting See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. + +## Supported Platforms + +See [Supported Platforms -> Kotlin SDK](/resources/supported-platforms#kotlin-sdk). \ No newline at end of file diff --git a/client-sdk-references/kotlin-multiplatform/usage-examples.mdx b/client-sdk-references/kotlin-multiplatform/usage-examples.mdx index 928ece5f..c733859b 100644 --- a/client-sdk-references/kotlin-multiplatform/usage-examples.mdx +++ b/client-sdk-references/kotlin-multiplatform/usage-examples.mdx @@ -3,6 +3,8 @@ title: "Usage Examples" description: "Code snippets and guidelines for common scenarios" --- +import KotlinWatch from '/snippets/kotlin-multiplatform/basic-watch-query.mdx'; + ## Using transactions to group changes Use `writeTransaction` to group statements that can write to the database. @@ -24,19 +26,7 @@ database.writeTransaction { Use the `watch` method to watch for changes to the dependent tables of any SQL query. 
-```kotlin -// You can watch any SQL query -fun watchCustomers(): Flow> { - // TODO: implement your UI based on the result set - return database.watch("SELECT * FROM customers", mapper = { cursor -> - User( - id = cursor.getString("id"), - name = cursor.getString("name"), - email = cursor.getString("email") - ) - }) -} -``` + ## Insert, update, and delete data in the local database diff --git a/client-sdk-references/node.mdx b/client-sdk-references/node.mdx index eab80810..08eae55e 100644 --- a/client-sdk-references/node.mdx +++ b/client-sdk-references/node.mdx @@ -1,11 +1,13 @@ --- -title: "Node.js client (alpha)" +title: "Node.js client (Beta)" description: "SDK reference for using PowerSync in Node.js clients." sidebarTitle: Overview --- import SdkFeatures from '/snippets/sdk-features.mdx'; import NodeInstallation from '/snippets/node/installation.mdx'; +import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; +import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; This page describes the PowerSync _client_ SDK for Node.js. @@ -26,13 +28,13 @@ import NodeInstallation from '/snippets/node/installation.mdx'; Full API reference for the PowerSync SDK [\[External link\].](https://powersync-ja.github.io/powersync-js/node-sdk) - + Gallery of example projects/demo apps built with Node.js and PowerSync. - This SDK is currently in an [**alpha** release](/resources/feature-status). It is not suitable for production use as breaking changes may still occur. + This SDK is currently in a [**beta** release](/resources/feature-status) and can be considered production-ready for tested use cases. ### SDK Features @@ -147,7 +149,7 @@ await db.waitForFirstSync(); // Optional, to wait for a complete snapshot of dat ## Usage After connecting the client database, it is ready to be used. 
The API to run queries and updates is identical to our -[web SDK](/client-sdk-references/javascript-web#using-powersync%3A-crud-functions): +[JavaScript/Web SDK](/client-sdk-references/javascript-web#using-powersync%3A-crud-functions): ```js // Use db.get() to fetch a single row: @@ -156,14 +158,6 @@ console.log(await db.get('SELECT powersync_rs_version();')); // Or db.getAll() to fetch all: console.log(await db.getAll('SELECT * FROM lists;')); -// Use db.watch() to watch queries for changes: -const watchLists = async () => { - for await (const rows of db.watch('SELECT * FROM lists;')) { - console.log('Has todo lists', rows.rows!._array); - } -}; -watchLists(); - // And db.execute for inserts, updates and deletes: await db.execute( "INSERT INTO lists (id, created_at, name, owner_id) VALUEs (uuid(), datetime('now'), ?, uuid());", @@ -171,8 +165,23 @@ await db.execute( ); ``` -PowerSync runs queries asynchronously on a background pool of workers and automatically configures WAL to -allow a writer and multiple readers to operate in parallel. +### Watch Queries + +The `db.watch()` method executes a read query whenever a change to a dependent table is made. + + + + + + + + + + +For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). + + +PowerSync runs queries asynchronously on a background pool of workers and automatically configures WAL to allow a writer and multiple readers to operate in parallel. ## Configure Logging @@ -191,4 +200,8 @@ logger.setLevel(LogLevel.DEBUG); Enable verbose output in the developer tools for detailed logs. - \ No newline at end of file + + +## Supported Platforms + +See [Supported Platforms -> Node.js SDK](/resources/supported-platforms#node-js-sdk). 
\ No newline at end of file diff --git a/client-sdk-references/react-native-and-expo.mdx b/client-sdk-references/react-native-and-expo.mdx index f34445d8..cd1772f5 100644 --- a/client-sdk-references/react-native-and-expo.mdx +++ b/client-sdk-references/react-native-and-expo.mdx @@ -6,6 +6,8 @@ sidebarTitle: "Overview" import SdkFeatures from '/snippets/sdk-features.mdx'; import ReactNativeInstallation from '/snippets/react-native/installation.mdx'; +import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; +import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; @@ -309,37 +311,16 @@ export const ListsWidget = () => { The [watch](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#watch) method executes a read query whenever a change to a dependent table is made. It can be used with an `AsyncGenerator`, or with a callback. -```js ListsWidget.jsx -import { FlatList, Text } from 'react-native'; -import { powersync } from "../powersync/system"; - -export const ListsWidget = () => { - const [lists, setLists] = React.useState([]); - - React.useEffect(() => { - const abortController = new AbortController(); - - // Option 1: Use with AsyncGenerator - (async () => { - for await(const update of powersync.watch('SELECT * from lists', [], {signal: abortController.signal})) { - setLists(update) - } - })(); - - // Option 2: Use a callback (available since version 1.3.3 of the SDK) - powersync.watch('SELECT * from lists', [], { onResult: (result) => setLists(result) }, { signal: abortController.signal }); - - return () => { - abortController.abort(); - } - }, []); + + + + + + + + - return ( ({ key: list.id, ...list }))} - renderItem={({ item }) => {item.name}} - />) -} -``` +For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). 
### Mutations (PowerSync.execute) @@ -506,3 +487,7 @@ See [JavaScript ORM Support](/client-sdk-references/javascript-web/javascript-or ## Troubleshooting See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. + +## Supported Platforms + +See [Supported Platforms -> React Native SDK](/resources/supported-platforms#react-native-sdk). \ No newline at end of file diff --git a/client-sdk-references/react-native-and-expo/expo-go-support.mdx b/client-sdk-references/react-native-and-expo/expo-go-support.mdx new file mode 100644 index 00000000..c38a026d --- /dev/null +++ b/client-sdk-references/react-native-and-expo/expo-go-support.mdx @@ -0,0 +1,154 @@ +--- +title: "Expo Go Support" +description: "PowerSync supports Expo Go with @powersync/adapter-sql-js" +--- + +Expo Go is a sandbox environment that allows you to quickly test your application without building a development build. To enable PowerSync in Expo Go, we provide a JavaScript-based database adapter: [`@powersync/adapter-sql-js`](https://www.npmjs.com/package/@powersync/adapter-sql-js). + +# @powersync/adapter-sql-js + +`@powersync/adapter-sql-js` is a development package for PowerSync which uses SQL.js to provide a pure JavaScript SQLite implementation. This eliminates the need for native dependencies and enables development with Expo Go and other JavaScript-only environments. Under the hood, it uses our custom fork [powersync-sql-js](https://github.com/powersync-ja/powersync-sql-js) - a fork of SQL.js (SQLite compiled to JavaScript via Emscripten) that loads PowerSync's Rust core extension. + + + This package is in an **alpha** release. + + **Expo Go Sandbox Environment Only** This adapter is specifically designed for Expo Go and similar JavaScript-only environments. It will be much slower than native database adapters and has limitations. Every write operation triggers a complete rewrite of the entire database file to persistent storage, not just the changed data. 
In addition to the performance overheads, this adapter doesn't provide any of the SQLite consistency guarantees - you may end up with missing data or a corrupted database file if the app is killed while writing to the database file.
+
+
+## Usage
+
+
+
+### Quickstart
+
+1. Create a new Expo app:
+```bash
+npx create-expo-app@latest my-app
+```
+
+2. Navigate to your app directory and start the development server:
+```bash
+cd my-app && npm run ios
+```
+
+3. In a new terminal tab, install PowerSync dependencies:
+```bash
+npm install @powersync/react-native @powersync/adapter-sql-js
+```
+
+4. Replace the code in `app/(tabs)/index.tsx` with:
+
+```tsx app/(tabs)/index.tsx
+import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
+import { PowerSyncDatabase, Schema } from "@powersync/react-native";
+import { useEffect, useState } from "react";
+import { Text } from "react-native";
+
+export const powerSync = new PowerSyncDatabase({
+  schema: new Schema({}), // todo: define the schema - see Next Steps below
+  database: new SQLJSOpenFactory({
+    dbFilename: "example.db",
+  }),
+});
+
+export default function HomeScreen() {
+  const [version, setVersion] = useState<string | null>(null);
+
+  useEffect(() => {
+    powerSync.get("select powersync_rs_version();").then((r) => {setVersion(JSON.stringify(r))});
+  }, []);
+
+  return (
+    <>{version && <Text>PowerSync Initialized - {version}</Text>}</>
+  );
+}
+```
+
+
+
+
+1. Install the SQL.js adapter:
+```bash
+npm install @powersync/adapter-sql-js
+```
+
+2. Set up PowerSync by using the SQL.js factory:
+
+```tsx SystemProvider.tsx
+import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
+import { PowerSyncDatabase, Schema } from "@powersync/react-native";
+import { useEffect, useState } from "react";
+import { Text } from "react-native";
+
+export const powerSync = new PowerSyncDatabase({
+  schema: new Schema({}), // todo: define the schema - see Next Steps below
+  database: new SQLJSOpenFactory({
+    dbFilename: "example.db",
+  }),
+});
+
+export default function HomeScreen() {
+  const [version, setVersion] = useState<string | null>(null);
+
+  useEffect(() => {
+    powerSync.get("select powersync_rs_version();").then((r) => {setVersion(JSON.stringify(r))});
+  }, []);
+
+  return (
+    <>{version && <Text>PowerSync Initialized - {version}</Text>}</>
+  );
+}
+```
+
+
+
+## Next Steps
+
+After adding PowerSync to your app:
+
+1. [**Define what data to sync by setting up Sync Rules**](/usage/sync-rules)
+2. [**Implement your SQLite client schema**](/client-sdk-references/react-native-and-expo#1-define-the-schema)
+3. [**Connect to PowerSync and your backend**](/client-sdk-references/react-native-and-expo#3-integrate-with-your-backend)
+
+## Data Persistence
+
+The default version of this adapter uses in-memory persistence, but you can specify your own `persister` option to the open factory.
+See an example in the package [README](https://www.npmjs.com/package/@powersync/adapter-sql-js).
+
+## Moving Beyond Expo Go
+
+When you're ready to move beyond the Expo Go sandbox environment - whether for native development builds or production deployment - we recommend switching to our native database adapters:
+
+- [OP-SQLite](https://www.npmjs.com/package/@powersync/op-sqlite) (Recommended) - Offers built-in encryption support and better React Native New Architecture compatibility
+- [React Native Quick SQLite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) - Our original native adapter
+
+
+ These database adapters cannot run in Expo Go because they require native code compilation. Specifically, PowerSync needs a SQLite implementation that can load our [Rust core extension](https://github.com/powersync-ja/powersync-sqlite-core), which isn't possible in Expo Go's prebuilt app container.
+
+
+These adapters provide better performance, full SQLite consistency guarantees, and are suitable for both development builds and production deployment. See the SDK's [Installation](/client-sdk-references/react-native-and-expo#install-peer-dependencies) instructions for setup details.
+
+### Switching Between Adapters - Example
+
+If you want to keep using Expo Go alongside development and production builds, you can switch between different adapters based on the Expo `executionEnvironment`:
+
+```js SystemProvider.tsx
+import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
+import { PowerSyncDatabase } from "@powersync/react-native";
+import Constants from "expo-constants";
+
+const isExpoGo = Constants.executionEnvironment === "storeClient";
+
+export const powerSync = new PowerSyncDatabase({
+  schema: AppSchema,
+  database: isExpoGo
+    ?
new SQLJSOpenFactory({ + dbFilename: "app.db", + }) + : { + dbFilename: "sqlite.db", + }, +}); +``` \ No newline at end of file diff --git a/client-sdk-references/react-native-and-expo/usage-examples.mdx b/client-sdk-references/react-native-and-expo/usage-examples.mdx index 88fc7c10..68191af2 100644 --- a/client-sdk-references/react-native-and-expo/usage-examples.mdx +++ b/client-sdk-references/react-native-and-expo/usage-examples.mdx @@ -3,6 +3,9 @@ title: "Usage Examples" description: "Code snippets and guidelines for common scenarios" --- +import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; +import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; + ## Using Hooks A separate `powersync-react` package is available containing React hooks for PowerSync: @@ -83,34 +86,16 @@ Also see [PowerSyncDatabase.readTransaction(callback)](https://powersync-ja.gith Use [PowerSyncDatabase.watch](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#watch) to watch for changes in source tables. -The `watch` method can be used with a `AsyncIterable` signature as follows: - -```js -async *attachmentIds(): AsyncIterable { - for await (const result of this.powersync.watch( - `SELECT photo_id as id FROM ${TODO_TABLE} WHERE photo_id IS NOT NULL`, - [] - )) { - yield result.rows?._array.map((r) => r.id) ?? []; - } -} -``` - -As of version **1.3.3** of the SDK, the `watch` method can also be used with a callback: + + + + + + + + -```js -attachmentIds(onResult: (ids: string[]) => void): void { - this.powersync.watch( - `SELECT photo_id as id FROM ${TODO_TABLE} WHERE photo_id IS NOT NULL`, - [], - { - onResult: (result) => { - onResult(result.rows?._array.map((r) => r.id) ?? []); - } - } - ); -} -``` +For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). 
## Insert, update, and delete data in the local database diff --git a/client-sdk-references/swift.mdx b/client-sdk-references/swift.mdx index 1a868971..9128fe7a 100644 --- a/client-sdk-references/swift.mdx +++ b/client-sdk-references/swift.mdx @@ -5,6 +5,7 @@ sidebarTitle: "Overview" import SdkFeatures from '/snippets/sdk-features.mdx'; import SwiftInstallation from '/snippets/swift/installation.mdx'; +import SwiftWatch from '/snippets/swift/basic-watch-query.mdx'; @@ -222,29 +223,7 @@ func getLists() async throws { The `watch` method executes a read query whenever a change to a dependent table is made. -```swift -// You can watch any SQL query -func watchLists(_ callback: @escaping (_ lists: [ListContent]) -> Void ) async { - do { - for try await lists in try self.db.watch( - sql: "SELECT * FROM \(LISTS_TABLE)", - parameters: [], - mapper: { cursor in - try ListContent( - id: cursor.getString(name: "id"), - name: cursor.getString(name: "name"), - createdAt: cursor.getString(name: "created_at"), - ownerId: cursor.getString(name: "owner_id") - ) - } - ) { - callback(lists) - } - } catch { - print("Error in watch: \(error)") - } -} -``` + ### Mutations (PowerSync.execute) @@ -300,4 +279,8 @@ ORM support is not yet available, we are still investigating options. Please [le ## Troubleshooting -See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. \ No newline at end of file +See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. + +## Supported Platforms + +See [Supported Platforms -> Swift SDK](/resources/supported-platforms#swift-sdk). 
\ No newline at end of file diff --git a/client-sdk-references/swift/usage-examples.mdx b/client-sdk-references/swift/usage-examples.mdx index 168df7a9..3a494ee8 100644 --- a/client-sdk-references/swift/usage-examples.mdx +++ b/client-sdk-references/swift/usage-examples.mdx @@ -3,6 +3,8 @@ title: "Usage Examples" description: "Code snippets and guidelines for common scenarios in Swift" --- +import SwiftWatch from '/snippets/swift/basic-watch-query.mdx'; + ## Using transactions to group changes Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). @@ -23,29 +25,7 @@ Also see [`readTransaction`](https://powersync-ja.github.io/powersync-swift/docu Use `watch` to watch for changes to the dependent tables of any SQL query. -```swift -// Watch for changes to the lists table -func watchLists(_ callback: @escaping (_ lists: [ListContent]) -> Void ) async { - do { - for try await lists in try self.db.watch( - sql: "SELECT * FROM \(LISTS_TABLE)", - parameters: [], - mapper: { cursor in - try ListContent( - id: cursor.getString(name: "id"), - name: cursor.getString(name: "name"), - createdAt: cursor.getString(name: "created_at"), - ownerId: cursor.getString(name: "owner_id") - ) - } - ) { - callback(lists) - } - } catch { - print("Error in watch: \(error)") - } -} -``` + ## Insert, update, and delete data in the local database diff --git a/docs.json b/docs.json index 9f110723..b52c5198 100644 --- a/docs.json +++ b/docs.json @@ -184,6 +184,8 @@ "usage/use-case-examples/data-encryption", "usage/use-case-examples/full-text-search", "usage/use-case-examples/infinite-scrolling", + "usage/use-case-examples/watch-queries", + "usage/use-case-examples/high-performance-diffs", "usage/use-case-examples/offline-only-usage", "usage/use-case-examples/postgis", 
"usage/use-case-examples/prioritized-sync", @@ -277,6 +279,7 @@ "icon": "react", "pages": [ "client-sdk-references/react-native-and-expo", + "client-sdk-references/react-native-and-expo/expo-go-support", "client-sdk-references/react-native-and-expo/react-native-web-support", "client-sdk-references/react-native-and-expo/javascript-orm-support", "client-sdk-references/react-native-and-expo/usage-examples", @@ -363,6 +366,8 @@ "self-hosting/lifecycle-maintenance/securing-your-deployment", "self-hosting/lifecycle-maintenance/healthchecks", "self-hosting/lifecycle-maintenance/telemetry", + "self-hosting/lifecycle-maintenance/metrics", + "self-hosting/lifecycle-maintenance/diagnostics", "self-hosting/lifecycle-maintenance/migrating", "self-hosting/lifecycle-maintenance/multiple-instances" ] @@ -456,8 +461,8 @@ }, "resources/ai-tools", "resources/performance-and-limits", + "resources/supported-platforms", "resources/feature-status", - "resources/supported-hardware", "resources/faq", "resources/local-first-software", "resources/release-notes", @@ -639,6 +644,14 @@ { "source": "/self-hosting/telemetry", "destination": "/self-hosting/lifecycle-maintenance/telemetry" + }, + { + "source": "/self-hosting/metrics", + "destination": "/self-hosting/lifecycle-maintenance/metrics" + }, + { + "source": "/self-hosting/diagnostics", + "destination": "/self-hosting/lifecycle-maintenance/diagnostics" } ] } diff --git a/images/architecture/powersync-docs-diagram-architecture-overview.png b/images/architecture/powersync-docs-diagram-architecture-overview.png old mode 100644 new mode 100755 index 26265567..010dd911 Binary files a/images/architecture/powersync-docs-diagram-architecture-overview.png and b/images/architecture/powersync-docs-diagram-architecture-overview.png differ diff --git a/images/architecture/powersync-docs-diagram-client-architecture-001.png b/images/architecture/powersync-docs-diagram-client-architecture-001.png new file mode 100755 index 00000000..8e1eaf80 Binary files 
/dev/null and b/images/architecture/powersync-docs-diagram-client-architecture-001.png differ diff --git a/images/architecture/powersync-docs-diagram-client-architecture-002.png b/images/architecture/powersync-docs-diagram-client-architecture-002.png new file mode 100755 index 00000000..698366da Binary files /dev/null and b/images/architecture/powersync-docs-diagram-client-architecture-002.png differ diff --git a/images/architecture/powersync-docs-diagram-client-architecture-003.png b/images/architecture/powersync-docs-diagram-client-architecture-003.png new file mode 100755 index 00000000..88531da6 Binary files /dev/null and b/images/architecture/powersync-docs-diagram-client-architecture-003.png differ diff --git a/images/authentication/powersync-docs-diagram-authentication-setup-001.png b/images/authentication/powersync-docs-diagram-authentication-setup-001.png new file mode 100755 index 00000000..a17e6a1e Binary files /dev/null and b/images/authentication/powersync-docs-diagram-authentication-setup-001.png differ diff --git a/images/authentication/powersync-docs-diagram-authentication-setup-002.png b/images/authentication/powersync-docs-diagram-authentication-setup-002.png new file mode 100755 index 00000000..6aa57c25 Binary files /dev/null and b/images/authentication/powersync-docs-diagram-authentication-setup-002.png differ diff --git a/images/authentication/powersync-docs-diagram-authentication-setup-003.png b/images/authentication/powersync-docs-diagram-authentication-setup-003.png new file mode 100755 index 00000000..62ed5acf Binary files /dev/null and b/images/authentication/powersync-docs-diagram-authentication-setup-003.png differ diff --git a/images/authentication/powersync-docs-diagram-authentication-setup-004.png b/images/authentication/powersync-docs-diagram-authentication-setup-004.png new file mode 100755 index 00000000..d63f6233 Binary files /dev/null and b/images/authentication/powersync-docs-diagram-authentication-setup-004.png differ diff 
--git a/images/installation/aurora-create-parameter-group.png b/images/installation/aurora-create-parameter-group.png new file mode 100644 index 00000000..e17aa964 Binary files /dev/null and b/images/installation/aurora-create-parameter-group.png differ diff --git a/images/installation/aurora-set-gtid-flags.png b/images/installation/aurora-set-gtid-flags.png new file mode 100644 index 00000000..94b424ec Binary files /dev/null and b/images/installation/aurora-set-gtid-flags.png differ diff --git a/images/installation/powersync-docs-diagram-app-backend-setup.png b/images/installation/powersync-docs-diagram-app-backend-setup.png new file mode 100755 index 00000000..da9ef7f4 Binary files /dev/null and b/images/installation/powersync-docs-diagram-app-backend-setup.png differ diff --git a/images/integration-1.png b/images/integration-1.png deleted file mode 100644 index 493bc400..00000000 Binary files a/images/integration-1.png and /dev/null differ diff --git a/images/integration-20.png b/images/integration-20.png deleted file mode 100644 index dece8991..00000000 Binary files a/images/integration-20.png and /dev/null differ diff --git a/images/integration-21.png b/images/integration-21.png deleted file mode 100644 index d2ec8775..00000000 Binary files a/images/integration-21.png and /dev/null differ diff --git a/images/integration-33.png b/images/integration-33.png deleted file mode 100644 index f5aae810..00000000 Binary files a/images/integration-33.png and /dev/null differ diff --git a/images/integration-54.avif b/images/integration-54.avif deleted file mode 100644 index 461e552e..00000000 Binary files a/images/integration-54.avif and /dev/null differ diff --git a/images/integration-guides/supabase/diagram-docs-supabase-integration.png b/images/integration-guides/supabase/diagram-docs-supabase-integration.png deleted file mode 100644 index 975c878a..00000000 Binary files a/images/integration-guides/supabase/diagram-docs-supabase-integration.png and /dev/null differ 
diff --git a/images/integration-guides/supabase/powersync-docs-diagram-supabase-integration.png b/images/integration-guides/supabase/powersync-docs-diagram-supabase-integration.png new file mode 100755 index 00000000..299222d7 Binary files /dev/null and b/images/integration-guides/supabase/powersync-docs-diagram-supabase-integration.png differ diff --git a/images/overview-1.webp b/images/overview-1.webp deleted file mode 100644 index c6fe8910..00000000 Binary files a/images/overview-1.webp and /dev/null differ diff --git a/images/overview-2.webp b/images/overview-2.webp deleted file mode 100644 index f0106783..00000000 Binary files a/images/overview-2.webp and /dev/null differ diff --git a/images/overview-3.webp b/images/overview-3.webp deleted file mode 100644 index 26156807..00000000 Binary files a/images/overview-3.webp and /dev/null differ diff --git a/images/overview-4.webp b/images/overview-4.webp deleted file mode 100644 index 35e4b1ad..00000000 Binary files a/images/overview-4.webp and /dev/null differ diff --git a/images/overview-5.webp b/images/overview-5.webp deleted file mode 100644 index de90388a..00000000 Binary files a/images/overview-5.webp and /dev/null differ diff --git a/images/overview-6.webp b/images/overview-6.webp deleted file mode 100644 index 243a40ee..00000000 Binary files a/images/overview-6.webp and /dev/null differ diff --git a/images/overview-7.webp b/images/overview-7.webp deleted file mode 100644 index f6b7f148..00000000 Binary files a/images/overview-7.webp and /dev/null differ diff --git a/images/powersync-architecture-diagram-self-host.png b/images/powersync-architecture-diagram-self-host.png deleted file mode 100644 index fc659c67..00000000 Binary files a/images/powersync-architecture-diagram-self-host.png and /dev/null differ diff --git a/images/powersync-diagram-architecture-overview.png b/images/powersync-diagram-architecture-overview.png deleted file mode 100644 index 6259ac2b..00000000 Binary files 
a/images/powersync-diagram-architecture-overview.png and /dev/null differ diff --git a/images/powersync-docs-app-backend-setup-diagram (2).png b/images/powersync-docs-app-backend-setup-diagram (2).png deleted file mode 100644 index 90c6978c..00000000 Binary files a/images/powersync-docs-app-backend-setup-diagram (2).png and /dev/null differ diff --git a/images/powersync-docs-architecture-diagram-001 (1).png b/images/powersync-docs-architecture-diagram-001 (1).png deleted file mode 100644 index 2b2cb4ce..00000000 Binary files a/images/powersync-docs-architecture-diagram-001 (1).png and /dev/null differ diff --git a/images/powersync-docs-architecture-diagram-002 (2).png b/images/powersync-docs-architecture-diagram-002 (2).png deleted file mode 100644 index fe97c458..00000000 Binary files a/images/powersync-docs-architecture-diagram-002 (2).png and /dev/null differ diff --git a/images/powersync-docs-architecture-diagram-004 (2).png b/images/powersync-docs-architecture-diagram-004 (2).png deleted file mode 100644 index 1bc68fa6..00000000 Binary files a/images/powersync-docs-architecture-diagram-004 (2).png and /dev/null differ diff --git a/images/powersync-docs-architecture-diagram-005 (2).png b/images/powersync-docs-architecture-diagram-005 (2).png deleted file mode 100644 index a36a8a14..00000000 Binary files a/images/powersync-docs-architecture-diagram-005 (2).png and /dev/null differ diff --git a/images/powersync-docs-architecture-diagram-006 (2).png b/images/powersync-docs-architecture-diagram-006 (2).png deleted file mode 100644 index 4f0e58e8..00000000 Binary files a/images/powersync-docs-architecture-diagram-006 (2).png and /dev/null differ diff --git a/images/powersync-docs-architecture-diagram-007 (2).png b/images/powersync-docs-architecture-diagram-007 (2).png deleted file mode 100644 index c840426d..00000000 Binary files a/images/powersync-docs-architecture-diagram-007 (2).png and /dev/null differ diff --git a/images/powersync-docs-architecture-diagram-008 
(3).png b/images/powersync-docs-architecture-diagram-008 (3).png deleted file mode 100644 index 21c50713..00000000 Binary files a/images/powersync-docs-architecture-diagram-008 (3).png and /dev/null differ diff --git a/images/powersync-docs-sync-rules-diagram-001 (2).png b/images/powersync-docs-sync-rules-diagram-001 (2).png deleted file mode 100644 index f0bc2df5..00000000 Binary files a/images/powersync-docs-sync-rules-diagram-001 (2).png and /dev/null differ diff --git a/images/powersync-docs-sync-rules-diagram-001 (3).png b/images/powersync-docs-sync-rules-diagram-001 (3).png deleted file mode 100644 index 32367a6b..00000000 Binary files a/images/powersync-docs-sync-rules-diagram-001 (3).png and /dev/null differ diff --git a/images/powersync-docs-sync-rules-diagram-002.png b/images/powersync-docs-sync-rules-diagram-002.png deleted file mode 100644 index 79fb30f5..00000000 Binary files a/images/powersync-docs-sync-rules-diagram-002.png and /dev/null differ diff --git a/images/powersync-docs-sync-rules-diagram-003.png b/images/powersync-docs-sync-rules-diagram-003.png deleted file mode 100644 index e74b522e..00000000 Binary files a/images/powersync-docs-sync-rules-diagram-003.png and /dev/null differ diff --git a/images/resources-1.png b/images/resources-1.png deleted file mode 100644 index 2291ff72..00000000 Binary files a/images/resources-1.png and /dev/null differ diff --git a/images/resources-2.png b/images/resources-2.png deleted file mode 100644 index c484f33b..00000000 Binary files a/images/resources-2.png and /dev/null differ diff --git a/images/resources-3.png b/images/resources-3.png deleted file mode 100644 index 0d6484db..00000000 Binary files a/images/resources-3.png and /dev/null differ diff --git a/images/resources-5.avif b/images/resources-5.avif deleted file mode 100644 index db45fd2f..00000000 Binary files a/images/resources-5.avif and /dev/null differ diff --git a/images/self-hosting/powersync-architecture-diagram-self-host.png 
b/images/self-hosting/powersync-architecture-diagram-self-host.png index af279b68..fc659c67 100644 Binary files a/images/self-hosting/powersync-architecture-diagram-self-host.png and b/images/self-hosting/powersync-architecture-diagram-self-host.png differ diff --git a/images/usage-10.webp b/images/usage-10.webp deleted file mode 100644 index f2630f00..00000000 Binary files a/images/usage-10.webp and /dev/null differ diff --git a/images/usage-12.png b/images/usage-12.png deleted file mode 100644 index 52af51fa..00000000 Binary files a/images/usage-12.png and /dev/null differ diff --git a/images/usage-13.avif b/images/usage-13.avif deleted file mode 100644 index b4a66c3a..00000000 Binary files a/images/usage-13.avif and /dev/null differ diff --git a/images/usage-15.avif b/images/usage-15.avif deleted file mode 100644 index 451e7a80..00000000 Binary files a/images/usage-15.avif and /dev/null differ diff --git a/images/usage-16.avif b/images/usage-16.avif deleted file mode 100644 index 99d30a3e..00000000 Binary files a/images/usage-16.avif and /dev/null differ diff --git a/images/usage-8.webp b/images/usage-8.webp deleted file mode 100644 index 290f6d45..00000000 Binary files a/images/usage-8.webp and /dev/null differ diff --git a/images/usage-9.webp b/images/usage-9.webp deleted file mode 100644 index 68159bf8..00000000 Binary files a/images/usage-9.webp and /dev/null differ diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-001.png b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-001.png new file mode 100755 index 00000000..06ebcd23 Binary files /dev/null and b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-001.png differ diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-002.png b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-002.png new file mode 100755 index 00000000..d287dbb1 Binary files /dev/null and b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-002.png differ diff --git 
a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-003.png b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-003.png new file mode 100755 index 00000000..2bf322c2 Binary files /dev/null and b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-003.png differ diff --git a/images/usage/use-case-prioritized.png b/images/usage/use-case-prioritized.png deleted file mode 100644 index 5014659a..00000000 Binary files a/images/usage/use-case-prioritized.png and /dev/null differ diff --git a/installation/app-backend-setup.mdx b/installation/app-backend-setup.mdx index b8e4cc06..b59b87eb 100644 --- a/installation/app-backend-setup.mdx +++ b/installation/app-backend-setup.mdx @@ -11,7 +11,7 @@ When you integrate PowerSync into your app project, PowerSync relies on that "ba 2. **Authentication integration:** Your backend is responsible for securely generating the [JWTs](/installation/authentication-setup) used by the PowerSync Client SDK to authenticate with the [PowerSync Service](/architecture/powersync-service). - + diff --git a/installation/authentication-setup.mdx b/installation/authentication-setup.mdx index 2f79975d..becf9343 100644 --- a/installation/authentication-setup.mdx +++ b/installation/authentication-setup.mdx @@ -9,25 +9,25 @@ PowerSync clients (i.e. 
apps used by your users that embed the PowerSync Client Before using PowerSync, an application's existing architecture may look like this: - + The [PowerSync Service](/architecture/powersync-service) uses database native credentials and authenticates directly against the [backend database](/installation/database-setup) using the configured credentials: - + When the PowerSync client SDK is included in an app project, it uses [existing app-to-backend](/installation/app-backend-setup) authentication to [retrieve a JSON Web Token (JWT)](/installation/authentication-setup): - + The PowerSync client SDK uses the retrieved JWT to authenticate directly against the PowerSync Service: - + Users are not persisted in PowerSync, and there is no server-to-server communication used for client authentication. diff --git a/installation/client-side-setup.mdx b/installation/client-side-setup.mdx index 1d7542d3..6ca1fe4c 100644 --- a/installation/client-side-setup.mdx +++ b/installation/client-side-setup.mdx @@ -78,7 +78,7 @@ PowerSync offers a variety of client SDKs. Please see the steps based on your ap - + See the full SDK reference for further details and getting started instructions: diff --git a/installation/client-side-setup/define-your-schema.mdx b/installation/client-side-setup/define-your-schema.mdx index 0664bb69..e5c763af 100644 --- a/installation/client-side-setup/define-your-schema.mdx +++ b/installation/client-side-setup/define-your-schema.mdx @@ -42,7 +42,7 @@ For an example implementation of the client-side schema, see the _Getting Starte * [1\. Define the Schema](/client-sdk-references/swift#1-define-the-schema) -### Node.js (alpha) +### Node.js (beta) * [1\. 
Define the Schema](/client-sdk-references/node#1-define-the-schema) diff --git a/installation/client-side-setup/instantiate-powersync-database.mdx b/installation/client-side-setup/instantiate-powersync-database.mdx index 1ea3ee42..e9abc12d 100644 --- a/installation/client-side-setup/instantiate-powersync-database.mdx +++ b/installation/client-side-setup/instantiate-powersync-database.mdx @@ -31,7 +31,7 @@ For an example implementation of instantiating the client-side database, see the * [2\. Instantiate the PowerSync Database](/client-sdk-references/swift#2-instantiate-the-powersync-database) -### Node.js (alpha) +### Node.js (beta) * [2\. Instantiate the PowerSync Database](/client-sdk-references/node#2-instantiate-the-powersync-database) diff --git a/installation/client-side-setup/integrating-with-your-backend.mdx b/installation/client-side-setup/integrating-with-your-backend.mdx index 0064c37b..a711452d 100644 --- a/installation/client-side-setup/integrating-with-your-backend.mdx +++ b/installation/client-side-setup/integrating-with-your-backend.mdx @@ -33,7 +33,7 @@ For an example implementation of a PowerSync 'backend connector', see the _Getti * [3\. Integrate with your Backend](/client-sdk-references/javascript-web#3-integrate-with-your-backend) -### Node.js (alpha) +### Node.js (beta) * [3\. Integrate with your Backend](/client-sdk-references/node#3-integrate-with-your-backend) diff --git a/installation/database-connection.mdx b/installation/database-connection.mdx index 19767f1a..2b33bde0 100644 --- a/installation/database-connection.mdx +++ b/installation/database-connection.mdx @@ -162,15 +162,50 @@ Also see: - [MongoDB Atlas Device Sync Migration Guide](/migration-guides/mongodb-atlas) - [MongoDB Setup](/installation/database-setup#mongodb) -## MySQL (Alpha) Specifics +## MySQL (Beta) Specifics -1. Fill in your connection details from MySQL: - 1. "**Name**" can be any name for the connection. - 2. 
"**Host**" and "**Database name**" is the database to replicate. - 3. "**Username**" and "**Password**" maps to your database user. -2. Click **"Test Connection"** and fix any errors. If have any issues connecting, reach out to our support engineers on our [Discord server](https://discord.gg/powersync) or otherwise [contact us](/resources/contact-us). - 1. Make sure that your database allows access to PowerSync's IPs — see [Security and IP Filtering](/installation/database-setup/security-and-ip-filtering) -3. Click **"Save".** +Select your MySQL hosting provider for steps to connect your newly-created PowerSync instance to your MySQL database: + + +To enable binary logging and GTID replication in AWS Aurora, you need to create a [DB Parameter Group](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Reference.ParameterGroups.html) +and configure it with the necessary parameters. Follow these steps: + +1. Navigate to [Amazon RDS console](https://console.aws.amazon.com/rds/) + In the navigation pane, choose Parameter groups and click on `Create Parameter Group`. + + + +2. Add all the required [binlog configuration](/installation/database-setup#binlog-configuration) parameters. For example: + + + +3. Associate your newly created parameter group with your Aurora cluster: + 1. In the navigation pane, choose Databases. + 2. Select your Aurora cluster. + 3. Choose Modify. + 4. In the DB Parameter Group section, select the parameter group you created. + 5. Click Continue and then Apply immediately. +4. Whitelist PowerSync's IPs in your Aurora cluster's security group to allow access. See [Security and IP Filtering](/installation/database-setup/security-and-ip-filtering) for more details. +5. + -PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. +For other providers and self-hosted databases: + +Fill in your MySQL connection details: + 1. 
"**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" are required. + 2. "**Name**" can be any name for the connection. + 3. "**Host**" the endpoint for your database. + 4. "**Database name**" is the default database to replicate. Additional databases are derived by qualifying the tables in the sync rules. + 5. "**Username**" and "**Password**" maps to your database user. + 6. If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". + 7. Click **"Test Connection"** and fix any errors. + 8. Click **"Save".** + + PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. + + + + + Make sure that your MySQL database allows access to PowerSync's IPs — see [Security and IP Filtering](/installation/database-setup/security-and-ip-filtering) + diff --git a/installation/database-setup.mdx b/installation/database-setup.mdx index 0b9459bd..4dad4330 100644 --- a/installation/database-setup.mdx +++ b/installation/database-setup.mdx @@ -4,7 +4,7 @@ description: "Configure your backend database for PowerSync, including permissio sidebarTitle: "Overview" --- -Jump to: [Postgres](#postgres) | [MongoDB](#mongodb) | [MySQL](#mysql-alpha) +Jump to: [Postgres](#postgres) | [MongoDB](#mongodb) | [MySQL](#mysql-beta) import PostgresPowerSyncUser from '/snippets/postgres-powersync-user.mdx'; import PostgresPowerSyncPublication from '/snippets/postgres-powersync-publication.mdx'; @@ -54,7 +54,7 @@ We have documented steps for some hosting providers: ![](/images/setup-2.png) - + ### 2. 
Create a PowerSync database user Create a PowerSync user on Postgres: @@ -62,10 +62,10 @@ We have documented steps for some hosting providers: ```sql -- SQL to create powersync user CREATE ROLE powersync_role WITH BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; - + -- Allow the role to perform replication tasks GRANT rds_replication TO powersync_role; - + -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; @@ -168,10 +168,10 @@ We have documented steps for some hosting providers: ```sql -- Create a publication to replicate tables. -- PlanetScale does not support ON ALL TABLES so - -- Specify each table you want to sync + -- Specify each table you want to sync -- The publication must be named "powersync" CREATE PUBLICATION powersync - FOR TABLE public.lists, public.todos; + FOR TABLE public.lists, public.todos; ``` @@ -195,11 +195,11 @@ For other providers and self-hosted databases: ```sql -- Check the replication type - + SHOW wal_level; - + -- Ensure logical replication is enabled - + ALTER SYSTEM SET wal_level = logical; ``` @@ -253,14 +253,13 @@ readAnyDatabase@admin For self-hosted MongoDB, or for creating custom roles on MongoDB Atlas, PowerSync requires the following privileges/granted actions: -- On the database being replicated: `listCollections` -- On all collections in the database: `changeStream` - - This must apply to the entire database, not individual collections. Specify `collection: ""` - - If replicating from multiple databases, this must apply to the entire cluster. Specify `db: ""` -- On each collection being replicated: `find` -- On the `_powersync_checkpoints` collection: `createCollection`, `dropCollection`, `find`, `changeStream`, `insert`, `update`, and `remove` +- `listCollections`: This privilege must be granted on the database being replicated. 
+- `find`: This privilege must be granted either at the database level or on specific collections. +- `changeStream`: This privilege must be granted at the database level (not on individual collections). In MongoDB Atlas, set `collection: ""` or check `Apply to any collection` if you want to apply this privilege to any collection. + - If replicating from multiple databases, this must apply to the entire cluster. Specify `db: ""` or check `Apply to any database` in MongoDB Atlas. +- For the `_powersync_checkpoints` collection, add the following privileges: `createCollection`, `dropCollection`, `find`, `changeStream`, `insert`, `update`, and `remove` - To allow PowerSync to automatically enable [`changeStreamPreAndPostImages`](#post-images) on - replicated collections, additionally add the `collMod` permission on all replicated collections. + replicated collections, additionally add the `collMod` permission on the database and all collections being replicated. ### Post-Images @@ -307,40 +306,77 @@ If you need to use private endpoints with MongoDB Atlas, see [Private Endpoints] For more information on migrating from Atlas Device Sync to PowerSync, see our [migration guide](/migration-guides/mongodb-atlas). -## MySQL (Alpha) - - - This section is a work in progress. More details for MySQL connections are coming soon. In the meantime, ask on our [Discord server](https://discord.gg/powersync) if you have any questions. - +## MySQL (Beta) **Version compatibility**: PowerSync requires MySQL version 5.7 or greater. -MySQL connections use the [binary log](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html) to replicate changes. +PowerSync reads from the MySQL [binary log](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html) to replicate changes. We use a modified version of the [Zongji MySQL](https://github.com/powersync-ja/powersync-mysql-zongji) binlog listener to achieve this.
+ +### Binlog Configuration -Generally, this requires the following config: +To ensure that PowerSync can read the binary log, you need to configure your MySQL server to enable binary logging and configure it with the following server command options: +- [server_id](https://dev.mysql.com/doc/refman/8.4/en/replication-options.html#sysvar_server_id): Uniquely identifies the MySQL server instance in the replication topology. Default value is **1**. +- [log_bin](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_log_bin): **ON**. Enables binary logging. Default is **ON** for MySQL 8.0 and later, but **OFF** for MySQL 5.7. +- [enforce_gtid_consistency](https://dev.mysql.com/doc/refman/8.4/en/replication-options-gtids.html#sysvar_enforce_gtid_consistency): **ON**. Enforces GTID consistency. Default is **OFF**. +- [gtid_mode](https://dev.mysql.com/doc/refman/8.4/en/replication-options-gtids.html#sysvar_gtid_mode): **ON**. Enables GTID based logging. Default is **OFF**. +- [binlog_format](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_binlog_format): **ROW**. Sets the binary log format to row-based replication. This is required for PowerSync to correctly replicate changes. Default is **ROW**. -- `gtid_mode` : `ON` -- `enforce_gtid_consistency` : `ON` -- `binlog_format` : `ROW` +These can be specified in a MySQL [option file](https://dev.mysql.com/doc/refman/8.4/en/option-files.html): +``` +server_id= +log_bin=ON +enforce_gtid_consistency=ON +gtid_mode=ON +binlog_format=ROW +``` -PowerSync also requires a user with replication permissions on the database. An example: +### Database User Configuration +PowerSync also requires a MySQL user with **REPLICATION** and **SELECT** permission on the source databases. 
These can be added by running the following SQL commands: ```sql -- Create a user with necessary privileges -CREATE USER 'repl_user'@'%' IDENTIFIED BY 'good_password'; +CREATE USER 'repl_user'@'%' IDENTIFIED BY ''; -- Grant replication client privilege GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl_user'@'%'; --- Grant access to the specific database -GRANT ALL PRIVILEGES ON powersync.* TO 'repl_user'@'%'; +-- Grant select access to the specific database +GRANT SELECT ON .* TO 'repl_user'@'%'; + +-- Apply changes +FLUSH PRIVILEGES; +``` + +It is possible to constrain the MySQL user further and limit access to specific tables. Care should be taken to ensure that all the tables in the sync rules are included in the grants. +```sql +-- Grant select to the users and the invoices tables in the source database +GRANT SELECT ON .users TO 'repl_user'@'%'; +GRANT SELECT ON .invoices TO 'repl_user'@'%'; -- Apply changes FLUSH PRIVILEGES; ``` +### Additional Configuration (optional) +#### Binlog + +The binlog can be configured to limit logging to specific databases. By default, events for tables in all the databases on the MySQL server will be logged. +- [binlog-do-db](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#option_mysqld_binlog-do-db): Only updates for tables in the specified database will be logged. +- [binlog-ignore-db](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#option_mysqld_binlog-ignore-db): No updates for tables in the specified database will be logged. + +Examples: +``` +# Only row events for tables in the user_db and invoices_db databases will appear in the binlog. +binlog-do-db=user_db +binlog-do-db=invoices_db +``` +``` +# Row events for tables in the user_db will be ignored. Events for any other database will be logged. 
+binlog-ignore-db=user_db +``` + ## Next Step Next, connect PowerSync to your database: @@ -352,4 +388,4 @@ Next, connect PowerSync to your database: Refer to **PowerSync Service Setup** in the Self-Hosting section. - \ No newline at end of file + diff --git a/installation/quickstart-guide.mdx b/installation/quickstart-guide.mdx index d5324153..7c195945 100644 --- a/installation/quickstart-guide.mdx +++ b/installation/quickstart-guide.mdx @@ -3,13 +3,13 @@ title: "Quickstart Guide / Installation Overview" sidebarTitle: "Quickstart / Overview" --- -PowerSync is designed to be stack agnostic, and currently supports [Postgres](/installation/database-setup#postgres), [MongoDB](/installation/database-setup#mongodb) and [MySQL](/installation/database-setup#mysql-alpha) (alpha) as the backend source database, and has the following official client-side SDKs available: +PowerSync is designed to be stack agnostic, and currently supports [Postgres](/installation/database-setup#postgres), [MongoDB](/installation/database-setup#mongodb) and [MySQL](/installation/database-setup#mysql-beta) (Beta) as the backend source database, and has the following official client-side SDKs available: - [**Flutter**](/client-sdk-references/flutter) (mobile and [web](/client-sdk-references/flutter/flutter-web-support)) - [**React Native**](/client-sdk-references/react-native-and-expo) (mobile and [web](/client-sdk-references/react-native-and-expo/react-native-web-support)) - [**JavaScript Web**](/client-sdk-references/javascript-web) (including integrations for React & Vue) - [**Kotlin Multiplatform**](/client-sdk-references/kotlin-multiplatform) - [**Swift**](/client-sdk-references/swift) -- [**Node.js**](/client-sdk-references/node) (alpha) +- [**Node.js**](/client-sdk-references/node) (beta) - [**.NET**](/client-sdk-references/dotnet) (alpha) diff --git a/integration-guides/flutterflow-+-powersync.mdx b/integration-guides/flutterflow-+-powersync.mdx index 0d839d9b..5eb700a1 100644 --- 
a/integration-guides/flutterflow-+-powersync.mdx +++ b/integration-guides/flutterflow-+-powersync.mdx @@ -734,4 +734,5 @@ Below is a list of known issues and limitations. 4. After removing that option, clean the build folder and build the project again. 5. You should now be able to submit to the App Store. 2. Exporting the code from FlutterFlow using the "Download Code" action in FlutterFlow requires the same workaround listed above. -3. Other common issues and troubleshooting techniques are documented here: [Troubleshooting](/resources/troubleshooting). \ No newline at end of file +3. The PowerSync FlutterFlow Library does not currently support [encryption at rest](/usage/use-case-examples/data-encryption). +4. Other common issues and troubleshooting techniques are documented here: [Troubleshooting](/resources/troubleshooting). \ No newline at end of file diff --git a/integration-guides/supabase-+-powersync.mdx b/integration-guides/supabase-+-powersync.mdx index 0e7c6a02..37a49811 100644 --- a/integration-guides/supabase-+-powersync.mdx +++ b/integration-guides/supabase-+-powersync.mdx @@ -35,7 +35,7 @@ For web apps, this guide assumes that you have [pnpm](https://pnpm.io/installati Upon successful integration of Supabase + PowerSync, your system architecture will look like this: (click to enlarge image) - + The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the Supabase Postgres database (based on configured sync rules as you will see later in this guide). Client-side data modifications are persisted in the local SQLite database as well as stored in an upload queue that gets processed via the Supabase client library when network connectivity is available. Therefore reads and writes can happen in the app regardless of whether the user is online or offline, by using the local SQLite database. 
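The upload-queue processing described above can be sketched as follows. This is a hedged illustration, not the official connector: the `getNextCrudTransaction()` / `crud` / `complete()` shape follows the PowerSync JS SDK, but the Supabase query-builder calls and the `lists` table/columns are assumptions for the example.

```javascript
// Sketch: drain the PowerSync client-side upload queue into Supabase.
// `db` is assumed to be an open PowerSyncDatabase, `supabase` a Supabase client.
async function uploadData(db, supabase) {
  const transaction = await db.getNextCrudTransaction();
  if (!transaction) return; // queue is empty

  for (const op of transaction.crud) {
    const table = supabase.from(op.table);
    if (op.op === 'PUT') {
      // Insert or replace the full row
      await table.upsert({ ...op.opData, id: op.id });
    } else if (op.op === 'PATCH') {
      // Apply a partial update
      await table.update(op.opData).eq('id', op.id);
    } else if (op.op === 'DELETE') {
      await table.delete().eq('id', op.id);
    }
  }

  // Mark the transaction as uploaded so it is removed from the queue
  await transaction.complete();
}
```

Because writes are queued locally first, this function only runs when connectivity is available; your backend (here Supabase) remains the source of truth for accepting or rejecting each change.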
diff --git a/intro/powersync-overview.mdx b/intro/powersync-overview.mdx index 0ee06b39..b8af79eb 100644 --- a/intro/powersync-overview.mdx +++ b/intro/powersync-overview.mdx @@ -22,7 +22,7 @@ PowerSync is designed to be backend database agnostic, and currently supports: - + ### Supported Client SDKs diff --git a/migration-guides/mongodb-atlas.mdx b/migration-guides/mongodb-atlas.mdx index 8c5ebe91..1c54bd3f 100644 --- a/migration-guides/mongodb-atlas.mdx +++ b/migration-guides/mongodb-atlas.mdx @@ -158,7 +158,7 @@ Here is an example of a client-side schema for PowerSync using a simple `todos` ``` ```typescript TypeScript - Node.js - // Our Node.js SDK is currently in an alpha release + // Our Node.js SDK is currently in a beta release import { column, Schema, Table } from '@powersync/node'; const todos = new Table( @@ -311,7 +311,7 @@ Now that we have our Sync Rules and client-side schema defined, we can instantia ``` ```typescript TypeScript - Node.js - // Our Node.js SDK is currently in an alpha release + // Our Node.js SDK is currently in a beta release import { PowerSyncDatabase } from '@powersync/node'; import { Connector } from './Connector'; import { AppSchema } from './Schema'; @@ -502,7 +502,7 @@ The same applies to writing data: `INSERT`, `UPDATE` and `DELETE` statements are #### Live queries -PowerSync supports "live queries" or "watch queries" which automatically refresh when data in the SQLite database is updated (e.g. as a result of syncing from the server). This allows for real-time reactivity of your app UI. See the [Client SDK documentation](/client-sdk-references/introduction) for your specific platform for more details. +PowerSync supports "live queries" or "watch queries" which automatically refresh when data in the SQLite database is updated (e.g. as a result of syncing from the server). This allows for real-time reactivity of your app UI. See the [Live Queries/Watch Queries](/usage/use-case-examples/watch-queries) page for more details. 
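For illustration, a callback-style watch query with the JavaScript SDK might look like the sketch below. The `db.watch` handler shape follows the SDK; the `todos` table and `db` instance are assumptions for the example.

```javascript
// Sketch: a live query that re-runs whenever synced data changes the result.
// `db` is assumed to be an open PowerSyncDatabase; `todos` is an example table.
function watchPendingTodos(db, onResult) {
  db.watch(
    'SELECT * FROM todos WHERE completed = ?',
    [0],
    {
      onResult: (result) => {
        // Called with the initial result set, then again on every change
        onResult(result.rows?._array ?? []);
      },
      onError: (error) => console.error('watch failed', error)
    }
  );
}
```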
### 8. Accept uploads on the backend diff --git a/resources/demo-apps-example-projects.mdx b/resources/demo-apps-example-projects.mdx index 38057360..bca2e02e 100644 --- a/resources/demo-apps-example-projects.mdx +++ b/resources/demo-apps-example-projects.mdx @@ -76,7 +76,7 @@ This page showcases example projects organized by platform and backend technolog * [Vite with Encryption](https://github.com/powersync-ja/powersync-js/blob/main/demos/example-vite-encryption/README.md) - Web database encryption demo - + #### Examples * [CLI Example](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-node) - Node.js CLI client connecting to PowerSync and running live queries @@ -86,8 +86,6 @@ This page showcases example projects organized by platform and backend technolog #### Supabase Backend - * [Hello PowerSync](https://github.com/powersync-ja/powersync-kotlin/tree/main/demos/hello-powersync) - Minimal starter app - * Supports Android, iOS, and Desktop (JVM) targets * [To-Do List App](https://github.com/powersync-ja/powersync-kotlin/tree/main/demos/supabase-todolist) * Supports Android, iOS, and Desktop (JVM) targets * Includes a guide for [implementing background sync on Android](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/docs/BackgroundSync.md) @@ -148,22 +146,32 @@ This page showcases example projects organized by platform and backend technolog * [Supabase (Postgres) + Local Development](https://github.com/powersync-ja/self-host-demo/tree/main/demos/supabase) * [Django](https://github.com/powersync-ja/self-host-demo/tree/main/demos/django) + + +## Notable Community Projects - - #### Custom Backends + + This is a list of projects we've spotted from community members 🙌 These projects haven't necessarily been vetted by us. 
+ +Browse the Community GitHub Org for a collection of community-based starter projects, templates, demos and other projects to help you succeed with PowerSync: + * https://github.com/powersync-community + +Other community projects: + + * Laravel Backend * https://github.com/IsmailAshour/powersync-laravel-backend + - #### Flutter Projects - + * Flutter Instagram Clone with Supabase + Firebase * https://github.com/Gambley1/flutter-instagram-offline-first-clone * Jepsen PowerSync Testing - Formal consistency validation framework * https://github.com/nurturenature/jepsen-powersync - - #### JavaScript & TypeScript Projects + + * SolidJS Hooks for PowerSync Queries * https://github.com/aboviq/powersync-solid * Effect + Kysely + Stytch Integration @@ -173,6 +181,8 @@ This page showcases example projects organized by platform and backend technolog * Expo Web Integration * https://github.com/ImSingee/powersync-web-workers * Note: Our [React Native Web support](/client-sdk-references/react-native-and-expo/react-native-web-support) now eliminates the need to patch the `@powersync/web` module + * Attachments library for Node.js + * https://www.npmjs.com/package/@muhammedv/powersync-attachments-for-node diff --git a/resources/feature-status.mdx b/resources/feature-status.mdx index 3fa23d15..633decdf 100644 --- a/resources/feature-status.mdx +++ b/resources/feature-status.mdx @@ -45,7 +45,7 @@ Below is a summary of the current main PowerSync features and their release stat | **Category / Item** | **Status** | | -------------------------------------------------- | -------------- | | **Database Connectors** | | -| MySQL | Alpha | +| MySQL | Beta | | MongoDB | V1 | | Postgres | V1 | | | | @@ -56,7 +56,7 @@ Below is a summary of the current main PowerSync features and their release stat | | | | **Client SDKs** | | | .NET SDK | Alpha | -| Node.js SDK | Alpha | +| Node.js SDK | Beta | | Swift SDK | V1 | | Kotlin Multiplatform SDK | V1 | | JavaScript/Web SDK | V1 | diff --git 
a/resources/supported-hardware.mdx b/resources/supported-hardware.mdx deleted file mode 100644 index 8fd53b03..00000000 --- a/resources/supported-hardware.mdx +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: "Supported Hardware and Operating Systems" -sidebarTitle: "Hardware and OS" ---- - -# Hardware - -## Desktop -**Supported**: minimum of 2GB RAM, Core i3 CPU. - -**Recommended**: 4GB RAM or more. - -## Mobile - -### Android -**Supported**: Minimum of 1.5GB RAM and 1.4GHz dual-core CPU. - -**Recommended**: Minimum of 4GB RAM and 1.4GHz quad-core CPU. Using a device with a recent Android version that receives regular security updates is recommended. - -### iOS -**Supported**: iPhone 7, iPad 4 and newer. - -**Recommended**: iPhone 12, iPad 7 and iPad mini 5 and newer. - -# Operating Systems - -## Android -**Recommended**: The latest 3 Android versions. - -## iOS -**Recommended**: The latest three iOS/iPadOS versions. - -## Windows -**Recommended**: Windows 10 or later. - -## Other -PowerSync is not extensively tested on other operating systems, but we'll work with customers to resolve any issues on a case by case basis. \ No newline at end of file diff --git a/resources/supported-platforms.mdx b/resources/supported-platforms.mdx new file mode 100644 index 00000000..80e542e7 --- /dev/null +++ b/resources/supported-platforms.mdx @@ -0,0 +1,89 @@ +--- +title: "Supported Platforms" +description: "Supported platforms by PowerSync Client SDK" +--- + +## Swift SDK + +| Platform | Supported? 
| Notes | +| ---------------------------------- | ---------- | --------------------------------------------------------------------------- | +| macOS | Yes | | +| iOS | Yes | | +| watchOS | Yes | watchOS 26 not supported yet | +| iPadOS | Yes | | +| tvOS | No | Planned | +| macOS Catalyst | No | KT-40442 Support building Kotlin/Native for Mac Catalyst (x86-64 and arm64) | +| visionOS | No | KT-59571 Add support for visionOS SDK | +| Non-apple targets (Linux, Windows) | No | No good way to link PowerSync | + +## Kotlin SDK + +| Platform | Supported? | Notes | +| ----------------------- | ------------------------------------------------------------------------------ | --------------------------------------------------------------------------- | +| Android | Yes (x86-64, x86, aarch64, armv7) | | +| Android native | No | | +| iOS | Yes (aarch64 device, x86-64 and aarch64 simulators) | | +| macOS (native) | Yes (x86-64, aarch64) | | +| macOS catalyst (native) | No | KT-40442 Support building Kotlin/Native for Mac Catalyst (x86-64 and arm64) | +| watchOS | Yes (aarch64 device, armv8 32-bit pointers ABI, x86-64 and aarch64 simulators) | | +| tvOS | Yes (aarch64 device, x86-64 and aarch64 simulators) | | +| visionOS | No | KT-59571 Add support for visionOS SDK | +| Windows (JVM) | Yes (x86-64 only) | | +| Linux (JVM) | Yes (x86-64, aarch64) | | +| macOS (JVM) | Yes (x86-64, aarch64) | | +| Linux (native) | No | Maybe soon | +| Windows (native) | No | Maybe soon | +| JS | No | | +| WebAssembly | No | | + +## Dart SDK + +| Platform | Supported? 
| Notes | +| --------------- | ----------------------------------- | ------------------------------------------------------------- | +| Flutter Android | Yes (x86-64, aarch64, armv7) | | +| Flutter iOS | Yes | | +| Flutter macOS | Yes (x86-64, aarch64) | | +| Flutter Windows | Yes (x86-64 only) | | +| Flutter Linux | Yes (x86-64, aarch64) | | +| Flutter web | Yes | Only dart2js is tested, dart2wasm has issues | +| Dart web | With custom setup | | +| Dart macOS | With custom setup | | +| Dart Windows | With custom setup (x86-64 only) | | +| Dart Linux | With custom setup (x86-64, aarch64) | Dart supports armv7 and riscv64gc as well, we currently don’t | + +## .NET SDK + +| Platform | Supported? | Notes | +| ----------- | ----------------- | ----------------------- | +| WPF | No | Some known build issues | +| MAUI | Yes | | +| Winforms | YMMV - not tested | | +| CLI Windows | Yes | | +| CLI Mac | Yes | | +| Avalonia UI | YMMV - not tested | | + +## React Native SDK + +| Platform | Supported? | Notes | +| ------------------------ | ----------------- | ----- | +| React Native | Yes | | +| React Native w/ Expo | Yes | | +| React Native for Web | Yes | | +| React Strict DOM | YMMV - not tested | | +| React Native for Windows | No | | + +## JS/Web SDK + +| Platform | Supported? | Notes | +| --------------------- | ---------- | ---------------------------------- | +| Chrome & Chrome-based | Yes | See VFS notes | +| Firefox | Yes | OPFS Not supported in private tabs | +| Safari | Yes | OPFS Not supported in private tabs | + +## Node.js SDK + +| Platform | Supported? 
| Notes | +| -------- | ---------- | ----- | +| macOS | Yes | | +| Linux | Yes | | +| Windows | Yes | | \ No newline at end of file diff --git a/self-hosting/installation.mdx b/self-hosting/installation.mdx index 266e65ce..465267dd 100644 --- a/self-hosting/installation.mdx +++ b/self-hosting/installation.mdx @@ -8,7 +8,7 @@ sidebarTitle: Overview The typical components of a self-hosted production environment are: - ![](/images/powersync-architecture-diagram-self-host.png) + ![](/images/self-hosting/powersync-architecture-diagram-self-host.png) The self-hosted deployment is run via Docker. A Docker image is distributed via [Docker Hub](https://hub.docker.com/r/journeyapps/powersync-service). Run PowerSync using: diff --git a/self-hosting/installation/client-side-setup.mdx b/self-hosting/installation/client-side-setup.mdx index cf12eaff..27db4a1e 100644 --- a/self-hosting/installation/client-side-setup.mdx +++ b/self-hosting/installation/client-side-setup.mdx @@ -22,9 +22,7 @@ The recommended approach is to initially use a short-lived development token and 2. Add the key to your PowerSync Service configuration file, e.g.: -```yaml -# config.yaml - +```yaml config.yaml client_auth: # static collection of public keys for JWT verification jwks: @@ -51,7 +49,7 @@ import * as jose from 'jose'; // get this from .env const powerSyncPrivateKey = { alg: 'RS256', - k: '[secret]' + k: '[secret]', ... }; @@ -108,9 +106,7 @@ If you are using Supabase or Firebase authentication, PowerSync can verify JWTs Under `client_auth` in your config file, enable Supabase authentication: -```yaml -# config.yaml - +```yaml config.yaml client_auth: # Enable this if using Supabase Auth supabase: true @@ -127,9 +123,7 @@ Under `client_auth` in your config file, add your Firebase JWKS URI and audience * JWT Audience: Your Firebase project ID -```yaml -# config.yaml - +```yaml config.yaml client_auth: # JWKS URIs can be specified here. 
jwks_uri: @@ -147,9 +141,7 @@ Refer to: [Custom](/installation/authentication-setup/custom) PowerSync supports both RS256 and HS256. Insert your auth details into your configuration file: -```yaml -# config.yaml - +```yaml config.yaml client_auth: # JWKS URIs can be specified here. jwks_uri: http://demo-backend:6060/api/auth/keys @@ -164,4 +156,4 @@ client_auth: # kid: '${PS_JWK_KID}' audience: ['powersync-dev', 'powersync'] -``` \ No newline at end of file +``` diff --git a/self-hosting/installation/powersync-service-setup.mdx b/self-hosting/installation/powersync-service-setup.mdx index 74b0c391..d00752d9 100644 --- a/self-hosting/installation/powersync-service-setup.mdx +++ b/self-hosting/installation/powersync-service-setup.mdx @@ -106,9 +106,7 @@ The config file schema is also available here: Below is a skeleton config file you can copy and paste to edit locally: -```yaml -# config.yaml - +```yaml config.yaml # Settings for source database replication replication: # Specify database connection details @@ -210,9 +208,7 @@ See examples here: Your project's [sync rules](/self-hosting/installation/powersync-service-setup#sync-rules) can either be specified within your configuration file directly, or in a separate file that is referenced. -```yaml -# config.yaml - +```yaml config.yaml # Define sync rules: sync_rules: content: | @@ -229,7 +225,7 @@ sync_rules: We recommend starting with syncing a single table in a [global bucket](/usage/sync-rules/example-global-data). 
Choose a table and sync it by adding the following to your sync rules: -```yaml +```yaml config.yaml sync_rules: content: | bucket_definitions: @@ -255,4 +251,4 @@ For more information about sync rules see: # Example docker exec -it self-host-demo-mongo-1 mongosh "mongodb://localhost:27017/powersync_demo" --eval "db.bucket_data.find().pretty()" ``` - \ No newline at end of file + diff --git a/self-hosting/lifecycle-maintenance/diagnostics.mdx b/self-hosting/lifecycle-maintenance/diagnostics.mdx new file mode 100644 index 00000000..38c85dfc --- /dev/null +++ b/self-hosting/lifecycle-maintenance/diagnostics.mdx @@ -0,0 +1,31 @@ +--- +title: "Diagnostics" +description: "How to use the PowerSync Service Diagnostics API" +--- + +All self-hosted PowerSync Service instances ship with a Diagnostics API. +This API provides the following diagnostic information: + +- Connections → Connected source backend database and any active errors associated with the connection. +- Active Sync Rules → Currently deployed sync rules and the status of the sync rules. + +# Configuration + +1. To enable the Diagnostics API, specify an API token in your PowerSync YAML file: + +```yaml powersync.yaml +api: + tokens: + - YOUR_API_TOKEN +``` +Make sure to use a secure API token as part of this configuration. + +2. Restart the PowerSync service. + +3. Once configured, send an HTTP request to your PowerSync Service Diagnostics API endpoint. Include the API token set in step 1 as a Bearer token in the Authorization header.
+ +```shell +curl -X POST http://localhost:8080/api/admin/v1/diagnostics \ + -H "Content-Type: application/json" \ + -H "Authorization: Bearer YOUR_API_TOKEN" +``` diff --git a/self-hosting/lifecycle-maintenance/metrics.mdx b/self-hosting/lifecycle-maintenance/metrics.mdx new file mode 100644 index 00000000..a8827139 --- /dev/null +++ b/self-hosting/lifecycle-maintenance/metrics.mdx @@ -0,0 +1,34 @@ +--- +title: "Metrics" +description: "Managing and using the PowerSync Service Metrics" +--- + +# Metrics Endpoint + +PowerSync exposes instance metrics via a Prometheus-compatible endpoint. This allows you to integrate with Prometheus or other monitoring systems that scrape Prometheus endpoints. + +It's not recommended to scrape the Prometheus endpoint manually; we suggest using Prometheus or other compatible tools. PowerSync does not currently support pushing to OpenTelemetry collectors. + +### Configuration + +1. To enable metrics, update your PowerSync YAML file to include the `prometheus_port` and set a port number. + +```yaml powersync.yaml +telemetry: + # Set the port at which the Prometheus metrics will be exposed + prometheus_port: 9090 +``` + +2. Update your Docker Compose file to forward the `prometheus_port`. + +```yaml docker-compose.yaml +ports: + # Forward port 8080 for the PowerSync service + - 8080:8080 + # Forward port 9090 for Prometheus metrics + - 9090:9090 +``` + +Once enabled, restart the service and the metrics endpoint will return Prometheus-formatted metrics, as described in the [What is Collected](/self-hosting/lifecycle-maintenance/telemetry#whatiscollected) section of the [Telemetry](/self-hosting/lifecycle-maintenance/telemetry) docs. + +If you're running multiple containers (e.g. splitting up replication containers and API containers), you need to scrape the metrics separately for each container.
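To make the scraping step concrete, a minimal Prometheus scrape configuration for the endpoint above could look like the following sketch; the job name and target host are assumptions to adapt to your deployment:

```yaml prometheus.yml
scrape_configs:
  - job_name: 'powersync'
    # Poll the port set via prometheus_port in the PowerSync YAML file
    scrape_interval: 30s
    static_configs:
      - targets: ['localhost:9090']
```

When running multiple PowerSync containers, list each container's metrics port as a separate target.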
diff --git a/self-hosting/lifecycle-maintenance/telemetry.mdx b/self-hosting/lifecycle-maintenance/telemetry.mdx index 422463ab..7078d789 100644 --- a/self-hosting/lifecycle-maintenance/telemetry.mdx +++ b/self-hosting/lifecycle-maintenance/telemetry.mdx @@ -28,14 +28,13 @@ Below are the data points collected every few minutes and associated with a rand See [https://github.com/powersync-ja/powersync-service/blob/main/packages/service-core/src/metrics/Metrics.ts](https://github.com/powersync-ja/powersync-service/blob/main/packages/service-core/src/metrics/Metrics.ts) for additional details. +To scrape your self-hosted PowerSync Service metrics, please see the [Metrics](/self-hosting/lifecycle-maintenance/metrics) docs page for more details. + ### Opting Out To disable the sending of telemetry to PowerSync, set the `disable_telemetry_sharing` key in your [configuration file](/self-hosting/installation/powersync-service-setup#powersync-configuration) (`config.yaml` or `config.json`) to `true`: -```yaml -// config.yaml -... - +```yaml powersync.yaml telemetry: # Opt out of reporting anonymized usage metrics to PowerSync telemetry service disable_telemetry_sharing: true diff --git a/snippets/basic-watch-query-javascript-async.mdx b/snippets/basic-watch-query-javascript-async.mdx new file mode 100644 index 00000000..2c464ede --- /dev/null +++ b/snippets/basic-watch-query-javascript-async.mdx @@ -0,0 +1,10 @@ +```javascript +async function* pendingLists(): AsyncIterable<any[]> { + for await (const result of db.watch( + `SELECT * FROM lists WHERE state = ?`, + ['pending'] + )) { + yield result.rows?._array ??
[]; + } +} +``` \ No newline at end of file diff --git a/snippets/basic-watch-query-javascript-callback.mdx b/snippets/basic-watch-query-javascript-callback.mdx new file mode 100644 index 00000000..0c373dcb --- /dev/null +++ b/snippets/basic-watch-query-javascript-callback.mdx @@ -0,0 +1,13 @@ +```javascript +const pendingLists = (onResult: (lists: any[]) => void): void => { + db.watch( + 'SELECT * FROM lists WHERE state = ?', + ['pending'], + { + onResult: (result: any) => { + onResult(result.rows?._array ?? []); + } + } + ); +} +``` \ No newline at end of file diff --git a/snippets/client-sdks.mdx b/snippets/client-sdks.mdx index 19dac456..49d8d393 100644 --- a/snippets/client-sdks.mdx +++ b/snippets/client-sdks.mdx @@ -9,7 +9,7 @@ - + diff --git a/snippets/dotnet/basic-watch-query.mdx b/snippets/dotnet/basic-watch-query.mdx new file mode 100644 index 00000000..dc52b99c --- /dev/null +++ b/snippets/dotnet/basic-watch-query.mdx @@ -0,0 +1,17 @@ +```csharp +await db.Watch("SELECT * FROM lists WHERE state = ?", new[] { "pending" }, new WatchHandler +{ + OnResult = (results) => + { + Console.WriteLine("Pending Lists: "); + foreach (var result in results) + { + Console.WriteLine($"{result.id}: {result.name}"); + } + }, + OnError = (error) => + { + Console.WriteLine("Error: " + error.Message); + } +}); +``` \ No newline at end of file diff --git a/snippets/flutter/basic-watch-query.mdx b/snippets/flutter/basic-watch-query.mdx new file mode 100644 index 00000000..56a6b64f --- /dev/null +++ b/snippets/flutter/basic-watch-query.mdx @@ -0,0 +1,13 @@ +```dart +StreamBuilder( + stream: db.watch('SELECT * FROM lists WHERE state = ?', ['pending']), + builder: (context, snapshot) { + if (snapshot.hasData) { + // TODO: implement your own UI here based on the result set + return ...; + } else { + return const Center(child: CircularProgressIndicator()); + } + }, +) +``` \ No newline at end of file diff --git a/snippets/kotlin-multiplatform/basic-watch-query.mdx
b/snippets/kotlin-multiplatform/basic-watch-query.mdx new file mode 100644 index 00000000..02a6ed62 --- /dev/null +++ b/snippets/kotlin-multiplatform/basic-watch-query.mdx @@ -0,0 +1,12 @@ +```kotlin +fun watchPendingLists(): Flow> = + db.watch( + "SELECT * FROM lists WHERE state = ?", + listOf("pending"), + ) { cursor -> + ListItem( + id = cursor.getString("id"), + name = cursor.getString("name"), + ) + } +``` \ No newline at end of file diff --git a/snippets/kotlin-multiplatform/installation.mdx b/snippets/kotlin-multiplatform/installation.mdx index 2d12e26f..6796d17f 100644 --- a/snippets/kotlin-multiplatform/installation.mdx +++ b/snippets/kotlin-multiplatform/installation.mdx @@ -5,9 +5,9 @@ kotlin { //... sourceSets { commonMain.dependencies { - api("com.powersync:core:$powersyncVersion") + implementation("com.powersync:core:$powersyncVersion") // If you want to use the Supabase Connector, also add the following: - implementation("com.powersync:connectors:$powersyncVersion") + implementation("com.powersync:connector-supabase:$powersyncVersion") } //... } diff --git a/snippets/react-native/installation.mdx b/snippets/react-native/installation.mdx index 9cf3333d..59d0c6a1 100644 --- a/snippets/react-native/installation.mdx +++ b/snippets/react-native/installation.mdx @@ -1,8 +1,3 @@ - - **PowerSync is not compatible with Expo Go.** - PowerSync uses a native plugin and is therefore only compatible with Expo Dev Builds. - - Add the [PowerSync React Native NPM package](https://www.npmjs.com/package/@powersync/react-native) to your project: @@ -25,9 +20,45 @@ Add the [PowerSync React Native NPM package](https://www.npmjs.com/package/@powe -**Required peer dependencies** - -This SDK requires [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) as a peer dependency. Install it as follows: +### Install peer dependencies + +PowerSync requires a SQLite database adapter. 
+ + +**Using Expo Go?** Our native database adapters listed below (OP-SQLite and React Native Quick SQLite) are not compatible with Expo Go's sandbox environment. To run PowerSync with Expo Go install our JavaScript-based adapter `@powersync/adapter-sql-js` instead. See details [here](/client-sdk-references/react-native-and-expo/expo-go-support). + + +Choose between: + +**OP-SQLite (Recommended)** + +[PowerSync OP-SQLite](https://www.npmjs.com/package/@powersync/op-sqlite) offers: +- Built-in encryption support via SQLCipher +- Smoother transition to React Native's New Architecture + + + + ```bash + npx expo install @powersync/op-sqlite @op-engineering/op-sqlite + ``` + + + + ```bash + yarn expo add @powersync/op-sqlite @op-engineering/op-sqlite + ``` + + + + ``` + pnpm expo install @powersync/op-sqlite @op-engineering/op-sqlite + ``` + + + +**React Native Quick SQLite** + +The [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) package is the original database adapter for React Native and therefore more battle-tested in production environments. @@ -49,8 +80,6 @@ This SDK requires [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com -Alternatively, you can install OP-SQLite with the [PowerSync OP-SQLite package](https://github.com/powersync-ja/powersync-js/tree/main/packages/powersync-op-sqlite) which offers [built-in encryption support via SQLCipher](/usage/use-case-examples/data-encryption) and a smoother transition to React Native's New Architecture. - **Polyfills and additional notes:** diff --git a/snippets/supabase-database-connection.mdx b/snippets/supabase-database-connection.mdx index f84b4bd3..af34acee 100644 --- a/snippets/supabase-database-connection.mdx +++ b/snippets/supabase-database-connection.mdx @@ -21,6 +21,10 @@ 8. Click **Next**. 9. PowerSync will detect the Supabase connection and prompt you to enable Supabase auth. 
To enable it, copy your JWT Secret from your project's settings ([JWT Keys](https://supabase.com/dashboard/project/_/settings/jwt) section in the Supabase dashboard) and paste it here: + +PowerSync is compatible with Supabase's new [JWT signing keys](https://supabase.com/blog/jwt-signing-keys). See this [Discord thread](https://discord.com/channels/1138230179878154300/1194710422960472175/1396878076683485205) for details on how to configure auth on your connection if you are using these keys. + + 10. Click **Enable Supabase auth** to finalize your connection settings. PowerSync will now create an isolated cloud environment for your instance. This typically takes a minute or two. diff --git a/snippets/swift/basic-watch-query.mdx b/snippets/swift/basic-watch-query.mdx new file mode 100644 index 00000000..112fa5ea --- /dev/null +++ b/snippets/swift/basic-watch-query.mdx @@ -0,0 +1,13 @@ +```swift +func watchPendingLists() throws -> AsyncThrowingStream<[ListContent], Error> { + try db.watch( + sql: "SELECT * FROM lists WHERE state = ?", + parameters: ["pending"], + ) { cursor in + try ListContent( + id: cursor.getString(name: "id"), + name: cursor.getString(name: "name"), + ) + } +} +``` diff --git a/usage/lifecycle-maintenance/implementing-schema-changes.mdx b/usage/lifecycle-maintenance/implementing-schema-changes.mdx index 77541872..55c7a39a 100644 --- a/usage/lifecycle-maintenance/implementing-schema-changes.mdx +++ b/usage/lifecycle-maintenance/implementing-schema-changes.mdx @@ -149,14 +149,78 @@ Renaming an unsynced collection to a name that is included in the Sync Rules tri Circular renames (e.g., renaming `todos` → `todos_old` → `todos`) are not directly supported. To reprocess the database after such changes, a Sync Rules update must be deployed. 
-## MySQL (Alpha) Specifics +## MySQL (Beta) Specifics +PowerSync keeps the [sync buckets](/usage/sync-rules/organize-data-into-buckets) up to date with any incremental data changes as recorded in the MySQL [binary log](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html). +The binary log also provides DDL (Data Definition Language) query updates, which include: - - This section is a work in progress. More details for MySQL connections are coming soon. In the meantime, ask on our [Discord server](https://discord.gg/powersync) if you have any questions. - +1. Creating, dropping or renaming tables. + +2. Truncating tables. (Not technically a schema change, but they appear in the query updates regardless.) + +3. Changing replica identity of a table. (Creation, deletion or modification of primary keys, unique indexes, etc.) + +4. Adding, dropping, renaming or changing the types of columns. + +For MySQL, PowerSync detects schema changes by parsing the DDL queries in the binary log. It may not always be possible to parse the DDL queries correctly, especially if they are complex or use non-standard syntax. +In such cases, PowerSync will ignore the schema change, but will log a warning with the schema change query. If required, the schema change would then need to be manually +handled by redeploying the sync rules. This triggers a re-replication. + +### MySQL schema changes affecting Sync Rules + +#### DROP table + +PowerSync will detect when a table is dropped, and automatically remove the data from the sync buckets. + +#### CREATE table + +Table creation is detected and handled the first time row events for the new table appear on the binary log. + +#### TRUNCATE table + +PowerSync will detect truncate statements in the binary log, and consequently remove all data from the sync buckets for that table. + +#### RENAME table + +A renamed table is handled similarly to dropping the old table, and then creating a new table with existing data under the new name. 
+This may be a slow operation if the table is large, since the "new" table has to be re-replicated. Replication will be blocked until the new table is replicated. + +#### Change REPLICA IDENTITY + +The replica identity of a table is considered to be changed if either: + +1. The type of replica identity changes (`DEFAULT`, `INDEX`, `FULL`, `NOTHING`). + +2. The name or type of columns which form part of the replica identity changes. + +The latter can happen if: + +1. Using `REPLICA IDENTITY FULL`, and any column is added, removed, renamed, or the type changed. + +2. Using `REPLICA IDENTITY DEFAULT`, and the type of any column in the primary key is changed. + +3. Using `REPLICA IDENTITY INDEX`, and the type of any column in the replica index is changed. + +4. The primary key or replica index is removed or changed. + +When the replica identity changes, the entire table is replicated again. This may be a slow operation if the table is large, and all other replication will be blocked until the table is replicated again. + +Sync rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes. + +#### Column changes + +Column changes such as adding, dropping, renaming columns, or changing column types, are detected by PowerSync but will generally not result in re-replication (unless the replica identity was affected, as described above). + +Adding a column with a `NULL` default value will generally not cause issues. Existing records will have a missing value instead of a `NULL` value, but those are generally treated the same on the client. + +Adding a column with a different default value, whether it's a static or computed value, will not have this default automatically replicated for existing rows. To propagate this value, make an update to every existing row. + +Removing a column will not have the values automatically removed for existing rows on PowerSync.
To propagate the change, make an update to every existing row. + +Changing a column type, and/or changing the default value of a column using an `ALTER TABLE` statement, will not be automatically replicated to PowerSync. +In some cases, the change will have no effect on PowerSync (for example, changing between `VARCHAR` and `TEXT` types). When the values are expected to change, make an update to every existing row to propagate the changes. ## See Also -* [Custom Types, Arrays and JSON](/usage/use-case-examples/custom-types-arrays-and-json) +* [JSON, Arrays and Custom Types](/usage/use-case-examples/custom-types-arrays-and-json) * [Deploying Schema Changes](/usage/lifecycle-maintenance/deploying-schema-changes) \ No newline at end of file diff --git a/usage/lifecycle-maintenance/upgrading-the-client-sdk.mdx b/usage/lifecycle-maintenance/upgrading-the-client-sdk.mdx index 41e01a81..c73e7b41 100644 --- a/usage/lifecycle-maintenance/upgrading-the-client-sdk.mdx +++ b/usage/lifecycle-maintenance/upgrading-the-client-sdk.mdx @@ -57,7 +57,7 @@ pnpm upgrade @powersync/web @journeyapps/wa-sqlite -## Node.js (alpha) +## Node.js (beta) Run the below command in your project folder: diff --git a/usage/sync-rules.mdx b/usage/sync-rules.mdx index 0704346e..9f32d5ba 100644 --- a/usage/sync-rules.mdx +++ b/usage/sync-rules.mdx @@ -19,8 +19,8 @@ This SQL-like syntax is used when connecting to either Postgres, MongoDB or MySQ The [PowerSync Service](/architecture/powersync-service) uses these SQL-like queries to group data into "sync buckets" when replicating data to client devices. - - + + Functionality includes: @@ -37,8 +37,8 @@ PowerSync replicates and transforms relevant data from the backend source databa Data from this step is persisted in separate sync buckets on the PowerSync Service. Data is incrementally updated so that sync buckets always contain current state data as well as a full history of changes. 
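As a minimal sketch of this grouping (the bucket name, tables, and `owner_id` column are assumptions for the example, not part of any specific deployment), a Sync Rules definition pairs a parameter query with one or more data queries:

```yaml
bucket_definitions:
  user_data:
    # One bucket per authenticated user; request.user_id() is resolved from the JWT
    parameters: SELECT request.user_id() AS user_id
    data:
      - SELECT * FROM lists WHERE owner_id = bucket.user_id
      - SELECT * FROM todos WHERE owner_id = bucket.user_id
```

Each row matched by the data queries is replicated into that user's `user_data` bucket, and incremental changes keep the bucket current.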
- - + + ## Client Database Hydration @@ -46,7 +46,7 @@ Data from this step is persisted in separate sync buckets on the PowerSync Servi PowerSync asynchronously hydrates local SQLite databases embedded in the PowerSync Client SDK based on data in sync buckets. - + diff --git a/usage/sync-rules/client-id.mdx b/usage/sync-rules/client-id.mdx index 2c5d2c9d..8c324d2e 100644 --- a/usage/sync-rules/client-id.mdx +++ b/usage/sync-rules/client-id.mdx @@ -3,7 +3,10 @@ title: "Client ID" description: "On the client, PowerSync only supports a single primary key column called `id`, of type `text`." --- -For tables where the client will create new rows, we recommend using a UUID for the ID. We provide a helper function `uuid()` to generate a random UUID (v4) on the client. +For tables where the client will create new rows: + +- Postgres and MySQL: use a UUID for `id`. Use the `uuid()` helper to generate a random UUID (v4) on the client. +- MongoDB: use an `ObjectId` for `_id`. Generate an `ObjectId()` in your app code and store it in the client's `id` column as a string; this will map to MongoDB's `_id`. To use a different column/field from the server-side database as the record ID on the client, use a column/field alias in your Sync Rules: @@ -12,7 +15,7 @@ SELECT client_id as id FROM my_data ``` - MongoDB uses `_id` as the name of the ID field in collections. Therefore, PowerSync requires using `SELECT _id as id` in the data queries when [using MongoDB](/installation/database-setup) as the backend source database. + MongoDB uses `_id` as the name of the ID field in collections. Therefore, PowerSync requires using `SELECT _id as id` in [Sync Rule's](/usage/sync-rules) data queries when using MongoDB as the backend source database. When inserting new documents from the client, prefer `ObjectId` values for `_id` (stored in the client's `id` column). 
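Where a BSON library is not available on the client, an ObjectId-shaped identifier can be sketched by hand. This helper is illustrative only: it uses a 4-byte timestamp plus 8 random bytes, whereas real `ObjectId`s use a 4-byte timestamp, a 5-byte random value and a 3-byte counter; prefer a BSON library's `ObjectId` in production.

```javascript
// Illustrative only: builds an ObjectId-shaped 24-character lowercase hex string
// (4-byte timestamp + 8 random bytes). Not spec-exact; use a BSON library in production.
function objectIdHex(now = Date.now()) {
  // Seconds since epoch, as 8 hex characters
  const ts = Math.floor(now / 1000).toString(16).padStart(8, '0');
  // 16 random hex characters for the remainder
  let rand = '';
  for (let i = 0; i < 16; i++) {
    rand += Math.floor(Math.random() * 16).toString(16);
  }
  return ts + rand;
}
```

The resulting string can be stored in the client's `id` column, where `SELECT _id as id` maps it to MongoDB's `_id`.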
Custom transformations could also be used for the ID, for example: @@ -22,6 +25,10 @@ Custom transformations could also be used for the ID, for example: SELECT org_id || '.' || record_id as id FROM my_data ``` + + If you want to upload data to a table with a custom record ID, ensure that `uploadData()` isn't blindly using a field named `id` when handling CRUD operations. See the [Sequential ID mapping tutorial](/tutorials/client/data/sequential-id-mapping#update-client-to-use-uuids) for an example where the record ID is aliased to `uuid` on the backend. + + PowerSync does not perform any validation that IDs are unique. Duplicate IDs on a client could occur in any of these scenarios: 1. A non-unique column is used for the ID. diff --git a/usage/sync-rules/types.mdx b/usage/sync-rules/types.mdx index cf9e2813..b498b0d6 100644 --- a/usage/sync-rules/types.mdx +++ b/usage/sync-rules/types.mdx @@ -17,24 +17,24 @@ Binary data in Postgres can be accessed in Sync Rules, but cannot be synced dire Postgres values are mapped according to this table: -| Postgres Data Type | PowerSync / SQLite Column Type | Notes | -| ------------------ | ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| text, varchar | text | | -| int2, int4, int8 | integer | | -| numeric / decimal | text | These types have arbitrary precision in Postgres, so can only be represented accurately as text in SQLite | -| bool | integer | 1 for true, 0 for false | -| float4, float8 | real | | -| enum | text | | -| uuid | text | | +| Postgres Data Type | PowerSync / SQLite Column Type | Notes | 
+|--------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| text, varchar | text | | +| int2, int4, int8 | integer | | +| numeric / decimal | text | These types have arbitrary precision in Postgres, so can only be represented accurately as text in SQLite | +| bool | integer | 1 for true, 0 for false | +| float4, float8 | real | | +| enum | text | | +| uuid | text | | | timestamptz | text | Format: `YYYY-MM-DD hh:mm:ss.sssZ`. This is compatible with ISO8601 and SQLite's functions. Precision matches the precision used in Postgres. `-infinity` becomes `0000-01-01 00:00:00Z` and `infinity` becomes `9999-12-31 23:59:59Z`. | | timestamp | text | Format: `YYYY-MM-DD hh:mm:ss.sss`. In most cases, timestamptz should be used instead. `-infinity` becomes `0000-01-01 00:00:00` and `infinity` becomes `9999-12-31 23:59:59`. | -| date, time | text | | -| json, jsonb | text | There is no dedicated JSON type — JSON functions operate directly on text values. | -| interval | text | | -| macaddr | text | | -| inet | text | | -| bytea | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/usage/sync-rules/operators-and-functions). | -| geometry (PostGIS) | text | hex string of the binary data Use the [ST functions](/usage/sync-rules/operators-and-functions#functions) to convert to other formats | +| date, time | text | | +| json, jsonb | text | There is no dedicated JSON type — JSON functions operate directly on text values. | +| interval | text | | +| macaddr | text | | +| inet | text | | +| bytea | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/usage/sync-rules/operators-and-functions). 
| +| geometry (PostGIS) | text | hex string of the binary data. Use the [ST functions](/usage/sync-rules/operators-and-functions#functions) to convert to other formats | There is no dedicated boolean data type. Boolean values are represented as `1` (true) or `0` (false). @@ -53,16 +53,16 @@ There is no dedicated boolean data type. Boolean values are represented as `1` ( | ObjectId | text | Lower-case hex string | | UUID | text | Lower-case hex string | | Boolean | integer | 1 for true, 0 for false | -| Date | text | Format: `YYYY-MM-DD hh:mm:ss.sss` | +| Date | text | Format: `YYYY-MM-DD hh:mm:ss.sssZ` | | Null | null | | | Binary | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/usage/sync-rules/operators-and-functions). | -| Regular Expression | text | JSON text in the format `{"pattern":"...","options":"..."}` | -| Timestamp | integer | Converted to a 64-bit integer | -| Undefined | null | | -| DBPointer | text | JSON text in the format `{"collection":"...","oid":"...","db":"...","fields":...}` | -| JavaScript | text | JSON text in the format `{"code": "...", "scope": ...}` | -| Symbol | text | | -| MinKey, MaxKey | null | | +| Regular Expression | text | JSON text in the format `{"pattern":"...","options":"..."}` | +| Timestamp | integer | Converted to a 64-bit integer | +| Undefined | null | | +| DBPointer | text | JSON text in the format `{"collection":"...","oid":"...","db":"...","fields":...}` | +| JavaScript | text | JSON text in the format `{"code": "...", "scope": ...}` | +| Symbol | text | | +| MinKey, MaxKey | null | | * Data is converted to a flat list of columns, one column per top-level field in the MongoDB document. * Special BSON types are converted to plain SQLite alternatives. @@ -70,8 +70,34 @@ There is no dedicated boolean data type. 
Boolean values are represented as `1` ( * Nested objects and arrays are converted to JSON arrays, and JSON operators can be used to query them (in the Sync Rules and/or on the client-side). * Binary data nested in objects or arrays is not supported. -## MySQL (Alpha) Type Mapping +## MySQL (Beta) Type Mapping + +MySQL values are mapped according to this table: + +| MySQL Data Type | PowerSync / SQLite Column Type | Notes | +|----------------------------------------------------|--------------------------------|-----------------------------------------------------------------------------------| +| tinyint, smallint, mediumint, bigint, integer, int | integer | | +| numeric, decimal | text | | +| bool, boolean | integer | 1 for true, 0 for false | +| float, double, real | real | | +| bit | integer | | +| enum | text | | +| set | text | Converted to JSON array | +| char, varchar | text | | +| tinytext, text, mediumtext, longtext | text | | +| timestamp | text | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` | +| date | text | Format: `YYYY-MM-DD` | +| time, datetime | text | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` | +| year | text | | +| json | text | There is no dedicated JSON type — JSON functions operate directly on text values. | +| binary, varbinary | blob | See note below regarding binary types | +| blob, tinyblob, mediumblob, longblob | blob | | +| geometry, geometrycollection | blob | | +| point, multipoint | blob | | +| linestring, multilinestring | blob | | +| polygon, multipolygon | blob | | - This section is a work in progress. More details for MySQL connections are coming soon. In the meantime, ask on our [Discord server](https://discord.gg/powersync) if you have any questions. - \ No newline at end of file + Binary data can be accessed in the Sync Rules, but cannot be used as bucket parameters. Before it can be synced directly to clients, it needs to be converted to hex or base64 first. 
+ See [Operators & Functions](/usage/sync-rules/operators-and-functions) + diff --git a/usage/tools/powersync-dashboard.mdx b/usage/tools/powersync-dashboard.mdx index d1798a0b..7d3821a0 100644 --- a/usage/tools/powersync-dashboard.mdx +++ b/usage/tools/powersync-dashboard.mdx @@ -81,6 +81,7 @@ The various actions available in your project are accessible via the Command Pal - **Compare deployed sync rules** -\> Compare the [sync rules](/usage/sync-rules) as defined in your `sync-rules.yaml` file with those deployed to an instance. - **Save changes** -\> Save changes to files as a revision when in **Basic Revisions** version control mode (see _Version Control_ below) - Or **Commit changes** -\> Commit changes to files when in **Advanced Git** version control mode. +- **Compact buckets** -\> Manually [compact](/usage/lifecycle-maintenance/compacting-buckets) and optionally [defragment](/usage/lifecycle-maintenance/compacting-buckets#defragmenting) sync buckets of an instance. - **Create Personal Access Token** -\> Create an access token scoped to your user, which is needed for the [CLI](/usage/tools/cli). - **Rename project** -\> Rename your PowerSync project. diff --git a/usage/use-case-examples.mdx b/usage/use-case-examples.mdx index dca3adbc..e7442949 100644 --- a/usage/use-case-examples.mdx +++ b/usage/use-case-examples.mdx @@ -9,11 +9,13 @@ The following examples are available to help you get started with specific use c - + + + diff --git a/usage/use-case-examples/custom-types-arrays-and-json.mdx b/usage/use-case-examples/custom-types-arrays-and-json.mdx index fc722097..e2a257c6 100644 --- a/usage/use-case-examples/custom-types-arrays-and-json.mdx +++ b/usage/use-case-examples/custom-types-arrays-and-json.mdx @@ -1,63 +1,70 @@ --- -title: "Custom Types, Arrays and JSON" -description: PowerSync is compatible with more advanced types such as arrays and JSON. 
+title: "JSON, Arrays and Custom Types" +description: PowerSync supports JSON/JSONB and arrays, and can sync other custom types by serializing them to text. --- -PowerSync is compatible with advanced Postgres types, including arrays and JSON/JSONB. These types are represented as text columns in the client-side schema. When updating client data, you have the option to replace the entire column value with a string or enable advanced schema features to track more granular changes and include custom metadata. +PowerSync supports JSON/JSONB and array columns. They are synced as JSON text and can be queried with SQLite JSON functions on the client. Other custom Postgres types can be synced by serializing their values to text in the client-side schema. When updating client data, you have the option to replace the entire column value with a string or enable [advanced schema options](#advanced-schema-options-to-process-writes) to track more granular changes and include custom metadata. -## Advanced Schema Options to Process Writes - -With arrays and JSON fields, it's common for only part of the value to change during an update. To make handling these writes easier, you can enable advanced schema options that let you track exactly what changed in each row—not just the new state. - -- `trackPreviousValues` (or `trackPrevious` in our JS SDKs): Access previous values for diffing custom types, arrays, or JSON fields. Accessible later via `CrudEntry.previousValues`. -- `trackMetadata`: Adds a `_metadata` column for storing custom metadata. Value of the column is accessible later via `CrudEntry.metadata`. -- `ignoreEmptyUpdates`: Skips updates when no data has actually changed. - - - These advanced schema options are available in the following SDK versions: Flutter v1.13.0, React Native v1.20.1, JavaScript/Web v1.20.1, Kotlin Multiplatform v1.1.0, Swift v1.1.0, and Node.js v0.4.0. 
- +## JSON and JSONB -## Custom Types +The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in Sync Rules. -PowerSync serializes custom types as text. For details, see [types in sync rules](/usage/sync-rules/types). +**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync. ### Postgres -Postgres allows developers to create custom data types for columns. For example: +JSON columns are represented as: ```sql -create type location_address AS ( - street text, - city text, - state text, - zip numeric -); +ALTER TABLE todos +ADD COLUMN custom_payload json; ``` ### Sync Rules -Custom type columns are converted to text by the PowerSync Service. A column of type `location_address`, as defined above, would be synced to clients as the following string: - -`("1000 S Colorado Blvd.",Denver,CO,80211)` +PowerSync treats JSON columns as text and provides transformation functions in Sync Rules such as `json_extract()`. -It is not currently possible to extract fields from custom types in Sync Rules, so the entire column must be synced as text. +```yaml +bucket_definitions: + my_json_todos: + # Separate bucket per To-Do list + parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id() + data: + - SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') = bucket.list_id +``` ### Client SDK **Schema** -Add your custom type column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options). +Add your JSON column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options). + + ```dart + Table( + name: 'todos', + columns: [ + Column.text('custom_payload'), + // ... other columns ... 
+ ], + // Optionally, enable advanced update tracking options (see details at the end of this page): + trackPreviousValues: true, + trackMetadata: true, + ignoreEmptyUpdates: true, + ) + ``` + + ```javascript const todos = new Table( { - location: column.text, + custom_payload: column.text, // ... other columns ... }, { - // Optionally, enable advanced update tracking options: + // Optionally, enable advanced update tracking options (see details at the end of this page): trackPrevious: true, trackMetadata: true, ignoreEmptyUpdates: true, @@ -65,22 +72,6 @@ Add your custom type column as a `text` column in your client-side schema defini ); ``` - - ```dart - Table( - name: 'todos', - columns: [ - Column.text('location'), - // ... other columns ... - ], - - // Optionally, enable advanced update tracking options: - trackPreviousValues: true, - trackMetadata: true, - ignoreEmptyUpdates: true, - ) - ``` - **Writing Changes** @@ -88,39 +79,44 @@ Add your custom type column as a `text` column in your client-side schema defini You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about: - - ```javascript + + ```dart // Full replacement (basic): - await db.execute( - 'UPDATE todos set location = ?, _metadata = ? WHERE id = ?', - ['("1234 Update Street",Denver,CO,80212)', 'op-metadata-example', 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b'] - ); + await db.execute('UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?', [ + '{"foo": "bar", "baz": 123}', + 'op-metadata-example', // Example metadata value + '00000000-0000-0000-0000-000000000000' + ]); - // Diffing custom types in uploadData (advanced): - if (op.op === UpdateType.PUT && op.previousValues) { - const oldCustomType = op.previousValues['location'] ?? 'null'; - const newCustomType = op.opData['location'] ?? 
'null'; - const metadata = op.metadata; // Access metadata here - // Compare oldCustomType and newCustomType to determine what changed + // Diffing columns in uploadData (advanced): + // See details about these advanced schema options at the end of this page + import 'dart:convert'; + + if (op.op == UpdateType.put && op.previousValues != null) { + var oldJson = jsonDecode(op.previousValues['custom_payload'] ?? '{}'); + var newJson = jsonDecode(op.opData['custom_payload'] ?? '{}'); + var metadata = op.metadata; // Access metadata here + // Compare oldJson and newJson to determine what changed // Use metadata as needed as you process the upload } ``` - - ```dart + + + ```javascript // Full replacement (basic): - await db.execute('UPDATE todos set location = ?, _metadata = ? WHERE id = ?', [ - '("1234 Update Street",Denver,CO,80212)', - 'op-metadata-example', // Example metadata value - 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b' - ]); + await db.execute( + 'UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?', + ['{"foo": "bar", "baz": 123}', 'op-metadata-example', '00000000-0000-0000-0000-000000000000'] + ); - // Diffing custom types in uploadData (advanced): - if (op.op == UpdateType.put && op.previousValues != null) { - final oldCustomType = op.previousValues['location'] ?? 'null'; - final newCustomType = op.opData['location'] ?? 'null'; - final metadata = op.metadata; // Access metadata here - // Compare oldCustomType and newCustomType to determine what changed + // Diffing columns in uploadData (advanced): + // See details about these advanced schema options at the end of this page + if (op.op === UpdateType.PUT && op.previousValues) { + const oldJson = JSON.parse(op.previousValues['custom_payload'] ?? '{}'); + const newJson = JSON.parse(op.opData['custom_payload'] ?? 
'{}'); + const metadata = op.metadata; // Access metadata here + // Compare oldJson and newJson to determine what changed // Use metadata as needed as you process the upload } ``` @@ -183,7 +179,7 @@ Add your array column as a `text` column in your client-side schema definition. // ... other columns ... }, { - // Optionally, enable advanced update tracking options: + // Optionally, enable advanced update tracking options (see details at the end of this page): trackPrevious: true, trackMetadata: true, ignoreEmptyUpdates: true, @@ -200,7 +196,7 @@ Add your array column as a `text` column in your client-side schema definition. // ... other columns ... ], - // Optionally, enable advanced update tracking options: + // Optionally, enable advanced update tracking options (see details at the end of this page): trackPreviousValues: true, trackMetadata: true, ignoreEmptyUpdates: true, @@ -222,7 +218,8 @@ You can write the entire updated column value as a string, or, with `trackPrevio ['["DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF", "ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF"]', 'op-metadata-example', '00000000-0000-0000-0000-000000000000'] ); - // Diffing custom types in uploadData (advanced): + // Diffing columns in uploadData (advanced): + // See details about these advanced schema options at the end of this page if (op.op === UpdateType.PUT && op.previousValues) { const oldArray = JSON.parse(op.previousValues['unique_identifiers'] ?? '[]'); const newArray = JSON.parse(op.opData['unique_identifiers'] ?? 
'[]'); @@ -241,8 +238,9 @@ You can write the entire updated column value as a string, or, with `trackPrevio '00000000-0000-0000-0000-000000000000' ]); - // Diffing custom types in uploadData (advanced): - if (op.op == UpdateType.put && op.previousValues != null) { + // Diffing columns in uploadData (advanced): + // See details about these advanced schema options at the end of this page + if (op.op == UpdateType.put && op.previousValues != null) { final oldArray = jsonDecode(op.previousValues['unique_identifiers'] ?? '[]'); final newArray = jsonDecode(op.opData['unique_identifiers'] ?? '[]'); final metadata = op.metadata; // Access metadata here @@ -257,66 +255,47 @@ You can write the entire updated column value as a string, or, with `trackPrevio **Attention Supabase users:** Supabase can handle writes with arrays, but you must convert from string to array using `jsonDecode` in the connector's `uploadData` function. The default implementation of `uploadData` does not handle complex types like arrays automatically. -## JSON and JSONB - -The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in Sync Rules. +## Custom Types -**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync. +PowerSync serializes custom types as text. For details, see [types in sync rules](/usage/sync-rules/types). ### Postgres -JSON columns are represented as: +Postgres allows developers to create custom data types for columns. For example: ```sql -ALTER TABLE todos -ADD COLUMN custom_payload json; +create type location_address AS ( + street text, + city text, + state text, + zip numeric +); ``` ### Sync Rules -PowerSync treats JSON columns as text and provides transformation functions in Sync Rules such as `json_extract()`. +Custom type columns are converted to text by the PowerSync Service.
A column of type `location_address`, as defined above, would be synced to clients as the following string: -```yaml -bucket_definitions: - my_json_todos: - # Separate bucket per To-Do list - parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id() - data: - - SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') = bucket.list_id -``` +`("1000 S Colorado Blvd.",Denver,CO,80211)` + +It is not currently possible to extract fields from custom types in Sync Rules, so the entire column must be synced as text. ### Client SDK **Schema** -Add your JSON column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options). +Add your custom type column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options). - - ```dart - Table( - name: 'todos', - columns: [ - Column.text('custom_payload'), - // ... other columns ... - ], - // Optionally, enable advanced update tracking options: - trackPreviousValues: true, - trackMetadata: true, - ignoreEmptyUpdates: true, - ) - ``` - - ```javascript const todos = new Table( { - custom_payload: column.text, + location: column.text, // ... other columns ... }, { - // Optionally, enable advanced update tracking options: + // Optionally, enable advanced update tracking options (see details at the end of this page): trackPrevious: true, trackMetadata: true, ignoreEmptyUpdates: true, @@ -324,6 +303,22 @@ Add your JSON column as a `text` column in your client-side schema definition. F ); ``` + + ```dart + Table( + name: 'todos', + columns: [ + Column.text('location'), + // ... other columns ... 
+ ], + + // Optionally, enable advanced update tracking options (see details at the end of this page): + trackPreviousValues: true, + trackMetadata: true, + ignoreEmptyUpdates: true, + ) + ``` + **Writing Changes** @@ -331,42 +326,41 @@ Add your JSON column as a `text` column in your client-side schema definition. F You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about: - - ```dart - // Full replacement (basic): - await db.execute('UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?', [ - '{"foo": "bar", "baz": 123}', - 'op-metadata-example', // Example metadata value - '00000000-0000-0000-0000-000000000000' - ]); - - // Diffing custom types in uploadData (advanced): - import 'dart:convert'; - - if (op.op == UpdateType.put && op.previousValues != null) { - var oldJson = jsonDecode(op.previousValues['custom_payload'] ?? '{}'); - var newJson = jsonDecode(op.opData['custom_payload'] ?? '{}'); - var metadata = op.metadata; // Access metadata here - // Compare oldJson and newJson to determine what changed - // Use metadata as needed as you process the upload - } - ``` - - ```javascript // Full replacement (basic): await db.execute( - 'UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?', - ['{"foo": "bar", "baz": 123}', 'op-metadata-example', '00000000-0000-0000-0000-000000000000'] + 'UPDATE todos set location = ?, _metadata = ? WHERE id = ?', + ['("1234 Update Street",Denver,CO,80212)', 'op-metadata-example', 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b'] ); - // Diffing custom types in uploadData (advanced): + // Diffing columns in uploadData (advanced): + // See details about these advanced schema options at the end of this page if (op.op === UpdateType.PUT && op.previousValues) { - const oldJson = JSON.parse(op.previousValues['custom_payload'] ?? '{}'); - const newJson = JSON.parse(op.opData['custom_payload'] ?? 
'{}'); + const oldCustomType = op.previousValues['location'] ?? 'null'; + const newCustomType = op.opData['location'] ?? 'null'; const metadata = op.metadata; // Access metadata here - // Compare oldJson and newJson to determine what changed + // Compare oldCustomType and newCustomType to determine what changed + // Use metadata as needed as you process the upload + } + ``` + + + ```dart + // Full replacement (basic): + await db.execute('UPDATE todos set location = ?, _metadata = ? WHERE id = ?', [ + '("1234 Update Street",Denver,CO,80212)', + 'op-metadata-example', // Example metadata value + 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b' + ]); + + // Diffing columns in uploadData (advanced): + // See details about these advanced schema options at the end of this page + if (op.op == UpdateType.put && op.previousValues != null) { + final oldCustomType = op.previousValues['location'] ?? 'null'; + final newCustomType = op.opData['location'] ?? 'null'; + final metadata = op.metadata; // Access metadata here + // Compare oldCustomType and newCustomType to determine what changed // Use metadata as needed as you process the upload } ``` @@ -389,3 +383,20 @@ ALTER TABLE todos ADD COLUMN custom_locations extended_location[]; ``` +## Advanced Schema Options to Process Writes + +With arrays and JSON fields, it's common for only part of the value to change during an update. To make handling these writes easier, you can enable advanced schema options that let you track exactly what changed in each row—not just the new state. + +- `trackPreviousValues` (or `trackPrevious` in our JS SDKs): Access previous values for diffing JSON or array fields. Accessible later via `CrudEntry.previousValues`. +- `trackMetadata`: Adds a `_metadata` column for storing custom metadata. Value of the column is accessible later via `CrudEntry.metadata`. +- `ignoreEmptyUpdates`: Skips updates when no data has actually changed. 
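The diffing pattern shown throughout this page amounts to comparing the previous and new JSON text values of a column. As a small illustration (this helper is not part of the PowerSync SDK), the changed top-level keys can be computed like this:

```javascript
// Illustrative helper (not part of the PowerSync SDK): given the previous and
// new values of a JSON text column, return the top-level keys whose values changed.
function changedKeys(previousJson, nextJson) {
  const prev = JSON.parse(previousJson ?? '{}');
  const next = JSON.parse(nextJson ?? '{}');
  // Union of keys from both versions, then keep only those whose values differ
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  return [...keys].filter(
    (k) => JSON.stringify(prev[k]) !== JSON.stringify(next[k])
  );
}
```

In a connector's `uploadData`, such a comparison between `op.previousValues` and `op.opData` lets you upload only the fields that actually changed.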
+ + These advanced schema options are available in the following SDK versions: + - Flutter v1.13.0 + - React Native v1.20.1 + - JavaScript/Web v1.20.1 + - Kotlin Multiplatform v1.1.0 + - Swift v1.1.0 + - Node.js v0.4.0 + \ No newline at end of file diff --git a/usage/use-case-examples/high-performance-diffs.mdx b/usage/use-case-examples/high-performance-diffs.mdx new file mode 100644 index 00000000..ff54ed57 --- /dev/null +++ b/usage/use-case-examples/high-performance-diffs.mdx @@ -0,0 +1,165 @@ +--- +title: 'Experimental: High Performance Diffs' +sidebarTitle: 'High Performance Diffs' +description: 'Efficiently get row changes using trigger-based table diffs (JS)' +--- + +# Overview + +While [basic/incremental watch queries](/usage/use-case-examples/watch-queries) enable reactive UIs by automatically re‑running queries when underlying data changes and returning updated results, they don't specify which individual rows were modified. To get these details, you can use [**differential watch queries**](/usage/use-case-examples/watch-queries#differential-watch-queries), which return a structured diff between successive query results. However, on large result sets they can be slow because they re‑run the query and compare full results (e.g., scanning ~1,000 rows to detect 1 new item). That’s why we introduced **trigger‑based table diffs**: a more performant approach that uses SQLite triggers to record changes on a table as they happen. This means that the overhead associated with tracking these changes is proportional to the number of rows inserted, updated, or deleted. + + + **JavaScript Only**: Trigger-based table diffs are available in the JavaScript SDKs starting from: + * Web v1.26.0 + * React Native v1.24.0 + * Node.js v0.10.0 + + + +The `db.triggers` APIs are experimental. 
We're actively seeking feedback on: + +- API design and developer experience +- Additional features or optimizations needed + +Join our [Discord community](https://discord.gg/powersync) to share your experience and get help. + + +## Key differences vs. differential watch queries + +- **Scope**: Trigger-based diffs track row-level changes on a single table. Differential watches work with arbitrary query results (including joins). +- **Overhead**: Trigger-based diffs do per-row work at write time (overhead grows with the number of affected rows). Differential watches re-query and compare result sets on each change (overhead grows with result set size). +- **Processing path**: Trigger-based diffs record changes at write time and require a `writeLock` during processing (only a single `writeLock` is allowed). Differential watches run on read connections and re-query/compare results on each change (often concurrent on some platforms). +- **Storage/shape**: Trigger-based diffs store changes as rows in a temporary SQLite table that you can query with SQL. Differential watch diffs are exposed to app code as JS objects/arrays. +- **Filtering**: Trigger-based diffs can filter/skip storing diff records inside the SQLite trigger, which prevents emissions at a lower level. Differential watches query the SQLite DB on any change to the query's dependent tables, and the changes are filtered after querying SQLite. + +In summary, **differential watch queries** are the most flexible (they work with arbitrary, multi‑table queries), but they can be slow on large result sets. For those cases, **trigger-based diffs** are more efficient, but they only track a single table and add some write overhead. For usage and examples of differential watch queries, see [Differential Watch Queries](/usage/use-case-examples/watch-queries#differential-watch-queries). 
+ + +## Trigger-based diffs + +Trigger-based diffs create temporary SQLite triggers and a temporary table to record row‑level inserts, updates, and deletes as they happen. You can then query the diff table with SQL to process the changes. + + + **SQLite triggers and PowerSync views** + + In PowerSync, the tables you define in the client schema are exposed as SQLite views. The actual data is stored in underlying SQLite tables, with each row's values encoded as JSON (commonly in a single `data` column). + + SQLite only supports `INSTEAD OF` triggers on views; the triggers used to record changes must target the underlying base tables. The `db.triggers` API handles these details for you: + + - You can reference the view name in `source`; PowerSync resolves and targets the corresponding underlying table internally. + - Column filters are applied by inspecting JSON changes in the underlying row and determining whether the configured columns changed. + - Diff rows can be queried as if they were real columns (not raw JSON) using the `withExtractedDiff(...)` helper. + + You can also create your own triggers manually (for example, as shown in the [Full‑Text Search example](/usage/use-case-examples/full-text-search)), but be mindful of the view/trigger limitation and target the underlying table rather than the view. + + +## Tracking and reacting to changes (recommended) + +The primary API is `trackTableDiff`. It wraps the lower-level trigger setup, automatically manages a `writeLock` during processing, exposes a `DIFF` table alias to join against, and cleans up when you call the returned `stop()` function. Think of it as an automatic "watch" that processes diffs as they occur. + +```javascript +const stop = await db.triggers.trackTableDiff({ + // PowerSync source table/view to trigger and track changes from. + // This should be present in the PowerSync database's schema. + source: 'todos', + // Specifies which columns from the source table to track in the diff records.
+ // Defaults to all columns in the source table. + // Use an empty array to track only the ID and operation. + columns: ['list_id'], + // Required WHEN clause per operation to filter inside the trigger. Use 'TRUE' to track all. + when: { INSERT: sanitizeSQL`json_extract(NEW.data, '$.list_id') = ${firstList.id}` }, + onChange: async (context) => { + // Fetches the todo records that were inserted during this diff + const newTodos = await context.getAll(/* sql */ ` + SELECT todos.* + FROM DIFF + JOIN todos ON DIFF.id = todos.id + `); + + // Handle new todos here + } +}); + +// Later, dispose triggers and internal resources +await stop(); +``` + +### Filtering with `when` + +The required `when` parameter lets you add conditions that determine when the triggers should fire. This corresponds to a SQLite [WHEN](https://sqlite.org/lang_createtrigger.html) clause in the trigger body. + +- Use `NEW` for `INSERT`/`UPDATE` and `OLD` for `DELETE`. +- Row data is stored as JSON in the `data` column; the row identifier is `id`. +- Use `json_extract(NEW.data, '$.column')` or `json_extract(OLD.data, '$.column')` to reference logical columns. +- Set the clause to `'TRUE'` to track all changes for a given operation. + +Example: + +```javascript +const stop = await db.triggers.trackTableDiff({ + source: 'todos', + when: { + // Track all INSERTs + INSERT: 'TRUE', + // Only UPDATEs where status becomes 'active' for a specific record + UPDATE: sanitizeSQL`NEW.id = ${sanitizeUUID('abcd')} AND json_extract(NEW.data, '$.status') = 'active'`, + // Only DELETEs for a specific list + DELETE: sanitizeSQL`json_extract(OLD.data, '$.list_id') = 'abcd'` + } +}); +``` + + + The strings in `when` are embedded directly into the SQLite trigger creation SQL. Sanitize any user‑derived values. The `sanitizeSQL` helper performs some basic sanitization; additional sanitization is recommended.
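To illustrate why this sanitization matters, here is a minimal sketch of the quoting problem. The `escapeSqlString` helper below is hypothetical and only demonstrates the single-quote-doubling idea; prefer the SDK's `sanitizeSQL` and `sanitizeUUID` helpers in real code:

```javascript
// Hypothetical escape helper: doubles single quotes and wraps the value in
// quotes, so a value containing ' cannot terminate the SQL literal early.
// Illustrative only; use the SDK's sanitization helpers instead.
function escapeSqlString(value) {
  return `'${String(value).replace(/'/g, "''")}'`;
}

// A WHEN clause built from an untrusted value stays a single string literal:
const listId = "x'; DROP TABLE todos; --";
const whenClause = `json_extract(NEW.data, '$.list_id') = ${escapeSqlString(listId)}`;
```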
+ + +## Lower-level: createDiffTrigger (advanced) + +Set up temporary triggers that write change operations into a temporary table you control. Prefer `trackTableDiff` unless you need to manage lifecycle and locking manually (e.g., buffer diffs to process them later). Note that since the table is created as a temporary table on the SQLite write connection, it can only be accessed within operations performed inside a writeLock. + +```javascript +// Define the temporary table to store the diff +const tempTable = 'listsDiff'; + +// Configure triggers to record INSERT and UPDATE operations on `lists` +const dispose = await db.triggers.createDiffTrigger({ + // PowerSync source table/view to trigger and track changes from. + // This should be present in the PowerSync database's schema. + source: 'lists', + // Destination table to send changes to. + // This table is created internally as a SQLite temporary table. + // This table will be dropped once the trigger is removed. + destination: tempTable, + // Required WHEN clause per operation to filter inside the trigger. Use 'TRUE' to track all. + when: { + INSERT: 'TRUE', + UPDATE: sanitizeSQL`json_extract(NEW.data, '$.name') IS NOT NULL` + }, + // Specifies which columns from the source table to track in the diff records. + // Defaults to all columns in the source table. + // Use an empty array to track only the ID and operation. + columns: ['name'] +}); + +// ... perform writes on `lists` ... 
+ +// Consume and clear changes within a writeLock +await db.writeLock(async (tx) => { + const changes = await tx.getAll(/* sql */ ` + SELECT * FROM ${tempTable} + `); + + // Process changes here + + // Clear after processing + await tx.execute(/* sql */ `DELETE FROM ${tempTable};`); +}); + +// Later, clean up triggers and temp table +await dispose(); +``` + + + + + diff --git a/usage/use-case-examples/raw-tables.mdx b/usage/use-case-examples/raw-tables.mdx index 48617a3d..43036753 100644 --- a/usage/use-case-examples/raw-tables.mdx +++ b/usage/use-case-examples/raw-tables.mdx @@ -33,12 +33,15 @@ Also note that raw tables are only supported by the new [Rust-based sync client] Consider raw tables when you need: -- **Advanced SQLite features** like `FOREIGN KEY` and `ON DELETE CASCADE` constraints - **Indexes** - PowerSync's default schema has basic support for indexes on columns, while raw tables give you complete control to create indexes on expressions, use `GENERATED` columns, etc - **Improved performance** for complex queries (e.g., `SELECT SUM(value) FROM transactions`) - raw tables more efficiently get these values directly from the SQLite column, instead of extracting the value from the JSON object on every row - **Reduced storage overhead** - eliminate JSON object overhead for each row in `ps_data__
.data` column - **To manually create tables** - Sometimes you need full control over table creation, for example when implementing custom triggers + +**Advanced SQLite features** like `FOREIGN KEY` and `ON DELETE CASCADE` constraints may need special handling. If your use case requires these features, please reach out to us for guidance and potential workarounds. + + ## How Raw Tables Work ### Current JSON-Based System @@ -58,8 +61,8 @@ When opting in to raw tables, you are responsible for creating the tables before Because PowerSync takes no control over raw tables, you need to manually: -1. Tell PowerSync how to map the [schemaless protocol](/architecture/powersync-protocol#protocol) to your raw tables when syncing data. -2. Configure custom triggers to forward local writes to PowerSync. +1. Define how PowerSync's [schemaless protocol](/architecture/powersync-protocol#protocol) maps to your raw tables — see [Define sync mapping for raw tables](#define-sync-mapping-for-raw-tables) +2. Define triggers that capture local writes from raw tables — see [Capture local writes with triggers](#capture-local-writes-with-triggers) For the purpose of this example, consider a simple table like this: @@ -72,7 +75,7 @@ CREATE TABLE todo_lists ( ) STRICT; ``` -#### Syncing into raw tables +#### Define sync mapping for raw tables To sync into the raw `todo_lists` table instead of `ps_data__`, PowerSync needs the SQL statements extracting columns from the untyped JSON protocol used during syncing. @@ -206,7 +209,7 @@ Unfortunately, raw tables are not available in the .NET SDK yet. After adding raw tables to the schema, you're also responsible for creating them by executing the corresponding `CREATE TABLE` statement before `connect()`-ing the database. -#### Collecting local writes on raw tables +#### Capture local writes with triggers PowerSync uses an internal SQLite table to collect local writes. 
For PowerSync-managed views, a trigger for insertions, updates and deletions automatically forwards local mutations into this table. diff --git a/usage/use-case-examples/watch-queries.mdx b/usage/use-case-examples/watch-queries.mdx new file mode 100644 index 00000000..912be10d --- /dev/null +++ b/usage/use-case-examples/watch-queries.mdx @@ -0,0 +1,460 @@ +--- +title: 'Live Queries / Watch Queries' +sidebarTitle: 'Live Queries' +description: 'Subscribe to real-time data changes with reactive watch queries' +--- + +import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; +import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; +import FlutterWatch from '/snippets/flutter/basic-watch-query.mdx'; +import KotlinWatch from '/snippets/kotlin-multiplatform/basic-watch-query.mdx'; +import SwiftWatch from '/snippets/swift/basic-watch-query.mdx'; +import DotNetWatch from '/snippets/dotnet/basic-watch-query.mdx'; + +Watch queries, also known as live queries, are essential for building reactive apps where the UI automatically updates when the underlying data changes. PowerSync's watch functionality allows you to subscribe to SQL query results and receive updates whenever the dependent tables are modified. + +# Overview + +PowerSync provides multiple approaches to watching queries, each designed for different use cases and performance requirements: + +1. **Basic Watch Queries** - These queries work across all SDKs, providing real-time updates when dependent tables change +2. **Incremental Watch Queries** - Only emit updates when data actually changes, preventing unnecessary re-renders +3. **Differential Watch Queries** - Provide detailed information about what specifically changed between result sets + +Choose the approach that best fits your platform and performance needs. + +# Basic Watch Queries + +PowerSync supports the following basic watch queries based on your platform. 
These APIs return query results whenever the underlying tables change and are available across all SDKs. + +Scroll horizontally to find your preferred platform/framework for an example: + + + + + + This method is only being maintained for backwards compatibility purposes. Use the improved `db.query.watch()` API instead (see [Incremental Watch Queries](#incremental-watch-queries) below). + + +The original watch method using the AsyncIterator pattern. This is the foundational watch API that works across all JavaScript environments and is being maintained for backwards compatibility. + + + + + + + + This method is only being maintained for backwards compatibility purposes. Use the improved `db.query.watch()` API instead (see [Incremental Watch Queries](#incremental-watch-queries) below). + + +The callback-based watch method that doesn't require AsyncIterator polyfills. Use this approach when you need smoother React Native compatibility or prefer synchronous method signatures: + + + + + + +React hook that combines watch functionality with built-in loading, fetching, and error states. Use this when you need convenient state management without React Suspense: + +```javascript +const { + data: pendingLists, + isLoading, + isFetching, + error +} = useQuery('SELECT * FROM lists WHERE state = ?', ['pending']); +``` + + + + +React Suspense-based hook that automatically handles loading and error states through Suspense boundaries. Use this when you want to leverage React's concurrent features and avoid manual state handling: + +```javascript +const { data: pendingLists } = useSuspenseQuery('SELECT * FROM lists WHERE state = ?', ['pending']); +``` + + + + +Vue composition API hook with built-in loading, fetching, and error states. 
Use this for reactive watch queries in Vue applications: + +```javascript +const { + data: pendingLists, + isLoading, + isFetching, + error +} = useQuery('SELECT * FROM lists WHERE state = ?', ['pending']); +``` + + + + +Use this method to watch for changes to the dependent tables of any SQL query: + + + + + + +Use this method to watch for changes to the dependent tables of any SQL query: + + + + + + +Use this method to watch for changes to the dependent tables of any SQL query: + + + + + + +Use this method to watch for changes to the dependent tables of any SQL query: + + + + + + +# Incremental Watch Queries + +Basic watch queries can cause performance issues in UI frameworks like React because they return new data on every dependent table change, even when the actual data in the query hasn't changed. This can lead to excessive re-renders as components receive updates unnecessarily. + +Incremental watch queries solve this by comparing result sets using configurable comparators and only emitting updates when the comparison detects actual data changes. These queries still query the SQLite DB under the hood on each dependent table change, but compare the result sets and only yield results if a change has been made. + + + **JavaScript Only**: Incremental and differential watch queries are currently only available in the JavaScript SDKs starting from: + * Web v1.25.0 + * React Native v1.23.1 + * Node.js v0.8.1 + + +Basic Syntax: + +```javascript +db.query({ sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['pending'] }).watch(); +``` + +Scroll horizontally to find your preferred approach for an example: + + + + +`WatchedQuery` class with an improved API: it includes loading, fetching, and error states; supports multiple listeners; cleans up automatically when the PowerSync client closes; and adds the new `updateSettings()` API for dynamic parameter changes.
This is the preferred approach for JavaScript SDKs: + +```javascript +// Create an instance of a WatchedQuery +const pendingLists = db + .query({ + sql: 'SELECT * FROM lists WHERE state = ?', + parameters: ['pending'] + }) + .watch(); + +// The registerListener method can be used multiple times to listen for updates +const dispose = pendingLists.registerListener({ + onData: (data) => { + // This callback will be called whenever the data changes + console.log('Data updated:', data); + }, + onStateChange: (state) => { + // This callback will be called whenever the state changes + // The state contains metadata about the query, such as isFetching, isLoading, etc. + console.log('State changed:', state.error, state.isFetching, state.isLoading, state.data); + }, + onError: (error) => { + // This callback will be called if the query fails + console.error('Query error:', error); + } +}); +``` + + + + +`WatchedQuery` class with configurable comparator that compares result sets before emitting to listeners, preventing unnecessary listener invocations when data hasn't changed. Use this when you want shared query instances plus result set comparison for incremental updates: + +```javascript +// Create an instance of a WatchedQuery +const pendingLists = db + .query({ + sql: 'SELECT * FROM lists WHERE state = ?', + parameters: ['pending'] + }) + .watch({ + comparator: { + checkEquality: (current, previous) => { + // This comparator will only report updates if the data changes. + return JSON.stringify(current) === JSON.stringify(previous); + } + } + }); + +// Register listeners as before... +``` + + + + +React hook that preserves object references for unchanged items and uses row-level comparators to minimize re-renders.
Use this when you want built-in state management plus incremental updates for React components: + +```javascript +const { + data: pendingLists, + isLoading, + isFetching, + error +} = useQuery('SELECT * FROM lists WHERE state = ?', ['pending'], { + rowComparator: { + keyBy: (item) => item.id, + compareBy: (item) => JSON.stringify(item) + } +}); +``` + + + + +React Suspense hook that preserves object references for unchanged items and uses row-level comparators to minimize re-renders. Use this when you want concurrent React features, automatic state handling, and memoization-friendly object stability: + +```javascript +const { data: lists } = useSuspenseQuery('SELECT * FROM lists WHERE state = ?', ['pending'], { + rowComparator: { + keyBy: (item) => item.id, + compareBy: (item) => JSON.stringify(item) + } +}); +``` + + + + +Providing a `rowComparator` to the React hooks ensures that components only re-render when the query result actually changes. When combined with React memoization (e.g., `React.memo`) on row components that receive query row objects as props, this approach prevents unnecessary updates at the individual row component level, resulting in more efficient UI rendering. + +```jsx +const TodoListsWidget = () => { + const { data: lists } = useQuery('[SQL]', [...parameters], { rowComparator: DEFAULT_ROW_COMPARATOR }); + + return ( + <ul> + { + // The individual row widgets will only re-render if the corresponding row has changed + lists.map((listRecord) => ( + <TodoWidget key={listRecord.id} record={listRecord} /> + )) + } + </ul> + ); +}; + +const TodoWidget = React.memo(({ record }) => { + return <li>{record.name}</li>; +}); +``` + + + + + +Existing AsyncIterator API with configurable comparator that compares current and previous result sets, only yielding when the comparator detects changes.
Use this if you want to maintain the familiar AsyncIterator pattern from the basic watch query API: + +```javascript +async function* pendingLists(): AsyncIterable<any[]> { + for await (const result of db.watch('SELECT * FROM lists WHERE state = ?', ['pending'], { + comparator: { + checkEquality: (current, previous) => JSON.stringify(current) === JSON.stringify(previous) + } + })) { + yield result.rows?._array ?? []; + } +} +``` + + + + +Existing Callback API with configurable comparator that compares result sets and only invokes the callback when changes are detected. Use this if you want to maintain the familiar callback pattern from the basic watch query API: + +```javascript +const pendingLists = (onResult: (lists: any[]) => void): void => { + db.watch( + 'SELECT * FROM lists WHERE state = ?', + ['pending'], + { + onResult: (result: any) => { + onResult(result.rows?._array ?? []); + } + }, + { + comparator: { + checkEquality: (current, previous) => { + // This comparator will only report updates if the data changes. + return JSON.stringify(current) === JSON.stringify(previous); + } + } + } + ); +}; +``` + + + + + + +# Differential Watch Queries + +Differential queries go a step further than incremental watched queries by computing and reporting diffs between result sets (added/removed/updated items) while preserving object references for unchanged items. This enables more precise UI updates. + + + **JavaScript Only**: Incremental and differential watch queries are currently only available in the JavaScript SDKs starting from: + * Web v1.25.0 + * React Native v1.23.1 + * Node.js v0.8.1 + + + +For large result sets where re-running and comparing full query results becomes expensive, consider using trigger-based table diffs. See [High Performance Diffs](/usage/use-case-examples/high-performance-diffs).
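The reference preservation described above can be sketched as follows. This is a simplified illustration of the idea, not the SDK's implementation; the `mergeResults` name and its default comparator are assumptions for the example:

```javascript
// Simplified sketch: build the next result array by reusing the previous
// object for any row whose content is unchanged, so reference equality
// (and therefore React.memo) continues to hold for untouched rows.
function mergeResults(
  previous,
  current,
  keyBy = (row) => row.id,
  compareBy = (row) => JSON.stringify(row)
) {
  const prevByKey = new Map(previous.map((row) => [keyBy(row), row]));
  return current.map((row) => {
    const prev = prevByKey.get(keyBy(row));
    return prev !== undefined && compareBy(prev) === compareBy(row) ? prev : row;
  });
}
```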
+ + +Basic syntax: + +```javascript +db.query({ sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['pending'] }).differentialWatch(); +``` + +Use differential watch when you need to know exactly which items were added, removed, or updated rather than re-processing entire result sets: + +```javascript +// Create an instance of a WatchedQuery +const pendingLists = db + .query({ + sql: 'SELECT * FROM lists WHERE state = ?', + parameters: ['pending'] + }) + .differentialWatch(); + +// The registerListener method can be used multiple times to listen for updates +const dispose = pendingLists.registerListener({ + onData: (data) => { + // This callback will be called whenever the data changes + console.log('Data updated:', data); + }, + onStateChange: (state) => { + // This callback will be called whenever the state changes + // The state contains metadata about the query, such as isFetching, isLoading, etc. + console.log('State changed:', state.error, state.isFetching, state.isLoading, state.data); + }, + onError: (error) => { + // This callback will be called if the query fails + console.error('Query error:', error); + }, + onDiff: (diff) => { + // This callback reports the diff between the previous and current results + console.log('Diff:', diff.added, diff.updated); + } +}); +``` + +By default, the `differentialWatch()` method uses a `DEFAULT_ROW_COMPARATOR`. This comparator identifies (keys) each row by its `id` column if present, or otherwise by the JSON string of the entire row. For row comparison, it uses the JSON string representation of the full row. This approach is generally safe and effective for most queries. + +For some queries, performance can be improved by supplying a custom `rowComparator`, such as one that compares rows by a `hash` column generated or stored in SQLite. These hashes currently require manual implementation.
+ +```javascript +const pendingLists = db + .query({ + sql: 'SELECT * FROM lists WHERE state = ?', + parameters: ['pending'] + }) + .differentialWatch({ + rowComparator: { + keyBy: (item) => item.id, + compareBy: (item) => item._hash + } + }); +``` + + + The [Yjs Document Collaboration Demo + app](https://github.com/powersync-ja/powersync-js/tree/main/demos/yjs-react-supabase-text-collab) showcases the use of + differential watch queries. New document updates are passed to Yjs for consolidation as they are synced. See the + implementation + [here](https://github.com/powersync-ja/powersync-js/blob/main/demos/yjs-react-supabase-text-collab/src/library/powersync/PowerSyncYjsProvider.ts) + for more details. + + +# The `WatchedQuery` Class + +Both incremental and differential queries use the new `WatchedQuery` class. This class, together with the new `query` method, allows building `WatchedQuery` instances via the `watch` and `differentialWatch` methods: + +```javascript +const watchedQuery = db.query({ sql: 'SELECT * FROM lists', parameters: [] }).watch(); +``` + +This class provides advanced features: + +- Automatically reprocesses itself if the PowerSync schema has been updated with `updateSchema`. +- Automatically closes itself when the PowerSync client has been closed. +- Allows for the query parameters to be updated after instantiation. +- Allows shared listening to state changes. +- New `updateSettings` API for dynamic parameter updates (see below).
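The `hash` column mentioned in the differential section above currently requires manual implementation. One way to sketch it in application code is a small non-cryptographic hash over the row's JSON; the FNV-1a variant and the `rowHash` name below are illustrative assumptions, not an SDK API:

```javascript
// Illustrative FNV-1a hash over a row's JSON representation. Comparing two
// short hash strings in compareBy avoids repeatedly comparing full JSON.
function rowHash(row) {
  const json = JSON.stringify(row);
  let hash = 0x811c9dc5;
  for (let i = 0; i < json.length; i++) {
    hash ^= json.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// A custom comparator with the same shape as the rowComparator examples above:
const hashComparator = {
  keyBy: (row) => row.id,
  compareBy: (row) => rowHash(row)
};
```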
+ +## Query Sharing + +`WatchedQuery` instances can be shared across components: + +```javascript +// Create a shared query instance +const sharedListsQuery = db.query({ sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['pending'] }).watch(); + +// Multiple components can listen to the same query +const dispose1 = sharedListsQuery.registerListener({ + onData: (data) => updatePendingListsDisplay(data) +}); + +const dispose2 = sharedListsQuery.registerListener({ + onData: (data) => updatePendingListsCount(data.length) +}); +``` + +## Dynamic Parameter Updates + +Update query parameters to affect all subscribers of the query: + +```javascript +// Updates to query parameters can be performed in a single place, affecting all subscribers +sharedListsQuery.updateSettings({ + query: { sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['canceled'] } +}); +``` + +## React Hook for External WatchedQuery Instances + +When you need to share query instances across components or manage their lifecycle independently from component mounting, use the `useWatchedQuerySubscription` hook. This is ideal for global state management, query caching, or when multiple components need to subscribe to the same data: + +```javascript +// Managing the WatchedQuery externally can extend its lifecycle and allow in-memory caching between components. +const pendingLists = db + .query({ + sql: 'SELECT * FROM lists WHERE state = ?', + parameters: ['pending'] + }) + .watch(); + +// In the component +export const MyComponent = () => { + // In React one could import the `pendingLists` query or create a context provider for various queries + const { data } = useWatchedQuerySubscription(pendingLists); + + return ( +
+ <ul> + {data.map((item) => ( + <li key={item.id}>{item.name}</li> + ))} + </ul> + ); +}; +```
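The listener and dispose semantics used throughout these examples can be sketched without any framework. The `SharedQuery` class below is a hypothetical illustration of the pattern (one shared source, many independently disposable listeners), not the SDK's `WatchedQuery` implementation:

```javascript
// Minimal illustration of the shared-listener pattern: many subscribers
// observe one data stream, and each registration returns its own dispose.
class SharedQuery {
  constructor() {
    this.listeners = new Set();
  }

  registerListener(listener) {
    this.listeners.add(listener);
    // Returning a dispose function lets each subscriber clean up independently.
    return () => this.listeners.delete(listener);
  }

  emit(data) {
    for (const listener of this.listeners) {
      listener.onData?.(data);
    }
  }
}
```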