Improve Connection Locks #607
Conversation
🦋 Changeset detected. Latest commit: c165795. The changes in this PR will be included in the next version bump. This PR includes changesets to release 7 packages.
Pull Request Overview
This PR improves the connection handling logic by ensuring that only one connection attempt is active at a time and by tracking the latest parameters passed to connect calls. Key changes include:
- Adding abort signal checks and listeners in the mock stream factory.
- Refactoring connection logic and exclusive locking mechanisms in multiple platform-specific implementations.
- Enhancing the test suite to assert that only the expected number of connection stream implementations are created.
Reviewed Changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 2 comments.
File | Description |
---|---|
packages/web/tests/utils/MockStreamOpenFactory.ts | Adds abort signal checks to close streams early when aborted. |
packages/web/tests/stream.test.ts | Updates test to track generated streams and asserts on connection attempt counts. |
packages/web/tests/src/db/PowersyncDatabase.test.ts | Refactors logger usage for improved test observability. |
packages/web/tests/src/db/AbstractPowerSyncDatabase.test.ts | Updates runExclusive implementation and imports for testing consistency. |
packages/web/src/db/PowerSyncDatabase.ts | Removes in-method exclusive lock in favor of a new connectExclusive helper. |
packages/react-native/src/db/PowerSyncDatabase.ts, packages/node/src/db/PowerSyncDatabase.ts | Introduces async-lock integration via a dedicated lock instance. |
packages/common/src/client/sync/stream/AbstractStreamingSyncImplementation.ts | Updates abort controller and listener disposal for cleaner listener handling. |
packages/common/src/client/AbstractPowerSyncDatabase.ts | Revises connection/disconnect logic to properly manage pending connection options. |
.changeset/empty-pants-give.md | Updates changelog with minor version bumps and a brief description of improved connect behavior. |
Thanks a lot @stevensJourney. Very exciting work! Just tested the dev packages and wanted to share some preliminary observations: after expanding nodes, it seems that one stream keeps reconnecting every few seconds without any action from my end. Is this expected? Otherwise, I'll do another pass later to figure out what could be the issue. (Attachment: Screen.Recording.2025-05-27.at.04.03.21.mov)
Hi @obezzad, that behaviour is not expected. I have not yet been able to reproduce the periodic reconnections on my end either. I was, however, able to reproduce a deadlock in our React Supabase demo after connecting multiple times in multiple-tab mode. I'll investigate the deadlock in more detail soon.
```
@@ -187,9 +187,6 @@ export class SharedWebStreamingSyncImplementation extends WebStreamingSyncImplem
   */
  async connect(options?: PowerSyncConnectionOptions): Promise<void> {
    await this.waitForReady();
    // This is needed since a new tab won't have any reference to the
```
This is not needed anymore. Each connect internally is guaranteed to do a disconnect.
Pull Request Overview
This PR improves connection performance by consolidating multiple rapid calls to connect into fewer connection attempts while ensuring only one active connection at a time. Key changes include buffering the latest connection parameters, introducing a new ConnectionManager to coordinate connect/disconnect operations, and updating both the implementation and tests across web, node, and common packages.
Reviewed Changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 4 comments.
File | Description |
---|---|
packages/web/tests/utils/MockStreamOpenFactory.ts | Added abort-signal checks to close stream controllers early. |
packages/web/tests/stream.test.ts | Updated tests to include logger usage and verify connection consolidation. |
packages/web/tests/src/db/PowersyncDatabase.test.ts | Reworked logger setup for more robust logging in connection tests. |
packages/web/tests/src/db/AbstractPowerSyncDatabase.test.ts | Minor import reordering and consistency updates. |
packages/web/src/worker/sync/SharedSyncImplementation.worker.ts | Changed onconnect to an async handler to await port addition. |
packages/web/src/worker/sync/SharedSyncImplementation.ts | Refactored port handling with portMutex and improved disconnect/reconnect logic. |
packages/web/src/db/sync/SharedWebStreamingSyncImplementation.ts | Removed forced disconnect in connect and added close acknowledgment handling. |
packages/web/src/db/PowerSyncDatabase.ts | Delegated connect/disconnect logic to the ConnectionManager. |
packages/node/src/db/PowerSyncDatabase.ts | Minor import order adjustments. |
packages/common/src/index.ts | Modified exports to reflect new structure. |
packages/common/src/client/sync/stream/AbstractStreamingSyncImplementation.ts | Enhanced error handling and retry delay logic during streaming sync. |
packages/common/src/client/ConnectionManager.ts | Introduced a new ConnectionManager coordinating connection attempts and disconnects. |
packages/common/src/client/AbstractPowerSyncDatabase.ts | Integrated ConnectionManager for connection operations using an exclusive lock. |
packages/common/rollup.config.mjs | Updated variable naming for source map configuration. |
.changeset/empty-pants-give.md | Updated changeset reflecting improvements in connect behavior. |
Comments suppressed due to low confidence (3)
packages/web/src/worker/sync/SharedSyncImplementation.worker.ts:15
- Marking the onconnect handler as async may delay port connection; ensure any dependent code properly handles the asynchronous resolution.
async function (event: MessageEvent<string>) {
packages/web/src/db/sync/SharedWebStreamingSyncImplementation.ts:210
- Ensure that the event listener for CLOSE_ACK is properly removed after resolution to prevent potential memory leaks over time.
await new Promise<void>((resolve) => {
packages/common/src/client/AbstractPowerSyncDatabase.ts:471
- [nitpick] Ensure that all custom connection options passed to the ConnectionManager are fully compatible with its implementation to avoid any unintended regressions.
async connect(connector: PowerSyncBackendConnector, options?: PowerSyncConnectionOptions) {
```
 * In some cases, such as React Native iOS, the WebSocket connection may close immediately after opening
 * without an error. In such cases, we need to handle the close event to reject the promise.
 */
const closeListener = () => {
```
This is a tag-along fix.
While regression testing I noticed some weird behaviour in React Native on iOS.
Stopping my local PowerSync service while my iOS client was connected would result in the active connection aborting and retrying a connection as expected, but the first attempt after stopping the service would never resolve or reject. This effectively froze the sync implementation. From testing I noticed that the WebSocket would change from the `opening` state to the `closed` state without emitting any error. The standard implementation of `WebsocketClientTransport` only listens to the `open` and `error` events. This fork now also listens to the `close` event and rejects the connection attempt, which allows the sync implementation to continue retrying connections.
Interestingly enough, only the first connection attempt (after stopping the PowerSync service) shows this behaviour. Subsequent connection attempts do emit an `error` event.
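To make the pattern above concrete, here is a hedged sketch. `SocketLike`, `awaitSocketOpen`, and the error messages are illustrative names, not the actual fork of `WebsocketClientTransport`; the event names mirror the standard WebSocket API.

```typescript
// Minimal interface so the sketch is not tied to a real WebSocket instance.
interface SocketLike {
  addEventListener(type: string, listener: () => void): void;
  removeEventListener(type: string, listener: () => void): void;
}

// Resolve on "open", reject on "error" - and, per the tag-along fix, also
// reject on a silent "close", so a socket that goes straight from opening to
// closed (as observed on React Native iOS) cannot leave the promise pending.
function awaitSocketOpen(socket: SocketLike): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    // Remove all listeners once the attempt settles, to avoid leaks.
    const cleanup = () => {
      socket.removeEventListener('open', openListener);
      socket.removeEventListener('error', errorListener);
      socket.removeEventListener('close', closeListener);
    };
    const openListener = () => {
      cleanup();
      resolve();
    };
    const errorListener = () => {
      cleanup();
      reject(new Error('WebSocket connection error'));
    };
    // The extra listener: a close before open now fails fast instead of hanging,
    // which lets the sync implementation's retry loop continue.
    const closeListener = () => {
      cleanup();
      reject(new Error('WebSocket closed before the connection opened'));
    };
    socket.addEventListener('open', openListener);
    socket.addEventListener('error', errorListener);
    socket.addEventListener('close', closeListener);
  });
}
```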
Overview
TL;DR: Improves connection performance when performing multiple calls to `connect` in rapid succession.
We currently protect the logic in `connect` calls with an exclusive mutex/lock. This mutex causes invocations of `connect` to be queued. Certain use-cases, such as updating client parameters, can result in rapid calls to `connect` when client parameters change. The mutex queuing can cause undesirable delays to this process: each call to `connect` results in a connection attempt which needs to be awaited before proceeding to the next attempt.
The work here retains a single connection attempt being active at a time, which prevents possible race conditions. However, instead of queuing all requests sequentially, the system now buffers and consolidates multiple calls, always using the most recent connection parameters.
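The buffering-and-consolidation idea can be sketched as follows. `ConnectionConsolidator` and its fields are illustrative stand-ins, not the actual PowerSync code:

```typescript
// Stand-in for PowerSyncConnectionOptions in this sketch.
type ConnectionOptions = { params?: Record<string, unknown> };

class ConnectionConsolidator {
  attempts = 0; // number of real connection attempts actually started
  private pendingOptions: ConnectionOptions | null = null;
  private connectingPromise: Promise<void> | null = null;

  constructor(private doConnect: (options: ConnectionOptions) => Promise<void>) {}

  connect(options: ConnectionOptions = {}): Promise<void> {
    // Always buffer the latest options; an in-flight attempt picks them up.
    this.pendingOptions = options;
    if (this.connectingPromise) {
      // A connection attempt is already running: share it instead of queueing.
      return this.connectingPromise;
    }
    this.connectingPromise = (async () => {
      // Keep connecting while newer options were buffered mid-attempt, so a
      // burst of calls collapses into the attempt already in flight plus one
      // retry with the most recent parameters.
      while (this.pendingOptions !== null) {
        const current = this.pendingOptions;
        this.pendingOptions = null;
        this.attempts++;
        await this.doConnect(current);
      }
    })().finally(() => {
      this.connectingPromise = null;
    });
    return this.connectingPromise;
  }
}
```

With a burst of rapid calls against a slow `doConnect`, the consolidator performs far fewer real attempts than calls: the one already in flight, plus a single follow-up carrying the last caller's parameters.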
Implementation
Key Changes
The `AbstractPowerSyncDatabase` implementation has been improved to track pending asynchronous connect and disconnect operations:
- `pendingConnectionOptions` - Buffers the latest connection parameters
- `connectingPromise` - Tracks the active connection attempt to prevent races
- `disconnectingPromise` - Coordinates disconnect operations
Connection Behavior
The timing of `connect` calls greatly affects the outcome of the resulting internal `connect` operations. Calling `connect` periodically, where the period is longer than the time required to connect, will still result in a serial queue of connection attempts. `connect` calls made with a period shorter than the time taken to connect or disconnect will be compacted into fewer requests.
Scenario 1: Rapid Successive Calls During Initialization
Calling `.connect` multiple times while the SyncStream implementation is initializing will result in the SyncStream implementation making a single request with the latest parameters. Subsequent calls to `connect`, after the SyncStream has been initialized, will result in a reconnect with the latest (potentially compacted) options.
Scenario 2: Updates During Active Disconnect
`disconnect` operations are shared internally. Calling `connect` multiple times re-uses a single disconnect operation. The latest options are buffered while the disconnect is in progress.
Edge Cases Handled
- `runExclusive`
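The shared-disconnect behaviour from Scenario 2 can be sketched like this (class and field names are illustrative, not the real PowerSync internals):

```typescript
// The first caller starts the underlying disconnect; any caller arriving
// while it is still in flight awaits the same promise instead of issuing a
// second disconnect.
class DisconnectSharer {
  disconnects = 0; // number of real disconnect operations actually started
  private disconnectingPromise: Promise<void> | null = null;

  constructor(private doDisconnect: () => Promise<void>) {}

  disconnect(): Promise<void> {
    if (!this.disconnectingPromise) {
      this.disconnects++;
      this.disconnectingPromise = this.doDisconnect().finally(() => {
        // Reset once settled so a later disconnect starts a fresh operation.
        this.disconnectingPromise = null;
      });
    }
    return this.disconnectingPromise;
  }
}
```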
Performance Comparison
Test Results: 21 Rapid `connect()` Calls
Before (main branch)
21 sync stream implementations created, each attempting connection before disposal
After (this PR)
Only 3 sync implementations created with intelligent parameter consolidation
Breaking Changes
None - this is a pure performance and efficiency improvement with identical public API behavior.
Testing
This can be tested by using the following development packages.