
Comparing changes

Choose two branches to see what's changed or to start a new pull request. You can also learn more about diff comparisons.

base repository: lightningdevkit/rust-lightning
base: main
head repository: lightningdevkit/rust-lightning
compare: 0.0.125
  • 8 commits
  • 13 files changed
  • 3 contributors

Commits on Oct 14, 2024

  1. Set holder_commitment_point to Available on upgrade

    When we upgrade from LDK 0.0.123 or prior, we need to initialize
    `holder_commitment_point` with commitment point(s). In
    1f7f3a3 we changed from fetching both the current and next
    per-commitment-point (setting the value to
    `HolderCommitmentPoint::Available` on upgrade) to fetching only
    the current per-commitment-point (setting the value to
    `HolderCommitmentPoint::PendingNext` on upgrade).
    
    In `commitment_signed` handling, we expect the next
    per-commitment-point to be available (allowing us to `advance()`
    the `holder_commitment_point`), as it was included in the
    `revoke_and_ack` we most recently sent to our peer, so must've been
    available at that time.
    
    Sadly, these two interact badly with each other: on upgrade,
    assuming the channel is in a steady state with no pending
    updates, we will not make the next per-commitment-point
    available, but if we receive a `commitment_signed` from our peer
    we will assume it is. As a result, we hit an assertion failure in
    debug mode, and in production mode we force-close the channel.
    
    Instead, here, we fix the upgrade logic to always upgrade directly
    to `HolderCommitmentPoint::Available`, making the next
    per-commitment-point available immediately.
    
    We also attempt to resolve the next per-commitment-point in
    `get_channel_reestablish`, allowing any channels which were
    upgraded to LDK 0.0.124 and are in this broken state to avoid the
    force-closure, as long as they don't receive a `commitment_signed`
    in the interim.
    TheBlueMatt committed Oct 14, 2024 · 68d27bc
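    A minimal, hypothetical sketch of the upgrade path described above; the types and the signer method below are illustrative stand-ins, not LDK's actual API:

    ```rust
    // Simplified stand-ins for illustration only.
    type PerCommitmentPoint = [u8; 33];

    enum HolderCommitmentPoint {
        /// Only the current point is known; the next still has to be fetched.
        PendingNext { transaction_number: u64, current: PerCommitmentPoint },
        /// Both the current and the next point are known, so `commitment_signed`
        /// handling can always advance.
        Available { transaction_number: u64, current: PerCommitmentPoint, next: PerCommitmentPoint },
    }

    trait Signer {
        // Hypothetical helper standing in for the real signer call.
        fn get_per_commitment_point(&self, transaction_number: u64) -> PerCommitmentPoint;
    }

    // After the fix (conceptually): derive both points on upgrade so we land in
    // `Available` immediately. Commitment transaction numbers count down, so the
    // next point corresponds to `transaction_number - 1`.
    fn upgrade_to_available<S: Signer>(signer: &S, transaction_number: u64) -> HolderCommitmentPoint {
        HolderCommitmentPoint::Available {
            transaction_number,
            current: signer.get_per_commitment_point(transaction_number),
            next: signer.get_per_commitment_point(transaction_number - 1),
        }
    }
    ```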
  2. Don't bump the next_node_counter when using a removed counter

    If we manage to pull a `node_counter` from `removed_node_counters`
    for reuse, `add_channel_between_nodes` would `unwrap_or` with the
    incremented `next_node_counter` value. This looks right at a
    glance, except that the argument to `unwrap_or` is always
    evaluated, causing us to always increment `next_node_counter`
    even when we don't use it.
    
    This will result in the `node_counter`s always growing any time we
    add a new node to our graph, leading to somewhat larger memory
    usage when routing and a debug assertion failure in
    `test_node_counter_consistency`.
    
    The fix is trivial; this is exactly what `unwrap_or_else` is for.
    TheBlueMatt committed Oct 14, 2024 · acfc6d9
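    A self-contained illustration of the bug class (not LDK's actual code): the argument to `unwrap_or` is evaluated eagerly, so a counter bump passed through it happens even when the `Option` is `Some`, while `unwrap_or_else` takes a closure that only runs when the `Option` is `None`:

    ```rust
    fn bump(next_node_counter: &mut u32) -> u32 {
        let counter = *next_node_counter;
        *next_node_counter += 1;
        counter
    }

    fn main() {
        let mut next_node_counter = 5u32;
        let removed_counter: Option<u32> = Some(2);

        // Buggy pattern: `bump` runs unconditionally, so `next_node_counter`
        // becomes 6 even though we reuse counter 2.
        let used = removed_counter.unwrap_or(bump(&mut next_node_counter));
        assert_eq!((used, next_node_counter), (2, 6));

        // Fixed pattern: the closure only runs when there is no removed counter
        // to reuse, so `next_node_counter` is left untouched here.
        let used = removed_counter.unwrap_or_else(|| bump(&mut next_node_counter));
        assert_eq!((used, next_node_counter), (2, 6));
    }
    ```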
  3. Fix synchronize_listeners calling default implementation

    Previously, the `ChainListenerSet` `Listen` implementation wouldn't
    forward to the listeners' `block_connected` implementations outside of
    tests. This would result in the default implementation of
    `Listen::block_connected` being used and the listeners' own
    implementations never being called.
    tnull authored and TheBlueMatt committed Oct 14, 2024 · a675d48
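    A hedged sketch of the pattern involved, using simplified stand-in types rather than LDK's actual `Listen` trait: a trait with a provided default method, and a collection wrapper that must explicitly forward that method or the default runs instead of each listener's own override:

    ```rust
    trait Listen {
        fn filtered_block_connected(&self, height: u32);
        // Provided default; individual listeners may override this.
        fn block_connected(&self, height: u32) {
            self.filtered_block_connected(height);
        }
    }

    struct ListenerSet<'a> {
        listeners: Vec<&'a dyn Listen>,
    }

    impl<'a> Listen for ListenerSet<'a> {
        fn filtered_block_connected(&self, height: u32) {
            for listener in &self.listeners {
                listener.filtered_block_connected(height);
            }
        }

        // Without this override, calling `block_connected` on the set hits the
        // trait's default method and each listener's own `block_connected`
        // override never runs; forwarding explicitly fixes that.
        fn block_connected(&self, height: u32) {
            for listener in &self.listeners {
                listener.block_connected(height);
            }
        }
    }
    ```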
  4. Don't interpret decayed data as we've failed to send tiny values

    When we're calculating the success probability for min-/max-bucket
    pairs and are looking at the 0th min-bucket, we only look at the
    highest max-bucket to decide the success probability. We ignore
    max-buckets which have a value below `BUCKET_FIXED_POINT_ONE` so
    that we only consider values which aren't substantially decayed.
    
    However, if all of our data is substantially decayed, this filter
    causes us to conclude that the highest max-bucket is bucket zero
    even though we really should then be looking at any bucket.
    
    Here we change that, falling back to the highest non-zero
    max-bucket when no max-buckets have a value above
    `BUCKET_FIXED_POINT_ONE`.
    TheBlueMatt committed Oct 14, 2024 · e48074a
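    An illustrative sketch of the fallback described above; the bucket layout and the value of `BUCKET_FIXED_POINT_ONE` here are assumptions for the example, not LDK's internals:

    ```rust
    // Assumed fixed-point "one" for illustration only.
    const BUCKET_FIXED_POINT_ONE: u16 = 32;

    /// Picks which max-bucket to treat as the highest when scoring against the
    /// 0th min-bucket: prefer the highest bucket with a non-decayed value, but
    /// if every bucket has decayed below `BUCKET_FIXED_POINT_ONE`, fall back to
    /// the highest non-zero bucket instead of wrongly settling on bucket zero.
    fn highest_max_bucket(max_buckets: &[u16; 32]) -> usize {
        let mut highest_nonzero = 0;
        let mut highest_non_decayed = None;
        for (idx, &value) in max_buckets.iter().enumerate() {
            if value != 0 {
                highest_nonzero = idx;
            }
            if value >= BUCKET_FIXED_POINT_ONE {
                highest_non_decayed = Some(idx);
            }
        }
        highest_non_decayed.unwrap_or(highest_nonzero)
    }
    ```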
  5. a6834eb
  6. f0f1b22
  7. 2028c78
  8. Merge pull request #3362 from TheBlueMatt/2024-10-0.0.125

    Cut 0.0.125 with a few bugfixes
    TheBlueMatt authored Oct 14, 2024 · e80d632