Beacon sync recalibrate blocks queue #3099

Merged: 5 commits from Beacon-sync-recalibrate-blocks-queue into master, Feb 20, 2025

Conversation

@mjfh (Contributor) commented Feb 20, 2025

No description provided.

mjfh added 5 commits February 20, 2025 09:40
why:
  Was a fallback for the case where the DB table was inaccessible before the
  `FC` module reorg.
why:
  Provides useful settings, e.g. for memory debugging
why:
  The old queue setup provided a staging area that was much too large,
  consuming too much idle memory. Also, the command-line re-calibration
  for debugging was much too complicated.

  The naming in the old setup was wrong, too: there is no maximum queue
  size. Rather, there is a high-water mark (HWM) at which filling the
  queue stops.

  The currently tested size allows for 1.5k blocks on the queue.
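
A minimal sketch of the HWM idea from the commit message above, assuming invented identifiers (`BlocksQueue`, `stagedHwm`, `canStageMore`, `canStartImport`) rather than the beacon sync module's actual names: the queue keeps filling only while it is below the high-water mark, and importing becomes possible once the mark is reached.

# Hypothetical sketch of high-water-mark (HWM) gating for a staged blocks
# queue; identifiers are illustrative, not taken from the beacon sync code.
const
  stagedHwm = 1_500          # roughly the "1.5k blocks" mentioned above

type
  BlocksQueue = object
    staged: seq[int]         # stand-in for staged block batches

proc canStageMore(q: BlocksQueue): bool =
  ## Keep filling the queue only while it is below the high-water mark.
  q.staged.len < stagedHwm

proc canStartImport(q: BlocksQueue): bool =
  ## Start importing once the HWM is reached (queue "filled up enough").
  q.staged.len >= stagedHwm

when isMainModule:
  let q = BlocksQueue(staged: newSeq[int](1_500))
  echo q.canStageMore()    # false: HWM reached, stop filling
  echo q.canStartImport()  # true: ready to import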
@mjfh marked this pull request as ready for review February 20, 2025 18:16
@mjfh merged commit 38413d2 into master Feb 20, 2025
8 checks passed
@mjfh deleted the Beacon-sync-recalibrate-blocks-queue branch February 20, 2025 18:29
@@ -147,8 +147,8 @@ func blocksStagedCanImportOk*(ctx: BeaconCtxRef): bool =
  if ctx.pool.nBuddies == 0:
    return true

  # Start importing if the queue is long enough.
  if ctx.pool.blocksStagedQuLenMax <= ctx.blk.staged.len:
  # Start importing if the queue is filled up enough
Member:

why wait here? 1 block should be enough to start importing, no?

@mjfh (Contributor, Author):

When importing while peers are still downloading, you will most certainly run into timeouts, thus losing the peers.

Member:

how is this different when you have more than N blocks in the queue already? i.e. is it ok to download and process blocks in parallel when there are many blocks resting in the queue, but not when there are few?

Member:

also, when did you last test this? there were many bugs in the peer management itself as well as performance issues that have been fixed.

@mjfh (Contributor, Author):

It proved not OK to download and import at the same time. The chance of losing peers is too high, so grabbing everything while they are there seems to be the wise approach.

@mjfh (Contributor, Author) commented Feb 20, 2025:

I can give it another try, importing and downloading simultaneously. We'll see (as you can see, these are mere flags to be set).
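
A hypothetical sketch of the trade-off discussed in this thread, using an invented `allowParallelImport` flag and invented field names (not the actual nimbus-eth1 flag or API): with the flag off, importing waits for the HWM so that slow import work cannot stall peers that are still downloading; with it on, importing starts as soon as anything is staged.

# Hypothetical sketch only: `allowParallelImport` and the field names below
# are invented for illustration, not taken from the beacon sync code.
type
  SyncState = object
    stagedLen: int             # blocks currently staged on the queue
    stagedHwm: int             # high-water mark from the recalibrated setup
    allowParallelImport: bool  # overlap importing with downloading?

proc shouldStartImport(s: SyncState): bool =
  ## With parallel import enabled, even a single staged block may be imported
  ## while peers keep downloading; otherwise wait until the HWM is reached so
  ## that slow import work cannot starve the peers into timeouts.
  if s.allowParallelImport:
    s.stagedLen > 0
  else:
    s.stagedLen >= s.stagedHwm

when isMainModule:
  let serial = SyncState(stagedLen: 200, stagedHwm: 1_500, allowParallelImport: false)
  let overlap = SyncState(stagedLen: 200, stagedHwm: 1_500, allowParallelImport: true)
  echo serial.shouldStartImport()   # false: keep filling until the HWM
  echo overlap.shouldStartImport()  # true: import as soon as anything is staged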
