
[Enhancement] Scale Backends for connector sink when writing static partition tables #41564

Open · wants to merge 8 commits into main from sink_shuffle

Conversation

mxdzs0612
Contributor

Why I'm doing:

What I'm doing:

By default, every backend participates in the connector sink from the start, which produces many small files when the data volume is small. This PR scales up the sink backends one by one when writing static partition tables, mitigating the small-file problem.

todo: add result pictures after #40540 is merged.

What type of PR is this:

  • BugFix
  • Feature
  • Enhancement
  • Refactor
  • UT
  • Doc
  • Tool

Does this PR entail a change in behavior?

  • Yes, this PR will result in a change in behavior.
  • No, this PR will not result in a change in behavior.

If yes, please specify the type of change:

  • Interface/UI changes: syntax, type conversion, expression evaluation, display information
  • Parameter changes: default values, similar parameters but with different default values
  • Policy changes: use new policy to replace old one, functionality automatically enabled
  • Feature removed
  • Miscellaneous: upgrade & downgrade compatibility, etc.

Checklist:

  • I have added test cases for my bug fix or my new feature
  • This PR needs user documentation (for new or modified features or behaviors)
    • I have added documentation for my new feature or new function
  • This is a backport PR

Bugfix cherry-pick branch check:

  • I have checked the version labels for the target branches to which this PR will be auto-backported
    • 3.2
    • 3.1
    • 3.0
    • 2.5

@mxdzs0612 mxdzs0612 force-pushed the sink_shuffle branch 5 times, most recently from 13657e5 to d3d331e Compare March 4, 2024 03:23
@mxdzs0612 mxdzs0612 closed this Mar 4, 2024
@mxdzs0612 mxdzs0612 reopened this Mar 4, 2024
@mxdzs0612 mxdzs0612 force-pushed the sink_shuffle branch 2 times, most recently from ce0331c to 75b62ea Compare March 5, 2024 04:33
@mxdzs0612 mxdzs0612 marked this pull request as ready for review March 5, 2024 10:36
@mxdzs0612 mxdzs0612 requested review from a team as code owners March 5, 2024 10:36
@mxdzs0612 mxdzs0612 marked this pull request as draft March 6, 2024 02:25
Signed-off-by: Jiao Mingye <[email protected]>

sonarqubecloud bot commented Mar 7, 2024


github-actions bot commented Mar 7, 2024

[FE Incremental Coverage Report]

pass : 20 / 23 (86.96%)

file detail

| path | covered_line | new_line | coverage | not_covered_line_detail |
|---|---|---|---|---|
| 🔵 com/starrocks/qe/SessionVariable.java | 2 | 4 | 50.00% | [2622, 3482] |
| 🔵 com/starrocks/sql/InsertPlanner.java | 11 | 12 | 91.67% | [804] |
| 🔵 com/starrocks/catalog/HiveTable.java | 4 | 4 | 100.00% | [] |
| 🔵 com/starrocks/planner/DataPartition.java | 2 | 2 | 100.00% | [] |
| 🔵 com/starrocks/sql/plan/PlanFragmentBuilder.java | 1 | 1 | 100.00% | [] |


github-actions bot commented Mar 7, 2024

[BE Incremental Coverage Report]

pass : 38 / 42 (90.48%)

file detail

| path | covered_line | new_line | coverage | not_covered_line_detail |
|---|---|---|---|---|
| 🔵 be/src/exec/pipeline/exchange/exchange_sink_operator.cpp | 36 | 40 | 90.00% | [569, 572, 574, 589] |
| 🔵 be/src/exec/pipeline/exchange/sink_buffer.h | 2 | 2 | 100.00% | [] |

@mxdzs0612 mxdzs0612 marked this pull request as ready for review March 7, 2024 04:39