sqs: Optimize for concurrency #226
Conversation
Here is the result of another load test with dynamic concurrency settings (+1 receiver|processor|deleter every 10s), virtually unlimited buffers (size > total messages in the SQS queue), and TriggerMesh's default CPU limit. Things become relatively unstable beyond 6 receiver|processor|deleter, which is 3 per thread.
The same load test as above, but this time without a CPU limit. Performance remains quite stable up to 12 receiver|processor|deleter (6 per thread), but the figures did not double compared to the previous experiment, because CloudEvents are still sent sequentially after messages are received from the queue. As I observed before, the CPU usage remains low.
@sebgoa the decision is now about defining the default per Pod. Do we want to
I would personally vote for 3, but follow up with auto-scaling (#230), and maybe also #227 to spawn/terminate goroutines dynamically based on the number of messages being received.
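For illustration, here is a rough sketch of the dynamic spawn/terminate idea behind #227. Everything below is hypothetical (the supervisor, thresholds, and channel wiring are not the adapter's actual code); it only shows how a pool of processor goroutines could grow when polls return full batches and shrink when they come back empty.

```go
package adapter

import (
	"context"
	"sync"
)

type message struct{ body string }

// supervise grows or shrinks a pool of processor goroutines based on how many
// messages the last SQS poll returned (reported on lastBatch). All names and
// thresholds here are illustrative only.
func supervise(ctx context.Context, msgs <-chan message, lastBatch <-chan int, maxProcs int) {
	var wg sync.WaitGroup
	var stops []context.CancelFunc

	spawn := func() {
		procCtx, cancel := context.WithCancel(ctx)
		stops = append(stops, cancel)
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-procCtx.Done():
					return
				case m := <-msgs:
					_ = m // process: send the CloudEvent, then delete the message
				}
			}
		}()
	}

	spawn() // always keep at least one processor running

	for {
		select {
		case <-ctx.Done():
			for _, stop := range stops {
				stop()
			}
			wg.Wait()
			return
		case n := <-lastBatch:
			switch {
			case n >= 10 && len(stops) < maxProcs:
				spawn() // busy queue: add a processor
			case n == 0 && len(stops) > 1:
				stops[len(stops)-1]() // idle queue: retire the newest processor
				stops = stops[:len(stops)-1]
			}
		}
	}
}
```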
Rewrite of the source adapter to spawn concurrent message processors instead of running a single sequential loop. The number of receivers, senders, and deleters is based on the number of available CPU cores.
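A minimal sketch of that layout, with hypothetical names and signatures rather than the adapter's real implementation: one pool each of receivers, senders, and deleters sized from runtime.NumCPU(), connected by buffered channels.

```go
package adapter

import (
	"context"
	"runtime"
	"sync"
)

type sqsMessage struct{ receiptHandle, body string }

// runPipeline spawns one receiver, one sender, and one deleter per CPU core,
// wiring them together with buffered channels. receive, send, and del are
// placeholders for the SQS and CloudEvents calls.
func runPipeline(ctx context.Context, receive func(context.Context) []sqsMessage,
	send func(sqsMessage) error, del func(sqsMessage) error) {

	n := runtime.NumCPU()
	toSend := make(chan sqsMessage, n*10)   // receiver -> sender buffer
	toDelete := make(chan sqsMessage, n*10) // sender -> deleter buffer

	var receivers, senders, deleters sync.WaitGroup

	for i := 0; i < n; i++ {
		receivers.Add(1)
		go func() { // receiver: long-polls SQS and fans messages out
			defer receivers.Done()
			for ctx.Err() == nil {
				for _, m := range receive(ctx) {
					toSend <- m
				}
			}
		}()

		senders.Add(1)
		go func() { // sender: turns messages into CloudEvents
			defer senders.Done()
			for m := range toSend {
				if err := send(m); err == nil {
					toDelete <- m // only delete successfully sent messages
				}
			}
		}()

		deleters.Add(1)
		go func() { // deleter: removes acknowledged messages from the queue
			defer deleters.Done()
			for m := range toDelete {
				_ = del(m)
			}
		}()
	}

	receivers.Wait()
	close(toSend)
	senders.Wait()
	close(toDelete)
	deleters.Wait()
}
```

Sizing every pool from the core count keeps the fan-out bounded, which lines up with the load-test observation that pushing far beyond a few workers per thread makes throughput unstable.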
Reopens #225, which was accidentally merged (I reverted it).
Closes #222