EXECUTION_SWEEP never completing #21471

Closed
BojanZelic opened this issue Jan 30, 2025 · 4 comments
Labels: area/job-services, kind/question, more-info-needed (the issue author needs to provide more details and context)

@BojanZelic

Is there a way to debug why EXECUTION_SWEEP never completes?

[screenshot attached]

I've tried the steps in #21283 (comment) to flush the Redis DB, but nothing ever completes. The EXECUTION_SWEEP job never finishes. Are there any steps for debugging why this job never completes?

I don't see any relevant log entries for sweep that indicate any errors:

kubectl logs -n harbor harbor-jobservice-fd7666b57-ctwkl  | grep -i sweep
2025-01-28T20:46:32Z [INFO] [/jobservice/worker/cworker/c_worker.go:445]: Register job *task.SweepJob with name EXECUTION_SWEEP
2025-01-28T20:46:32Z [INFO] [/pkg/jobmonitor/redis.go:165]: unpause job EXECUTION_SWEEP
2025-01-28T20:46:32Z [INFO] [/jobservice/logger/sweeper_controller.go:121]: 1240 outdated log entries are sweepped by sweeper *sweeper.DBSweeper
@Vad1mo (Member) commented Feb 4, 2025

Change the state in the DB.
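
A minimal sketch of what that could look like, assuming the stuck rows are the 'Running' EXECUTION_SWEEP executions in the registry database (illustrative only; back up the database before changing state by hand):

docker exec -it harbor-db psql -U postgres -d registry
-- Mark the stuck sweep executions as errored so new sweeps can be scheduled again.
update execution set status = 'Error' where vendor_type = 'EXECUTION_SWEEP' and status = 'Running';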

@stonezdj (Contributor)

You can check it with the following steps.

  1. Check whether any worker is processing a task related to EXECUTION_SWEEP by checking the log button.
  2. Select any running execution related to EXECUTION_SWEEP and look up its tasks:

docker exec -it harbor-db bash
psql -U postgres -d registry
select * from execution where vendor_type = 'EXECUTION_SWEEP' and status = 'Running' order by start_time desc limit 10;
select job_id from task where execution_id = <execution_id>;

Then check the logs in /data/job_log/<job_id>.log.
  3. Because the queue latency is 52 hours, the previous execution may already have been cleaned up and no longer be in the DB; in that case, just grep for EXECUTION_SWEEP under /data/job_log to find the latest execution sweep job log, for example as sketched below.
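
A minimal sketch of that search, assuming job logs are written to /data/job_log with a .log extension:

# Find job logs that mention EXECUTION_SWEEP and list them newest first.
grep -l EXECUTION_SWEEP /data/job_log/*.log | xargs ls -t
# Then inspect the most recent one:
tail -n 100 /data/job_log/<job_id>.log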

@reasonerjt added the area/job-services and more-info-needed labels on Feb 10, 2025
@reasonerjt (Contributor)

Please let us know what version of Harbor you are using.

@BojanZelic (Author)

I was on the latest version, v2.12.2. Anyway, the problem seems to be fixed, thanks for the suggestions. I cleared out the existing executions and tasks:

 delete from task where execution_id IN (select id from execution where vendor_type = 'EXECUTION_SWEEP' and status = 'Running');
 delete from execution where vendor_type = 'EXECUTION_SWEEP' and status = 'Running';

That, in combination with clearing Redis, seems to have resolved it.
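
For reference, a rough sketch of the Redis clearing step on a Kubernetes install, assuming the chart's internal Redis (the pod name and the jobservice DB index depend on your deployment, so confirm both before flushing, since this drops all pending jobs in that DB):

kubectl exec -n harbor harbor-redis-0 -- redis-cli -n <jobservice_db_index> FLUSHDB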
