Is your feature request related to a problem? Please describe.
From a Discord discussion.
Mutation and execution in a Stage always happen on the machine LibAFL runs on. For very slow targets, it may be beneficial to offload the actual executions to stateless workers.
Describe the solution you'd like
We could add a `RemoteWorkerLauncherStage` that builds `n` work packages, each including the next scheduled corpus entry, all metadata for this `Testcase`, the current feedback state, as well as additional random corpus entries for splicing.
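A rough sketch of what such a work package could look like (the struct name, field names, and the use of `serde` are assumptions for illustration, not an existing LibAFL type):

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical payload built by a `RemoteWorkerLauncherStage`.
/// Inputs and state are kept as plain bytes so whatever serialization
/// LibAFL already uses could be plugged in.
#[derive(Serialize, Deserialize)]
pub struct WorkPackage {
    /// The next scheduled corpus entry (raw input bytes).
    pub input: Vec<u8>,
    /// Serialized metadata for this testcase.
    pub testcase_metadata: Vec<u8>,
    /// Serialized feedback state of the main node.
    pub feedback_state: Vec<u8>,
    /// Additional random corpus entries for splicing.
    pub splice_inputs: Vec<Vec<u8>>,
}
```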
The work package should then be posted to Redis or some other queue database (much like Celery, or whatever the Rust equivalent is).
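With the `redis` and `serde_json` crates (both assumptions, any queue and serialization format would do), posting could look roughly like this; the key name `libafl:pending` is made up for the sketch:

```rust
use redis::Commands;

/// Push one serialized work package onto a Redis list.
fn post_work_package(con: &mut redis::Connection, pkg: &WorkPackage) -> anyhow::Result<()> {
    let payload = serde_json::to_string(pkg)?;
    // LPUSH here; workers claim packages from the other end.
    let _: () = con.lpush("libafl:pending", payload)?;
    Ok(())
}
```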
Each worker node grabs a work package, marks it as being worked on, creates a new Corpus from it and fuzzes this package for one iteration, then eventually posts the results.
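On the worker side, one step could look roughly like the following, where RPOPLPUSH doubles as the "being worked on" marker and the fuzzing itself is left as a placeholder (all key names are hypothetical):

```rust
use redis::Commands;

/// One worker step: claim a package, fuzz it, post results, release the claim.
fn worker_step(con: &mut redis::Connection) -> anyhow::Result<()> {
    // Atomically move the package into an "in progress" list to mark it as claimed.
    let claimed: Option<String> = con.rpoplpush("libafl:pending", "libafl:in_progress")?;
    let Some(payload) = claimed else { return Ok(()) };
    let pkg: WorkPackage = serde_json::from_str(&payload)?;

    // Placeholder: a real worker would build a fresh Corpus from `pkg` and run
    // a LibAFL fuzzer for one iteration; here we just echo the input back so
    // the sketch compiles.
    let results: Vec<Vec<u8>> = vec![pkg.input.clone()];

    // Post the results and drop the claim.
    let _: () = con.lpush("libafl:results", serde_json::to_string(&results)?)?;
    let _: isize = con.lrem("libafl:in_progress", 1, payload)?;
    Ok(())
}
```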
Inside the main node, the `RemoteWorkerCollectorStage` will (when called) look up new results and add any new Metadata and Corpus entries to its own shared queue.
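A collector sketch for the main node could then be as simple as draining the results list; again, the key names and the wiring into a `RemoteWorkerCollectorStage` are assumptions:

```rust
use redis::Commands;

/// Drain finished results; the main node would re-evaluate these inputs and
/// add the interesting ones (plus their metadata) to its own corpus/queue.
fn collect_results(con: &mut redis::Connection) -> anyhow::Result<Vec<Vec<u8>>> {
    let mut new_inputs = Vec::new();
    loop {
        // Non-blocking pop; stop once the results list is empty.
        let payload: Option<String> = con.rpop("libafl:results", None)?;
        let Some(payload) = payload else { break };
        let inputs: Vec<Vec<u8>> = serde_json::from_str(&payload)?;
        new_inputs.extend(inputs);
    }
    Ok(new_inputs)
}
```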
Can I discuss doubts about this enhancement on the Discord? I would like to work on it, and I'd also like to explore how the stateless launcher will be implemented, since what we provide now is stateful fuzzing, and how to differentiate between slow and fast targets.