Give instructions to the agent, so that it has direction about completing an objective #10
Replies: 2 comments
-
This can be given to a "task analyzer" agent whose prompt includes the original goals and the available tools, along with instructions to analyze both and break the goal down into subtasks. We can request that the output conform to a certain format and then use that output to queue and run the subtasks. Example from a test I ran on a js-based agent:

```js
async generateSubTaskPrompt() {
  const headers = this.promptProvider.header.system
  const { goals, tools } = headers
  const subTaskPromptMessages = []
  const subTaskSystemPrompt = [
    'You are part of a larger system of self-instructing LLM agents tasked with a main goal to achieve.',
    'Your role is that of a task analyzer agent that breaks down goals into subtasks for other agents.',
    'Each subtask can only have one action/tool/command.'
  ]
  const subTaskSystemMsg = {
    role: 'system',
    // tools(): list of tools available (command, args); goals(): list of goals set (array of strings)
    content: `${subTaskSystemPrompt.join('\n')}\n${tools()}\n${goals()}`
  }
  const typedef = `/**
 * @typedef {Object} SubTaskItem
 * @property {string} taskId - Unique identifier for the task
 * @property {string} action - Command name
 * @property {Object} args - Arguments for the command (key-value pairs)
 * @property {string} reason - The reason for this subtask
 * @property {string} request - The expected result
 * @property {Array<string|null>} dependencies - Array of taskIds that need to be completed first, if any
 */`
  const subTaskUserMsg = {
    role: 'user',
    content: `Respond only with a JSON array of subtask objects with the following type definition with no additional commentary:\n${typedef}`
  }
  subTaskPromptMessages.push(subTaskSystemMsg)
  subTaskPromptMessages.push(subTaskUserMsg)
  return subTaskPromptMessages
}
```

Given the following list of goals...

```js
const goals = [
  'Find out what github projects and other research has been done in LLM token management.',
  'Find out what methods those projects use to manage token usage and context fidelity with LLM generative AI models.',
  'Browse any relevant page to locate sample code.',
  'Generate a report on the findings of the previous two goals to send to the user in markdown.',
  'Generate sample code to send to the user.'
]
```

...with the temperature set to 0, the model returned:

```json
[
  {
    "taskId": "1",
    "action": "webSearchWorker",
    "args": {
      "query": "LLM token management github projects research",
      "numResults": 10
    },
    "reason": "To find out what github projects and other research has been done in LLM token management",
    "request": "JSON array of relevant github projects and research",
    "dependencies": []
  },
  {
    "taskId": "2",
    "action": "webSearchWorker",
    "args": {
      "query": "LLM token usage and context fidelity methods in generative AI models",
      "numResults": 10
    },
    "reason": "To find out what methods do projects use to manage token usage and context fidelity with LLM generative AI models",
    "request": "JSON array of relevant methods used in projects",
    "dependencies": []
  },
  {
    "taskId": "3",
    "action": "webSearchWorker",
    "args": {
      "query": "LLM generative AI models sample code",
      "numResults": 10
    },
    "reason": "To browse any relevant page to locate sample code",
    "request": "JSON array of relevant sample code",
    "dependencies": []
  },
  {
    "taskId": "4",
    "action": "sendReport",
    "args": {
      "text": "Markdown report on the findings of the previous two goals"
    },
    "reason": "To generate a report on the findings of the previous two goals to send to the user in markdown",
    "request": "Confirmation that the report has been sent",
    "dependencies": [
      "1",
      "2"
    ]
  },
  {
    "taskId": "5",
    "action": "webScraperWorker",
    "args": {
      "url": "https://github.com/LLNL/LLM",
      "question": "What are the token management methods used in LLM?"
    },
    "reason": "To generate sample code to send to the user",
    "request": "JSON array of relevant sample code",
    "dependencies": [
      "1",
      "2",
      "3"
    ]
  }
]
```

Then it's just a matter of having a function take that output and use it as input to either run tools/commands directly or spawn a dedicated autonomous agent to complete specific subtasks.
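A minimal sketch of such a runner (the `workers` map and its worker functions are hypothetical placeholders, not part of the agent above): it repeatedly executes every subtask whose `dependencies` are already satisfied, so tasks 1–3 would run before tasks 4 and 5.

```js
// Run subtasks in dependency order. `subTasks` is the parsed JSON array
// from the task analyzer; `workers` maps an action name to an async
// function taking the task's args. Worker implementations are hypothetical.
async function runSubTasks(subTasks, workers) {
  const results = {}            // taskId -> worker result
  const pending = [...subTasks]
  while (pending.length > 0) {
    // Pick every task whose dependencies have all completed
    const ready = pending.filter(t =>
      t.dependencies.every(dep => dep === null || dep in results)
    )
    if (ready.length === 0) {
      throw new Error('Circular or unsatisfiable dependencies')
    }
    for (const task of ready) {
      results[task.taskId] = await workers[task.action](task.args)
      pending.splice(pending.indexOf(task), 1)
    }
  }
  return results
}
```

The same loop could just as easily spawn a dedicated agent per ready task instead of calling a worker function directly.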
-
Where in the code are you setting the temperature to 0? I can't work out what the default is, but it seems to be something like 0.7. I've changed a couple of lines to 0.4, with the thinking that a more deterministic setting would be better than a creative one, to keep the agents from going off the rails. Interesting that you say you set the temperature to 0 – is that just for testing this specifically, or is that your usual setting for the app?
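For reference, temperature is usually sent per request in the chat API payload rather than set globally; a hedged sketch of building such a payload (the model name and this helper are illustrative assumptions, not this project's actual wiring):

```js
// Build a chat completion request body. Temperature 0 is the most
// deterministic; values toward 1 are more varied/creative.
// The helper name and default model are hypothetical examples.
function buildChatRequest(messages, temperature = 0) {
  return {
    model: 'gpt-3.5-turbo',
    messages,
    temperature
  }
}
```

Grepping for where a body like this is assembled should reveal which line controls the default.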
-
Currently, agents have to think through every objective from first principles. If we gave users a way to add instructions about solving a task, the agent could complete the objective much more effectively, with fewer iterations.
It's as if the objective is: go from A to B.
And the instruction is like asking someone for directions.
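One simple way to support this would be to fold user-supplied directions into the system prompt alongside the objective. A hedged sketch, assuming a chat-style message shape (the helper name is hypothetical):

```js
// Prepend user-provided directions to the system prompt so the agent
// doesn't have to derive an approach from first principles.
// Hypothetical helper; message shape follows the chat API convention.
function buildSystemMessage(goal, userInstructions = []) {
  const lines = [`Main objective: ${goal}`]
  if (userInstructions.length > 0) {
    lines.push('User-provided directions:')
    lines.push(...userInstructions.map(i => `- ${i}`))
  }
  return { role: 'system', content: lines.join('\n') }
}
```

With no directions supplied, the prompt degrades gracefully to just the objective, so existing behavior is unchanged.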