
Creative Writing Assistant: Working with Agents using Promptflow (Python Implementation)

Open in GitHub Codespaces Open in Dev Containers

This sample demonstrates how to create and work with AI agents driven by Azure OpenAI. It includes a Flask app that takes a topic and instructions from a user, then calls a research agent that uses the Bing Search API to research the topic, a product agent that uses Azure AI Search to do a semantic similarity search for related products from a vector store, a writer agent to combine the research and product information into a helpful article, and an editor agent to refine the article that is finally presented to the user.


Features

This project template provides the following features:

Architecture Diagram

Azure account requirements

IMPORTANT: In order to deploy and run this example, you'll need:

Opening the project

You have a few options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set the project up locally.

GitHub Codespaces

  1. You can run this template virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

    Open in GitHub Codespaces

  2. Open a terminal window.

  3. Sign in to your Azure account:

    azd auth login
  4. Provision the resources and deploy the code:

    azd up

    This project uses gpt-35-turbo-0613 and gpt-4-1106-Preview which may not be available in all Azure regions. Check for up-to-date region availability and select a region during deployment accordingly. For this project we recommend East US 2.

  5. Install the necessary Python packages:

    cd src/api
    pip install -r requirements.txt
    

Once the above steps are completed, you can jump straight to testing the sample.

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project:

    Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

  4. Install required packages:

    cd src/api
    pip install -r requirements.txt

    Once you've completed these steps, jump to deployment.

Local environment

Prerequisites

Note for Windows users: our provisioning hooks are currently all shell scripts. If you are not using a container to run this sample, we recommend using Git Bash so that provisioning works correctly while we work on updates.

Initializing the project

  1. Create a new folder and switch to it in the terminal, then run this command to download the project code:

    azd init -t agent-openai-python-prompty

    Note that this command will initialize a git repository, so you do not need to clone this repository.

  2. Install required packages:

    cd src/api
    pip install -r requirements.txt

Deployment

Once you've opened the project in Codespaces, Dev Containers, or locally, you can deploy it to Azure.

  1. Sign in to your Azure account:

    azd auth login

    If you have any issues with that command, you may also want to try azd auth login --use-device-code.

    This will create a folder under .azure/ in your project to store the configuration for this deployment. You may have multiple azd environments if desired.

  2. Provision the resources and deploy the code:

    azd up

    This project uses gpt-35-turbo-0613 and gpt-4-1106-Preview which may not be available in all Azure regions. Check for up-to-date region availability and select a region during deployment accordingly. We recommend using East US 2 for this project.

    After running azd up, you may be asked the following question during GitHub setup:

    Do you want to configure a GitHub action to automatically deploy this repo to Azure when you push code changes?
    (Y/n)

    You should respond with N, as this step is not necessary and takes some time to set up.

Testing the sample

This sample repository contains an agents folder that includes subfolders for each agent. Each agent folder contains a prompty file, where the agent's prompt is defined, and a Python file with the code used to run it. Exploring these files will help you understand what each agent is doing. The agents folder also contains an orchestrator.py file that can be used to run the entire flow and create an article.
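
If you want a mental model of what orchestrator.py does before reading the code, the sketch below chains stand-in agent functions in the order the app uses them. The function names, signatures, and stub bodies are assumptions for illustration only; the real agents are driven by the prompty files described above.

# Illustrative sketch only: these stand-ins mimic the shape of the flow in
# api/agents/; names and signatures are assumptions, not the repo's actual API.

def research_agent(context: str) -> str:
    # In the sample, this step grounds the topic using the Bing Search API.
    return f"research notes about: {context}"

def product_agent(context: str) -> str:
    # In the sample, this step runs a semantic similarity search in Azure AI Search.
    return f"related products for: {context}"

def writer_agent(context: str, instructions: str, research: str, products: str) -> str:
    # In the sample, this step combines the research and product data into a draft.
    return f"draft article on {context} ({instructions})\n{research}\n{products}"

def editor_agent(draft: str) -> str:
    # In the sample, this step refines the draft before it is shown to the user.
    return draft + "\n(edited)"

def write_article(context: str, instructions: str) -> str:
    research = research_agent(context)
    products = product_agent(context)
    draft = writer_agent(context, instructions, research, products)
    return editor_agent(draft)

if __name__ == "__main__":
    print(write_article("camping in Alaska", "focus on the gear needed"))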

To test the sample:

  1. Populate the Azure AI Search vector store index with product data.

    • Change into the src/api/data folder:
    cd src/api/data
    
    • Install the Jupyter extension.
    • Once the extension has been installed, open the create-azure-search.ipynb notebook. We will use this notebook to upload a catalogue of products to the Azure AI Search vector store. Click Select Kernel in the top right-hand corner of the notebook, choose Python environment, and then select the recommended Python version.
    • Run all of the cells in the notebook. If this process was successful, you should see "uploading 20 documents to index contoso-products". You're now ready to run the full prompt flow. (A rough sketch of what this upload does appears after this list.)
  2. Run the example web app locally using a Flask server.

    First navigate to the src/api folder

    cd ..
    

    Run the Flask webserver

    flask --debug --app api.app:app run --port 8080
    

    Then in a new terminal, navigate to the web folder

    cd src/web
    

    First install node packages:

    npm install
    

    Then run the web app with a local dev web server:

    npm run dev
    

    This will launch the app, where you can use example context and instructions to get started. On the 'Creative Team' page you can examine the output of each agent by clicking on it. The app should look like this:

    The getting started tab to send your instructions and context to the prompt:

    getting started

    The creative team tab that lets you follow and understand the agents' workflow:

    creative team

    The document tab that displays the article that was created:

    generated article

    Change the instructions and context to create an article of your choice.

  3. Testing directly with Python using the orchestrator logic

    To run the sample using just the orchestrator logic, use the following command:

    cd ..
    python -m api.agents.orchestrator
    
    

    You can also pass context and instructions directly to the Flask get_article API by opening the following URL in your browser (a Python version of this request is sketched just after this list):

    http://127.0.0.1:8080/get_article?context=Write an article about camping in alaska&instructions=find specifics about what type of gear they would need and explain in detail
    
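As an alternative to pasting that URL into a browser, you can call the same endpoint from Python. This is only an illustration: it assumes the Flask server from step 2 is still running locally, and it uses the requests library, which is not otherwise required by the sample.

# Call the locally running Flask API; the requests package is an extra
# dependency used only for this illustration.
import requests

response = requests.get(
    "http://127.0.0.1:8080/get_article",
    params={
        "context": "Write an article about camping in alaska",
        "instructions": "find specifics about what type of gear they would need and explain in detail",
    },
    timeout=300,  # agent runs can take a while
)
print(response.text)
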
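For reference, the upload performed by create-azure-search.ipynb in step 1 boils down to pushing product documents into the contoso-products index with the azure-search-documents SDK, roughly as sketched below. The endpoint, key, and document fields are placeholders, and the real notebook also generates the embeddings required for the vector field, so treat this as an outline rather than the notebook's actual code.

# Rough outline of the indexing step; endpoint, key and document fields are
# placeholder assumptions. The notebook also adds embeddings for the vector field.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="contoso-products",
    credential=AzureKeyCredential("<your-search-admin-key>"),     # placeholder
)

documents = [
    {"id": "1", "title": "TrailMaster X4 Tent", "content": "A rugged four-person tent."},
    {"id": "2", "title": "Adventurer Pro Backpack", "content": "A 40L hiking backpack."},
]

result = client.upload_documents(documents=documents)
print(f"uploading {len(result)} documents to index contoso-products")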

Evaluating prompt flow results

To understand how well our prompt flow performs against defined metrics such as groundedness and coherence, we can evaluate the results. To evaluate the prompt flow, we need to be able to compare it to what we see as "good results" in order to understand how well it aligns with our expectations.

We may be able to evaluate the flow manually (e.g., using Azure AI Studio), but for now we'll evaluate it by running the prompt flow with gpt-4 and comparing our performance to the results obtained there. To do this, follow the instructions and steps in the notebook evaluate-chat-prompt-flow.ipynb under the eval folder.

You can also view the evaluation metrics by running the following commands.

Run evaluation:

cd evaluate
python evaluate.py
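
Conceptually, a gpt-4-based evaluation like this is an LLM-as-judge call: the model is shown the generated article plus its supporting context and asked to score a metric. The sketch below illustrates that idea for groundedness; the deployment name, environment variables, prompt wording, and 1-5 scale are assumptions for illustration, not the repository's actual evaluators.

# Conceptual LLM-as-judge sketch; deployment name, environment variables,
# prompt wording and scale are illustrative assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

def score_groundedness(article: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # name of your gpt-4 deployment
        messages=[
            {"role": "system", "content": "Rate from 1 to 5 how well the ANSWER is grounded in the CONTEXT. Reply with the number only."},
            {"role": "user", "content": f"CONTEXT:\n{context}\n\nANSWER:\n{article}"},
        ],
    )
    return response.choices[0].message.content

print(score_groundedness("Alaska camping requires a four-season tent.", "Research notes: ..."))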

Costs

Pricing may vary per region and usage. Exact costs cannot be estimated. You may try the Azure pricing calculator for the resources below:

  • Azure Container Apps: Pay-as-you-go tier. Costs based on vCPU and memory used. Pricing
  • Azure OpenAI: Standard tier, GPT and Ada models. Pricing per 1K tokens used, and at least 1K tokens are used per question. Pricing
  • Azure Monitor: Pay-as-you-go tier. Costs based on data ingested. Pricing

Security Guidelines

This template uses built-in Managed Identity to eliminate the need for developers to manage credentials. Applications can use managed identities to obtain Microsoft Entra tokens without having to manage any credentials. We also use Key Vault, specifically for Bing Search, since Managed Identity is currently not implemented for it. Additionally, we have added a GitHub Action tool that scans the infrastructure-as-code files and generates a report containing any detected issues. To ensure best practices in your repo, we recommend that anyone creating solutions based on our templates enable GitHub secret scanning in their repos.
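
For context, obtaining a Microsoft Entra token through a managed identity instead of handling an API key looks roughly like the sketch below in application code, using DefaultAzureCredential from azure-identity with the OpenAI client. The endpoint and API version are placeholders; this illustrates the pattern rather than reproducing the template's exact code.

# Minimal sketch of keyless auth via a managed identity; endpoint and
# api_version are placeholders, not values from this template.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),  # resolves to the managed identity when running in Azure
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,  # no API key to store or rotate
    api_version="2024-02-15-preview",
)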

Resources

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct.

Resources:

For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
