On this page, you will find some options to configure your deployment:
- Configuring Language settings
- Configuring AOAI content filters
- Setting Custom Names for Resources
- Applying Tags to All Resources
- Bringing Your Own Resources
- Accessing Data Ingest function using AI Search Managed Identity
- Extending Enterprise RAG components
Note on Environment Variables
Most of the customizations described on this page involve the use of environment variables. Therefore, it's worth noting the following about using azd environment variables:
- By using the `azd env` command to set environment variables, you can specify resource names for each environment.
- If you work across multiple devices, you can take advantage of `azd`'s support for remote environments. This feature allows you to save your environment settings in Azure Storage and restore them on any device.
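For example, a minimal sketch of working with per-environment settings via `azd env` (the environment and variable names below are illustrative):

```sh
# Create a new environment and make it the active one
azd env new dev
azd env select dev

# Set a value that applies only to the active environment
azd env set MY_SETTING some-value

# List the environments defined for this project
azd env list
```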
Enterprise RAG leverages Large Language Models (LLMs) and supports multiple languages by default. However, it provides parameters to fine-tune the language settings across its three main components. For detailed instructions, refer to Configuring Language Settings.
Provisioning an Azure OpenAI resource with `azd` automatically creates a content filtering profile with a default severity threshold (Medium) for all content harm categories (Hate, Violence, Sexual, Self-Harm) and assigns it to the provisioned Azure OpenAI model through a post-deployment script. If you wish to customize these settings to be more or less restrictive, please refer to the Customize Content Filtering Policies page.
By default, `azd` automatically generates a unique name for each resource. The unique name is created based on the azd environment name, the subscription name, and the location. However, you can also manually define the name for each resource, as described in Customizing resource names.
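As a minimal sketch, assuming you want to override the AI Search service name (the variable name below is hypothetical; the Customizing resource names page lists the variables the template actually reads):

```sh
# Hypothetical variable name, for illustration only; check the Customizing
# resource names page for the exact variables supported by the Bicep template.
azd env set AZURE_SEARCH_SERVICE_NAME my-search-service
azd provision
```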
The main.parameters.json file contains an empty object where you can define tags to apply to all your resources before you run `azd up` or `azd provision`. Look for the entry:
"deploymentTags":{
"value": {}
}
Define your tags as `"key": "value"` pairs, for example:
"deploymentTags":{
"value": {
"business-unit": "foo",
"cost-center": "bar"
}
}
While you are defining your deployment tags, you can create your own environment mappings (in case you want to set different tag values per environment). For example:
Creating your own azd-env mapping:
"deploymentTags":{
"value": {
"business-unit": "${MY_DEPLOYMENT_BUSINESS_UNIT}",
"cost-center": "${COST_CENTER}"
}
}
Then, define the values for your environment:
```sh
azd env set MY_DEPLOYMENT_BUSINESS_UNIT foo
azd env set COST_CENTER bar
```
Note: Since the input parameter is an object, `azd` won't prompt the user for a value if the environment variable is not set (as it does when the input argument is a string). Missing values are resolved and applied as empty strings.
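You can verify which values are currently set for the active environment before provisioning:

```sh
azd env get-values
```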
In some cases, you may want to use one or more pre-existing resources in your subscription instead of creating new ones. Our Bicep template allows you to do this. For detailed instructions on how this can be achieved, please take a look at the Bring Your Own Resources page.
In the AI Search indexing process, a skillset incorporates a custom web app skill. This skill is powered by the data ingestion Azure Function, which is responsible for chunking the data. By default, the AI Search service establishes a connection with the Azure Function via an API key.
However, for enhanced security and simplified credentials management, you have the option to use a managed identity for this connection. To switch to using a managed identity, simply set the environment variable `AZURE_SEARCH_USE_MIS` to `true`:
```sh
azd env set AZURE_SEARCH_USE_MIS true
```
After setting this variable, you need to deploy again using the `azd up` command:

```sh
azd up
```
Important: In order for the data ingestion function to be accessed with a managed identity, it needs to be configured to use Microsoft Entra Sign-in, as indicated in this link.
Azd automatically provisions the infrastructure and deploys the three components. However, you may want to change and customize parts of the code to meet a specific requirement.
A simple customization within the orchestrator component involves updating the bot description. This adjustment can help in more accurately defining the bot's scope for the orchestrator.
That said, if you want to manually deploy and customize the components, you can follow the deployment instructions for each component:
1) Data Ingestion Component
Fork or copy the original Data ingestion repo template to create your data ingestion git repo and follow the steps in its What if I want to redeploy just the ingestion component? section to learn how to redeploy the component.
If you want to run the component locally, which is useful for testing your modifications before deploying, check out the Running Locally with VS Code section in the component's repository.
2) Orchestrator Component
Fork or copy the original Orchestrator repo template to create your orchestrator git repo and follow the steps in its Cloud Deployment section to learn how to redeploy the component.
If you want to run the component locally, which is useful for testing your modifications before deploying, check out the Running Locally with VS Code section in the component's repository.
3) Front-end Component
Fork or copy the original App Front-end repo template to create your own frontend git repo and follow the steps in its Deploy (quickstart) section to learn how to redeploy the component.
If you want to run the component locally, which is useful for testing your modifications before deploying, check out the Test locally section in the component's repository.
(Optional) Integrate your custom component repos into the main gpt-rag
Customizing the components of your project allows for a tailored experience, but the gpt-rag solution repository won't automatically detect your custom component repos.
Integrating your custom component repositories with the gpt-rag project enhances workflow efficiency, allowing you to directly use azd commands like `azd up` and `azd deploy` within the gpt-rag repository.
To achieve this integration, simply follow these steps:
1. Create Your Own `gpt-rag` Repository: Start by forking or copying the original `gpt-rag` repository. This will be the foundation for integrating your custom components.
2. Point to Your Custom Component Repositories:
   - Navigate to the `scripts` folder within your newly created `gpt-rag` repository.
   - Open and edit the `fetchComponents.ps1` and `fetchComponents.sh` scripts.
   - Adjust these scripts to reference your custom component repositories, replacing the original repository links (see the sketch after this list).
3. Initialize Your Customized Setup:
   - With your `gpt-rag` repository scripts pointing to your component repositories, initialize the environment.
   - Run `azd init -t <owner>/<repository>` using your own GitHub org and repository.
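As a minimal sketch of what the edited `fetchComponents.sh` might look like, assuming the script simply clones each component repository (the organization, repository names, and target folders below are illustrative; keep the structure the original script already uses):

```sh
#!/bin/bash
# Illustrative only: replace the original component repository URLs with your
# own forks. Keep whatever target paths the original fetchComponents.sh expects.
git clone https://github.com/<your-org>/gpt-rag-ingestion
git clone https://github.com/<your-org>/gpt-rag-orchestrator
git clone https://github.com/<your-org>/gpt-rag-frontend
```

Remember to make the same change in `fetchComponents.ps1` if you or your team provision from Windows.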