diff --git a/README.md b/README.md
index 9d8595b65adb..501429e093b4 100644
--- a/README.md
+++ b/README.md
@@ -19,20 +19,25 @@ https://user-images.githubusercontent.com/22963551/228855501-2f5777cf-755b-4407-

💖 Help Fund Auto-GPT's Development 💖

-If you can spare a coffee, you can help to cover the API costs of developing Auto-GPT and help push the boundaries of fully autonomous AI!
-A full day of development can easily cost as much as $20 in API costs, which for a free project is quite limiting.
+If you can spare a coffee, you can help to cover the costs of developing Auto-GPT and help push the boundaries of fully autonomous AI! Your support is greatly appreciated.
+Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below, click here.

+
+

Enterprise Sponsors

- Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below click here.
+InfluxData    Roost.AI    NucleiAI    AlgohashFe    

-

Individual Sponsors

+

Monthly Sponsors

robinicus  prompthero  crizzler  tob-le-rone  FSTatSBS  toverly1  ddtarazona  Nalhos  Kazamario  pingbotan  indoor47  AuroraHolding  kreativai  hunteraraujo  Explorergt92  judegomila   thepok   SpacingLily  merwanehamadi  m  zkonduit  maxxflyer  tekelsey  digisomni  nocodeclarity  tjarmain -

+avy-ai  garythebat  Web3Capital  throb  shawnharmsen  MediConCenHK  Mobivs  GalaxyVideoAgency  quintendf  RThaweewat  SwftCoins  MBassi91  Odin519Tomas  Dradstone  lucas-chu  joaomdmoura  comet-ml  sultanmeghji  Brodie0  fabrietech  omphos  ZERO-A-ONE  jazgarewal  vkozacek  ternary5  josephcmiller2  ikarosai  DailyBotHQ  belharethsami  DataMetis  st617  cfarquhar  ColinConwell  Pythagora-io  dwcar49us  KiaArmani  lmaugustin  MetaPath01  scryptedinc  nicoguyon  refinery1  johnculkin  Cameron-Fulton  mathewhawkins  Mr-Bishop42  rejunity  caitlynmeeks  allenstecat  Daniel1357  rapidstartup  sunchongren  marv-technology  TheStoneMX  concreit  AryaXAI  abhinav-pandey29  tob-le-rone  angiaou  rickscode  RealChrisSean  thisisjeffchen  tommygeee  CrypteorCapital  kMag410  ChrisDMT  jd3655  rocks6  webbcolton  projectonegames  jun784  fruition  txtr99  

+
+
+
 ## Table of Contents
@@ -48,20 +53,19 @@ Your support is greatly appreciated
   - [Docker](#docker)
   - [Command Line Arguments](#command-line-arguments)
 - [🗣️ Speech Mode](#️-speech-mode)
-  - [List of IDs with names from eleven labs, you can use the name or ID:](#list-of-ids-with-names-from-eleven-labs-you-can-use-the-name-or-id)
-  - [OpenAI API Keys Configuration](#openai-api-keys-configuration)
 - [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
   - [Setting up environment variables](#setting-up-environment-variables)
 - [Memory Backend Setup](#memory-backend-setup)
   - [Redis Setup](#redis-setup)
   - [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
   - [Milvus Setup](#milvus-setup)
+  - [Setting up environment variables](#setting-up-environment-variables-1)
+  - [Setting Your Cache Type](#setting-your-cache-type)
 - [View Memory Usage](#view-memory-usage)
 - [🧠 Memory pre-seeding](#-memory-pre-seeding)
 - [💀 Continuous Mode ⚠️](#-continuous-mode-️)
 - [GPT3.5 ONLY Mode](#gpt35-only-mode)
 - [🖼 Image Generation](#-image-generation)
-  - [Selenium](#selenium)
 - [⚠️ Limitations](#️-limitations)
 - [🛡 Disclaimer](#-disclaimer)
 - [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter)
@@ -263,18 +267,19 @@ export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
 export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
 ```
-## Memory Backend Setup
+## Setting Your Cache Type
+
+By default, Auto-GPT is going to use LocalCache instead of Redis or Pinecone.
-By default, Auto-GPT is going to use LocalCache.
 To switch to either, change the `MEMORY_BACKEND` env variable to the value that you want:
-- `local` (default) uses a local JSON cache file
-- `pinecone` uses the Pinecone.io account you configured in your ENV settings
-- `redis` will use the redis cache that you configured
-- `milvus` will use the milvus that you configured
+* `local` (default) uses a local JSON cache file
+* `pinecone` uses the Pinecone.io account you configured in your ENV settings
+* `redis` will use the Redis cache that you configured
+* `milvus` will use the Milvus cache that you configured
+* `weaviate` will use the Weaviate cache that you configured

 ### Redis Setup
-
 > _**CAUTION**_ \
 This is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing Redis to the internet without a password, or at all.

 1. Install Docker Desktop
@@ -336,8 +341,20 @@ export PINECONE_ENV=""     # e.g: "us-east4-gcp"
 export MEMORY_BACKEND="pinecone"
 ```

-## Weaviate Setup
+### Milvus Setup
+
+[Milvus](https://milvus.io/) is an open-source, highly scalable vector database that stores huge amounts of vector-based memory and provides fast relevant search.
+
+- Set up a Milvus database. Keep your pymilvus version and Milvus version the same to avoid compatibility issues.
+  - Set it up with open-source [Install Milvus](https://milvus.io/docs/install_standalone-operator.md)
+  - or set it up with [Zilliz Cloud](https://zilliz.com/cloud)
+- Set `MILVUS_ADDR` in `.env` to your Milvus address, `host:port`.
+- Set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
+- Optional:
+  - Set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name; `autogpt` is the default name.
+
+### Weaviate Setup

 [Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and vector embeddings from ML models and scales seamlessly to billions of data objects.
 [An instance of Weaviate can be created locally (using Docker), on Kubernetes, or using Weaviate Cloud Services](https://weaviate.io/developers/weaviate/quickstart).
 Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded) is supported, which allows the Auto-GPT process itself to start a Weaviate instance. To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `pip install "weaviate-client>=3.15.4"`.
@@ -369,7 +386,7 @@ MEMORY_INDEX="Autogpt" # name of the index to create for the application
 - set `MEMORY_BACKEND` in `.env` to `milvus` to enable Milvus as the backend.
 - optional
   - set `MILVUS_COLLECTION` in `.env` to change the Milvus collection name; `autogpt` is the default name.
-
+
 ## View Memory Usage

 1. View memory usage by using the `--debug` flag :)

@@ -377,8 +394,7 @@ MEMORY_INDEX="Autogpt" # name of the index to create for the application
 ## 🧠 Memory pre-seeding
-python autogpt/data_ingestion.py -h
-
+# python autogpt/data_ingestion.py -h
 usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]

 Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.
@@ -391,8 +407,7 @@ options:
   --overlap OVERLAP  The overlap size between chunks when ingesting files (default: 200)
   --max_length MAX_LENGTH  The max_length of each chunk when ingesting files (default: 4000)
-python autogpt/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
-
+# python autogpt/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
 This script, located at autogpt/data_ingestion.py, allows you to ingest files into memory and pre-seed it before running Auto-GPT.

 Memory pre-seeding is a technique that involves ingesting relevant documents or data into the AI's memory so that it can use this information to generate more informed and accurate responses.
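Putting the cache-type settings above together, a `.env` fragment for the Milvus backend might look like the sketch below. The address and collection name are placeholders for illustration, not required values; point `MILVUS_ADDR` at your own instance.

```shell
# Hypothetical .env fragment selecting Milvus as the memory backend
MEMORY_BACKEND=milvus
MILVUS_ADDR=localhost:19530   # host:port of your Milvus instance (placeholder)
MILVUS_COLLECTION=autogpt     # optional; "autogpt" is the default
```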
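The `--overlap` and `--max_length` flags above control how each document is split before ingestion. As a rough illustration of that chunking scheme, here is a minimal sketch; `chunk_text` is a hypothetical helper, not Auto-GPT's actual implementation:

```python
def chunk_text(text: str, max_length: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most max_length characters,
    where consecutive chunks share `overlap` characters of context."""
    if max_length <= overlap:
        raise ValueError("max_length must be greater than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_length])
        if start + max_length >= len(text):
            break
        # Step forward, keeping `overlap` characters of the previous chunk
        start += max_length - overlap
    return chunks

# A 9000-character document with the defaults yields three chunks:
# [0:4000], [3800:7800], [7600:9000]
print(len(chunk_text("a" * 9000)))  # → 3
```

The overlap means a sentence falling on a chunk boundary still appears intact in at least one chunk, which is why the defaults trade a little redundancy for better retrieval.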