Reply gAI is an AI clone for any X profile. It automatically collects a user's Tweets, stores them in long-term memory, and uses Retrieval-Augmented Generation (RAG) to generate responses that match their unique writing style and viewpoints.
One option for accessing Twitter/X data is the Arcade API toolkit.
Set API keys for the LLM of choice (here, the Anthropic API) along with the Arcade API:

```shell
export ANTHROPIC_API_KEY=<your_anthropic_api_key>
export ARCADE_API_KEY=<your_arcade_api_key>
export ARCADE_USER_ID=<your_arcade_user_id>
```
Install `uv`, clone the repository, and launch the assistant with the LangGraph server:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/langchain-ai/reply_gAI.git
cd reply_gAI
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev
```
You should see the following output and Studio will open in your browser:
- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs
In the configuration tab, add the Twitter/X handle of any user:
Then, just interact with a chatbot persona for that user:
Reply gAI uses LangGraph to create a workflow that mimics a Twitter user's writing style:
- **Tweet Collection**
  - Uses the Arcade API X Toolkit to fetch Tweets from the past 7 days for a specified Twitter user
  - Tweets are stored in the LangGraph Server's memory store
  - The system automatically refreshes tweets if they're older than the configured age limit
- **Conversation Flow**
  - The workflow is managed by a state graph with two main nodes:
    - `get_tweets`: Fetches and stores recent tweets
    - `chat`: Generates responses using Claude 3.5 Sonnet
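To make the two-node flow concrete, here is a hedged sketch of `get_tweets` and `chat` as plain functions passing a shared state dict; the real project wires these as nodes in a LangGraph state graph, calls the Arcade API in the first node, and prompts Claude in the second. All bodies here are stubs for illustration.

```python
def get_tweets(state: dict) -> dict:
    # Stub: the real node fetches tweets via the Arcade API X Toolkit
    # and writes them to the memory store.
    state["tweets"] = ["Shipping a new LangGraph release today!"]
    return state

def chat(state: dict) -> dict:
    # Stub: the real node prompts Claude 3.5 Sonnet, conditioning on
    # the stored tweets to match the user's style.
    state["reply"] = f"(styled on {len(state['tweets'])} tweet(s)) Hello!"
    return state

# State flows from one node to the next, just as edges carry it in the graph.
state = chat(get_tweets({"messages": []}))
print(state["reply"])
```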
- **Response Generation**
  - Uses RAG to condition responses on the user's Tweets stored in memory
  - Currently, it loads all tweets into memory, but semantic search from the LangGraph Server's memory store is also supported
  - The LLM analyzes the collected tweets to understand the user's writing style
  - It generates contextually appropriate responses that match the personality and tone of the target Twitter user
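One way this conditioning can work is to fold the retrieved tweets into the system prompt. The helper and prompt wording below are assumptions for illustration, not the project's actual template:

```python
def build_persona_prompt(handle: str, tweets: list[str]) -> str:
    # Hypothetical helper: folds retrieved tweets into a persona prompt.
    tweet_block = "\n".join(f"- {t}" for t in tweets)
    return (
        f"You are impersonating the X user @{handle}.\n"
        f"Match the style, tone, and viewpoints of their recent tweets:\n"
        f"{tweet_block}\n"
        "Reply in their voice."
    )

prompt = build_persona_prompt(
    "hwchase17",  # example handle
    ["Agents are just loops.", "RAG still matters."],
)
print(prompt)
```

With semantic search, the same helper would receive only the top-k tweets most relevant to the incoming message instead of the full set.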
The system automatically determines whether to fetch new tweets or use existing ones based on their age, ensuring responses are generated using recent and relevant data.
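The age check itself reduces to comparing the stored batch's fetch time against a configured limit. A minimal sketch, assuming a 7-day limit to match the collection window (the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def tweets_are_stale(fetched_at: datetime,
                     max_age: timedelta = timedelta(days=7)) -> bool:
    # Refetch when the stored batch is older than the configured limit.
    return datetime.now(timezone.utc) - fetched_at > max_age

fresh = datetime.now(timezone.utc) - timedelta(days=1)
old = datetime.now(timezone.utc) - timedelta(days=10)

print(tweets_are_stale(fresh))  # False -> reuse stored tweets
print(tweets_are_stale(old))    # True  -> run get_tweets again
```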
In the quickstart, we use a locally running LangGraph server. This uses the `langgraph dev` command, which launches the server in development mode. Tweets are saved to the LangGraph store, which uses Postgres for persistence and is saved in the `.langgraph_api/` folder in this directory.
You can visualize the Tweets saved for each user in the Store directly with LangGraph Studio:
If you want to launch the server in a mode suitable for production, you can consider LangGraph Cloud:
- Add `LANGSMITH_API_KEY` to your `.env` file.
- Ensure Docker is running on your machine.
- Run with `langgraph up`:

```shell
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph up
```
See Module 6 of LangChain Academy for a detailed walkthrough of deployment options with LangGraph.