
reply gAI

Reply gAI is an AI clone for any X profile. It automatically collects a user's Tweets, stores them in long-term memory, and uses Retrieval-Augmented Generation (RAG) to generate responses that match their unique writing style and viewpoints.


🚀 Quickstart

One option for accessing Twitter/X data is the Arcade API toolkit.

Set API keys for the LLM of choice (Anthropic API) along with the Arcade API:

export ANTHROPIC_API_KEY=<your_anthropic_api_key>
export ARCADE_API_KEY=<your_arcade_api_key>
export ARCADE_USER_ID=<your_arcade_user_id>

Clone the repository and launch the assistant with the LangGraph server:

curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/langchain-ai/reply_gAI.git
cd reply_gAI
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev

Once the server starts, LangGraph Studio will open in your browser.

In the configuration tab, add the Twitter/X handle of any user:


Then, just interact with a chatbot persona for that user:


How it works

Reply gAI uses LangGraph to create a workflow that mimics a Twitter user's writing style:

  1. Tweet Collection

    • Uses the Arcade API X Toolkit to fetch Tweets from the past 7 days for a specified Twitter user
    • Tweets are stored in the LangGraph Server's memory store
    • The system automatically refreshes tweets if they're older than the configured age limit
  2. Conversation Flow

    • The workflow is managed by a state graph with two main nodes:
      • get_tweets: Fetches and stores recent tweets
      • chat: Generates responses using Claude 3.5 Sonnet
  3. Response Generation

    • Uses RAG to condition responses on the user's Tweets stored in memory
    • Currently, it loads all tweets into memory, but semantic search from the LangGraph Server's memory store is also supported
    • The LLM analyzes the collected tweets to understand the user's writing style
    • It generates contextually appropriate responses that match the personality and tone of the target Twitter user
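The response-generation step can be sketched roughly as follows. This is an illustrative outline only, not the repository's actual code; the function name and prompt wording are assumptions:

```python
# Illustrative sketch of the RAG-style persona prompt construction.
# The function name and prompt wording are assumptions, not the repo's code.

def build_persona_prompt(handle: str, tweets: list[str]) -> str:
    """Format stored tweets into a system prompt that asks the LLM
    to imitate the user's writing style and viewpoints."""
    tweet_block = "\n".join(f"- {t}" for t in tweets)
    return (
        f"You are role-playing as the Twitter/X user @{handle}.\n"
        f"Study the tweets below and answer in the same style, tone, "
        f"and viewpoint:\n\n{tweet_block}"
    )

prompt = build_persona_prompt("example_user", [
    "Shipping beats perfection.",
    "Write code, delete code, repeat.",
])
print(prompt.splitlines()[0])  # → You are role-playing as the Twitter/X user @example_user.
```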

The system automatically determines whether to fetch new tweets or use existing ones based on their age, ensuring responses are generated using recent and relevant data.

Long-term memory

In the quickstart, we use a locally running LangGraph server.

This uses the langgraph dev command, which launches the server in development mode.

Tweets are saved to the LangGraph store, which is persisted in the .langgraph_api/ folder in this directory.

You can visualize the Tweets saved for each user directly in the Store with LangGraph Studio.


Deployment

If you want to launch the server in a mode suitable for production, you can consider LangGraph Cloud:

uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph up

See Module 6 of LangChain Academy for a detailed walkthrough of deployment options with LangGraph.
