Build your conversation-based search with LLMs, supporting Deep Research and DeepSeek R1.
- 🔍 New! Supports "Deep Research", similar to OpenAI/Gemini/Perplexity.
- Built-in LLM support: OpenAI, Google, Lepton, DeepSeek (R1), SiliconFlow, AliYun, Baidu, ChatGLM, Moonshot, Tencent, Yi and more.
- Supports Ollama and LM Studio.
- Built-in search engine support: Bing, Google, Tavily, SearXNG.
- Customizable, pretty UI.
- Supports light & dark mode and mobile.
- Supports i18n.
- Supports follow-up Q&A with context.
- Supports cached results and forced reload.
- Supports image search.
Support "Deep Research" like OpenAI/Gemini/Perplexity, through search engine, web scraping and LLM to iterate on any topic or question, and generate a comprehensive report. Project reference deep-research, thanks to author dzhng.
Demo video: deepresearch.mp4
Note:
- Warning: it will consume a lot of tokens.
- Requires an LLM that supports Function Calling.
- Uses JINA.ai to extract web page content (no KEY needs to be configured, but requests are limited to 20 RPM).
Workflow:
1. Analyze the user's query.
2. Generate follow-up questions to refine the research direction.
3. Generate and execute search queries.
4. Process and analyze the search results.
5. Recursively explore deeper based on the results of step 4.
6. Generate a comprehensive report.

More details can be found in the DeepResearch discussion.
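For intuition, the iterative loop above can be sketched roughly as follows. This is a minimal sketch, not the project's actual code: the `ResearchTools` interface and all of its methods are hypothetical placeholders for the real search and LLM calls.

```ts
// Minimal sketch of the Deep Research loop (illustrative, not this project's code).
interface ResearchTools {
  refine(context: string): Promise<string[]>;                  // follow-up questions (steps 1-2, 5)
  search(query: string): Promise<string[]>;                    // search + page extraction (step 3)
  summarize(query: string, pages: string[]): Promise<string>;  // analysis of results (step 4)
  report(topic: string, findings: string[]): Promise<string>;  // final report (step 6)
}

async function deepResearch(topic: string, tools: ResearchTools, depth = 2): Promise<string> {
  const findings: string[] = [];
  let questions = await tools.refine(topic);                   // steps 1-2

  for (let level = 0; level < depth; level++) {
    for (const q of questions) {
      const pages = await tools.search(q);                     // step 3
      findings.push(await tools.summarize(q, pages));          // step 4
    }
    // step 5: go one level deeper based on what has been learned so far
    questions = await tools.refine(findings.join("\n"));
  }
  return tools.report(topic, findings);                        // step 6
}
```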
A prebuilt Docker image is available:

```shell
docker pull docker.cnb.cool/aigc/aisearch
```
1. Get the code.

```shell
git clone https://github.com/yokingma/search_with_ai.git
cd search_with_ai
```
2. Edit the .env.docker file in the deploy directory. You must set at least one KEY. After modifying .env.docker, restart the Docker container to apply the changes.

```
...
# OpenAI's key
OPENAI_KEY=#your key
# SearXNG hostname.
SEARXNG_HOSTNAME=http://searxng:8080
```
3. Edit the model.json file. [Optional]

```json
{
  "provider": "openai",
  "type": "openai",
  "baseURL": "https://api.openai.com/v1",
  "models": ["o1-preview", "o1-mini", "gpt-4o", "gpt-4o-mini"]
}
```
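If you run a local OpenAI-compatible endpoint such as Ollama, an additional entry along the same lines should work. The fields below are an assumption modelled on the example above (Ollama exposes an OpenAI-compatible API under /v1), so check deploy/model.json and its existing entries for the exact schema:

```json
{
  "provider": "ollama",
  "type": "openai",
  "baseURL": "http://localhost:11434/v1",
  "models": ["llama3.1", "qwen2.5"]
}
```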
4. Run with Docker Compose.

```shell
docker compose up -d
```

Then visit http://localhost:3000.
5. Update
- Delete the old images.
- Run `docker compose down`.
- Run `docker compose up -d`.
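For example (one possible update sequence; `docker compose pull` is just one way to replace the old images):

```shell
docker compose down
docker compose pull   # or remove the old images manually
docker compose up -d
```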
Built-in support for search engines: SearXNG, Bing, Google, Tavily, etc.
Install SearXNG with searxng-docker
Make sure to activate the JSON format to use the API. This can be done by adding the following to the settings.yml file:

```yaml
search:
  formats:
    - html
    - json
```

And set limiter to false:

```yaml
server:
  limiter: false # default is true
```

Then set the SearXNG host in apps/server/.env:

```
# SEARXNG_HOSTNAME=<host>
```
To use the Bing Web Search API, please visit this link to obtain your Bing subscription key.
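For reference, a Bing Web Search v7 request made with that subscription key looks roughly like the sketch below (a generic example, not this project's actual backend code):

```ts
// Generic Bing Web Search API v7 call (illustrative only).
async function bingSearch(query: string, subscriptionKey: string) {
  const url = new URL("https://api.bing.microsoft.com/v7.0/search");
  url.searchParams.set("q", query);
  const res = await fetch(url, {
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
  });
  if (!res.ok) throw new Error(`Bing search failed: ${res.status}`);
  const data: any = await res.json();
  return data.webPages?.value ?? []; // [{ name, url, snippet }, ...]
}
```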
You have three options for Google Search: the SearchApi Google Search API, the Serper Google Search API, or Google's Programmable Search Engine.
Tavily is a search engine optimized for LLMs.
The Jina Reader URL API supports full web content extraction and is used in DeepResearch mode. A JINA KEY is optional (without one, requests are limited to 20 RPM).

```
# JINA API KEY
JINA_KEY=#your key
```
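In practice, extraction through Jina Reader amounts to prefixing the target URL with `https://r.jina.ai/`. The sketch below calls the public Reader endpoint and is not this project's exact code; the Authorization header is only sent when a key is configured:

```ts
// Fetch readable page content through the Jina Reader endpoint.
// Without a key the public endpoint is rate-limited (around 20 RPM).
async function readPage(targetUrl: string, jinaKey?: string): Promise<string> {
  const res = await fetch(`https://r.jina.ai/${targetUrl}`, {
    headers: jinaKey ? { Authorization: `Bearer ${jinaKey}` } : {},
  });
  if (!res.ok) throw new Error(`Jina Reader failed: ${res.status}`);
  return res.text(); // page content as markdown/plain text
}
```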
- Node.js >= 20
- Turborepo
- PackageManager: [email protected]
- Directory Structure:

```
apps/
  | server   # backend
  | web      # frontend
deploy/
  | docker-compose.yaml   # docker deployment file
  | .env.docker           # backend configuration file
  | model.json            # backend model configuration file
...
```
- Development & Build In the root of the project:
turbo dev
# or
turbo build
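To work on a single app, Turborepo's `--filter` flag can target one workspace. The names below assume the workspace packages are named after the directories in apps/, so adjust them if the package names differ:

```shell
# run only the backend or only the frontend (assumed workspace names)
turbo dev --filter=server
turbo dev --filter=web
```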
- Update. In the root of the project:

```shell
git pull
```
This repository's source code is available under the MIT License.